A lot has been said recently about ‘post-truth’. If you’re at all interested in understanding our culture, the term seems to show up everywhere. The usual narrative runs like so:
Each of us lives in our own bubble. Increasingly, we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there. As a result, our individual abilities to separate accurate ideas from wrong-headed assertions are deteriorating. All we do is reject evidence that contradicts our favorite politician by declaring the source to be unreliable on the very grounds that it tells a different story than the one we’d like to be true. Consequently, we’re poorly informed and, more and more, unable to spot false claims for what they are.
For example, in The Death of Expertise, Tom Nichols worries that “the average American” has base knowledge so low it has crashed through the floor of “uninformed”, passed “misinformed” on the way down, and is now plummeting to “aggressively wrong”. And this is playing out against a backdrop in which people don’t just believe “dumb things”, but actively resist any new information that might threaten these beliefs. In True Enough, journalist Farhad Manjoo similarly laments that “people are more apt to be wrong about basic things than ever before.”
In this essay, I want to evaluate these claims.
I first explore the theory. Why, according to these authors, are we living in a “post-truth era”? Is it true that many of us have “aggressively wrong” beliefs? I then follow up with a deep dive into the data. How dangerous are the effects of selective exposure, filter bubbles, fake news, and so on? Are we losing touch with the truth?
Making sense of post-truth
Information — everything you know about the world — was once gathered and disseminated by a handful of trusted institutions (mainstream media and the academy). These days, however, whole segments of the population have dismissed all neutral authority on ‘truth’ and ‘facts’ as untrustworthy.
As a result, citizens of diverse political persuasions can increasingly live in their own bubbles, consuming only views similar to their own and rationalizing away falsehoods by simply dismissing the sources of contradicting data. “Each group lives in its own echo chamber,” author Eli Pariser thus submits in The Filter Bubble, “which it believes is the ‘true’ reality.”
The fear is that this tendency goes way beyond innocent cherry-picking: people will genuinely forget the difference between ‘opinions’ and ‘facts’. Locked in a self-reinforcing bubble, we believe what we want to believe and maintain that whatever fantasies we’ve concocted have as much right to be called ‘true’ as anything else.
By insisting that all the fact-checkers and hypothesis testers are phonies, these people discredit the very possibility of a socially validated reality, and open the door to tribal knowledge, personal knowledge, partisan knowledge, and other manifestations of epistemic anarchy.
The proof is in the pudding
The concern is that all these interlocking developments have combined to create a maelstrom of unreason that’s thwarting rational debate and spreading an epidemic of misinformation.
Now, that sounds like an intelligent — and fearsome! — story. But, so far, it’s purely theoretical. A reality check is indispensable.
Anti-intellectualism, some people’s rejection of knowledge-gathering transpartisan authorities and shared objective standards for truth, lying politicians, fake news, increasing possibilities for technological control — these are all real things. But how impactful are they? Are we as poorly informed and susceptible to manipulation as Pariser and others presume we are? Do fact-free spin and propaganda really work so well, on so many people?
And there were lies
One inspiration for coining the term “post-truth era” has been the observation that some politicians seem not to care so much for the truth. Yet that’s hardly enough to justify labeling ours a post-truth era. Politicians have always lied.
For instance, the expression “credibility gap” had its heyday during the administration of Lyndon Johnson in the 1960s. As popular science author Steven Pinker recalls:
The bending or inverting of truth by people in power has long been consequential, leading, for example, to the Spanish-American war, the First World War, the Vietnam War, and the Iraq War, right up to the near miss in the Persian Gulf in 2019. — Steven Pinker, Why We Are Not Living in a Post‑Truth Era
Gesturing at lying politicians, then, doesn’t support post-truth diagnoses. Truth and power have been in a troubled relationship since long before 2019.
Another factor contributing to the post-truth impression has been, as we saw, the recent prominence of fake news. However, this too is hardly a new phenomenon: spreading false information is a practice as old as the hills. The title of the forthcoming volume Fake News Nation: The Long History of Lies and Misinterpretations in America is self-explanatory, though the long history is by no means confined to America. Pinker once more:
The Protocols of the Elders of Zion, the hoaxed proceedings of a secret meeting of Jews plotting global domination, was advanced as fact by a number of prominent people in subsequent decades, including the industrialist Henry Ford. Countless pogroms, lynchings, and deadly ethnic riots have been sparked by rumors of the alleged perfidy of some minority group.
If today is the era of post-truth, when, exactly, was the halcyon age of truth? Is there such a thing as unbiased reporting? Has there ever been?
The truth is, truth has never been high on the agenda of homo sapiens.
You might object that all this leaves out the most crucial ingredient of post-truth anxieties: the increased role of online environments. Fake news, perhaps, spreads faster and infiltrates our information ecology more deeply than ever before. And much worry has centered on filter bubbles and digital manipulation, two distinctly digital phenomena.
Let’s take these points in turn, starting with the former.
No need to worry about filter bubbles
In his excellent Aeon Magazine piece, philosopher C Thi Nguyen defines the bubble after which Eli Pariser named his book as “an informational network from which relevant voices have been excluded by omission.” When we take networks built for social reasons and start using them as our information feeds, the initial thought goes, we miss out on contrary standpoints and run into exaggerated degrees of agreement.
The deeper fear is that the algorithms of Facebook and Google provide us with personalized search results and news feeds, so that we are less often confronted with worldviews, opinions, or facts that do not fit what we already believe. This is the story Pariser puts forward. It has been echoed by MIT Media Lab founder Nicholas Negroponte (Being Digital) and legal scholar Cass Sunstein (Republic.com 2.0), who warned about the Web turning into everybody’s narcissistic “Daily Me” feed.
Is this long-standing concern becoming an actual problem?
Eli Pariser’s favorite factoid in support of his hypothesis, with which he starts his book, is that Google now personalizes search results according to 57 different signals. He had some friends Google some terms, and they got different results. Since it’s just one anecdote, this is hardly compelling evidence.
Yet Pariser believes that Google’s 57 signals amount to ideological frames. Independent analysts, however, don’t see the problematic import Pariser attaches to this fact. Here’s Jonathan Zittrain, a professor of law and computer science at Harvard: “In my experience, the effects of search personalization have been light.” There are also anecdotes that cancel out the one Pariser reports. Political journalist Jacob Weisberg, for instance, ran an experiment in which he asked politically diverse Twitter followers to search for ideologically loaded terms and got almost identical results.
It seems the Internet isn’t fostering mental rabbit warrens just yet.
Pariser is also mistaken, it seems to me, in assuming that personalization narrows our perspectives rather than broadening them. Through most of history, information filters have been imposed involuntarily. By contrast, wouldn’t it make sense that our growing access to people and data increases exposure to viewpoints deviating from our familiar diet of opinions? Just ask your grandmother how many non-Christians or non-Muslims or non-Hindus — you get the point — she had talked to when she was your age.
Just look at Twitter, the alleged symbol of divide: it confronts you with people who don’t think like you on a daily basis.
To support this picture, studies have shown that conservative and liberal bloggers link to each other to a surprising degree. And new data does, in fact, seem to show that people on Facebook see posts from the other side, and that people regularly visit websites with the opposite political affiliation.
If that’s right, then filter bubbles might not be such a serious threat. The pattern of selective exposure that has Pariser and co worried is heavily concentrated among a small subset of people.
Digital manipulation is ineffective
Likewise, the belief that fake news is displacing truth itself needs to be tested against the evidence.
What, to begin with, is fake news? It is not merely false information conveyed by reportage. As the word ‘fake’ suggests, fake news requires intentional deception; honest reporting errors are not fake news. Fake news is when committed partisans try to erode their opponents’ support by tricking persuadable voters. Or when, for example, shady groups with links to Russian military intelligence deliberately manufacture (online) untruths. For these creators, fake news needs to travel widely not only to generate clicks, but also to change minds.
Let’s follow the by-now trusted recipe and ask for proof: how susceptible are we? Is digital deception making us believe false things?
I believe the answer is “No”, for three reasons.
The first piece of data: in their analysis of fake news in the 2016 American presidential election, three social scientists found that it took up a minuscule proportion of online communications (far less than 1 percent) and was mainly directed at those with extreme views, willing to accept anything as long as it’s good for their side. It takes a particular psyche to be tricked into accepting fake news as true. The vast majority of people didn’t fall for these hoaxes.
And while the risk of information polarization of course remains, the percentage of the public saying that media outlets try to report news without bias jumped from 23% to 43% between 2016 and 2017 (according to polling by the Freedom Forum Institute).
Finally, more and more studies have been published by political scientists who experimented with interventions to change people’s politics. The results are consistent: digital manipulation doesn’t work. Want to use advertising to influence voters during the American elections? “We argue that the best estimate of the effects of campaign contact and advertising on Americans’ candidate choices in general elections is zero.” Thinking of spreading information about election candidates? “We find no evidence overall that typical, nonpartisan voter information campaigns shape voter behavior.” And what about information about corrupt politicians? “The aggregate treatment effect of corruption information on vote share in field experiments is approximately zero.”
I think there’s only one conclusion. For fake news to get to you, you have to already be marinated in a partisan fever swamp. For you to ‘mistake’ dubious information for the truth, it seems you have to do so willingly. When you consume fake news as news, there’s no one doing a number on you except yourself.
The combination of rising partisanship, ostensible disregard for the facts, and technological advancements has created fears of widespread “echo chambers” and “filter bubbles”. Their presence, the fear goes, would breed growing ignorance and stupidity.
I’ve argued these warnings are overstated.
For one, behavioral data indicate that only a tiny subset of people have profoundly skewed media consumption patterns. Rather than turning us into solipsistic twits, digitalization breaks down barriers between groups who traditionally subscribe to clashing worldviews.
Moreover, there is no reason to think the digitalization of information undermines the ability of sane folks to differentiate what’s likely sound from what’s almost certainly not. Concerns about digitally manufactured lies getting the better of us are understandable, but they seem to miss the mark.
Not to worry: you’re neither in a filter bubble, nor being tricked into believing false things (unless you want to be). Perhaps filter bubbles sound bulletproof in theory. In practice, the bullets pop them all too easily.