Reasons Why Post-truth Claims Are Plausible, and Evidence That They Are Not True

Part I.

“The world that we have to deal with is out of reach, out of sight, out of mind. It has to be explored, reported, and imagined. [The media dominate this creation of pictures in our heads, because] they are the principal connection between events in the world and the images in the minds of the public.”

–         Walter Lippmann, The World Outside and the Pictures in Our Heads (1922)

Your chemistry teacher explaining that water consists of two hydrogen atoms and one oxygen atom. Your national health agency calling you with the announcement that there is a virus in your body. NASA claiming that people walked on the moon in 1969.

You have neither seen nor do you possess the evidence for the truth of those claims, but you believe them anyway. Because you trust your teacher, the scientist, the institution.

Which makes them pieces of indirect information – knowledge (a) gathered by others that we adopt as our own (b) because they share it with us. Just like every statistic you come across, every study you learn about, everything you read in a textbook, everything you learn in school, everything you learn from your parents, every book you read, everything you see or read in the news, everything you read on social media, everything you hear a politician or celebrity say, every assumption of conventional wisdom.

In the same way that, for example, those virus particles in your own body must be (a) detected by someone else (b) who tells you about their presence, most world events and institutional actions are barely visible to the average citizen. For these things, too, we get our information through a mediating party that we will have to trust.

Enter: the media.

After all, people do not come into contact with all kinds of societal issues themselves, but inform themselves through media channels. Almost everything you read, see or hear about the outside world comes to you through such a trust channel. That there were protests in Lebanon a while back has probably only reached you because the news mentioned it. And so on.

In short, individuals do not have automatic access to knowledge, but so-called gatekeepers (such as journalism) make this connection possible by (a) recording and (b) communicating events. Seen from this perspective, it is not surprising that more media coverage makes for a more informed electorate and that citizens who trust the media more are generally better informed.

The former gatekeepers

“[Before the internet, the domains of] information and communication saw a caste of mediators, under the motto “All the news that’s fit to print,” arbitrarily control the content available to ordinary persons.”

–         Martin Gurri, The Revolt of The Public and the Crisis of Authority in the New Millennium (2018)

Indirect knowledge is useful, but also creates dependence. As we only know most things about the world when gatekeepers tell us about it, they have (let’s call it) worldview-producing power. To a large extent, their choices determine which information is made, how that information is distributed and what information we actually get to see.

Professor Robert Entman explains that the media affect what people think “by providing much of the information people think about and by shaping how they think about it.” They have a major influence on our worldview, because (1) they guide what subjects we think about in the first place and (2) they also shape how we think about them.

Moreover, managing this access to knowledge and information is always a process of selection. The news is only twenty minutes long (where I’m from) and the newspaper still has to fit through the letterbox. Many things we see, we only see through the media – but that indirect access also provides limited visibility.

Communication, plus trust, is a magical intellectual corner-cutting tool. But when applied wrongly, trust does the opposite. The reliability of much of what I believe therefore depends in large part on how well the gatekeepers do their job.

That doesn’t have to be a bad thing, by the way. I don’t need to know most of the things that are going on. It is useful when someone makes a selection. As long as this selection is representative and gives me a true worldview.

In the same way, it would be rather inconvenient if the gatekeepers with worldview-producing power were guided by criteria other than accuracy, such that the information passing through the gate came out biased or untrue.

And indeed, despite the mentioned positive relationship between media coverage and informedness, we should not pretend that the media is a purely neutral conduit, as it were, of the underlying reality.

That’s where the problems start.

From a digitalized information sphere to a post-truth society

“The key risk of our post-truth era is not that facts really will disappear and never be heard from again, [but] that the particular, old-fashioned mythology around truth […] will turn out to have outlived its relevancy and its appeal, and we will have nothing to put in its place.”

–         Sophia Rosenfeld, Democracy and Truth: A Short History (2018)

Until recently, large media companies had a monopoly on both the camera and the printing press. They were the only ones with both the means to record events and the reach to serve a large paying audience with their reports. In other words, only they could (a) capture and (b) communicate information on a serious scale.

This is no longer the case.

Nowadays, anyone with a cell phone can (a) document facts, and anyone with an internet connection can (b) reach other people by uploading it, liking it, sharing it. Online, you don’t have to convince an editor to have a chance at getting an audience. Anyone can start (news) websites, maintain blogs or do live journalism via social media.

As a result, the gatekeepers of yesteryear no longer have a monopoly on information (a) production and (b) dissemination.

In this way, the internet makes traditional gatekeepers such as journalism superfluous. Information no longer needs to be mediated, but is to a much greater extent directly accessible to everyone.

Which is why it is mainly the traditional information monopolists who sometimes complain that we live in a post-truth society: their power to direct the attention of the public to certain aspects of reality and leave other facts underexposed is no longer absolute.

The information apocalypse and the democratization of knowledge

“[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market … That at least is the theory of our Constitution.”

–        Oliver Wendell Holmes, Abrams v. United States (250 U.S. 616 [1919])

Just as the printing press revolutionized the world because it caused an explosion in the amount of information produced and disseminated, we live in similarly revolutionary times today, largely driven by digital technology. The amount of information had always grown slowly and linearly, but has been increasing exponentially since the internet enabled anyone to produce it. In fact, more information was generated in 2001 than in all the previous existence of our species on earth. The amount present was then doubled in 2002. The volume of information is now doubling every year.
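
To see how different a doubling regime is from linear growth, here is a minimal sketch. The units and starting volume are made up purely for illustration; only the yearly-doubling assumption comes from the text:

```python
def produced_in(year, start=1.0):
    """Volume of information produced in a given year, assuming yearly
    doubling (illustrative units only: v_n = start * 2**n)."""
    return start * 2 ** year

def total_before(year, start=1.0):
    """Cumulative volume produced in all years strictly before `year`."""
    return sum(produced_in(n, start) for n in range(year))

# Under doubling, any single year's output outweighs the entire past,
# because sum(2**0 .. 2**(y-1)) = 2**y - 1 < 2**y.
for y in range(1, 10):
    assert produced_in(y) > total_before(y)
```

This is exactly the shape of the claim above: in such a regime, one year (2001, 2002, …) can plausibly produce more than all previous years combined.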

So it seems difficult to maintain that the advent of the internet and social media has not brought about fundamental changes in how we interact with knowledge. Accordingly, what’s at stake in concepts such as post-truth is who, in this revolution, will be the new gatekeepers of the facts and how they will mediate our access to information.

Connected to this shift, I recently read a report that made the following claims:

In the physical world, papers lie side by side on the newsstand, and consumers can estimate their reliability based on the reputation of a title or brand. But in the digital world, messages are distributed ‘loosely’ through search engines and platforms, and are displayed along with ads, clickbait, entertainment and disinformation. This (1) makes reliability more difficult for readers to assess. Furthermore, (2) the incentive for publishers to invest in their reputation through quality is disappearing, because potential pay-offs do not end up with them but with search engines and platforms [as Google and Facebook control over 50% of the online advertising market in the US].

As we’ll see later, I think these two conclusions – about (1) hard reliability assessments and (2) disappearing incentives to invest in quality – are insufficiently backed by evidence.

Nevertheless, the idea that digitization leads to (1) more mistakes in gauging the reliability of sources and (2) less (investment in) high-quality information is not illogical, and quite widespread. The claim that digitization of our information landscape leads to its pollution, as clickbait and fake news run wild and we believe more and more untrue things, does in fact make sense for a number of reasons.

Yet it’s not actually true.

This essay presents and dissolves that paradox.

Part II.

The attention economy

“As more readers move towards online social networks, and as publishers desperately seek scale to bring in revenue, many [newspapers] have deplored a race towards repetitive, trivial journalism, so noisy that it drowns out more considered work.”

–         Ravi Somaiya (2015), journalist at The New York Times

The fact that information is now much more directly accessible than in the past has disastrous consequences for the business model of the traditional gatekeepers – and therefore for their chances of survival.

Back in the day, it made sense to pay for the access to information that newspapers offered you. Today, however, you can also go to news websites that offer their reporting for free. That is why the paid printed circulation of most newspapers in most countries has halved since the beginning of this century.

So then how do they survive?

“The current online business model of media organizations is almost exclusively focused on advertising revenue,” we read in New challenges for journalism in the age of fake news.

Which in turn raises the question: how does one maximize those?

You need two things for that: get people’s attention and get them to share. Visitors should stay on the website for as long as possible and bring in other eyeballs. Some media even have click-targets for their employees, where journalists are paid based on how many people were enticed to click on their headlines.

The price of information is no longer money, but attention and personal data.

That said, it’s not exactly easy, nowadays, to get people to find their way to your article, because, in our information era, there’s more content out there than ever before.

So how do you ensure that people end up at your website, in the process ensuring your survival as an information-supplying company? Now that access to knowledge is mediated less by the traditional gatekeepers, who controls that gigantic flow of information?

Part III.

The supply of information: How digitization, in theory, leads to a polluted information landscape (i)

“An economy ruled by online advertising has produced its own theory of truth; truth is whatever produces the most eyeballs.”

–         Evgeny Morozov (2017)

According to Rob Wijnberg, philosopher and founder of The Correspondent, Google and Facebook have managed to place a “historically unparalleled claim on our information provision” here. Because, he reasons, almost all the information in our lives comes to us through their channels – the search engine, the social medium or one of their subsidiaries. As such, their algorithms are “the hidden source code of our picture of the world: what news we see, what search results we see, which sources are labelled as trustworthy or fake. In short, they largely determine what we believe, know and think.”

It is no longer the decisions of editors that determine what information is made, how that information is distributed and what we do or do not get to see, but the ‘decisions’ of a secret algorithm. And just as there are only so many pages in a newspaper and so many minutes in the news, what knowledge these algorithms make accessible is still a selection process. Not everything fits on the first page of Google or at the top of your timeline.

The design of algorithms can therefore – just like the format of the news and the newspaper – facilitate or hinder truth-finding. As before, this depends on whether the gatekeepers’ selection criteria produce a truthful picture of reality or whether, alternatively, their worldview-producing power is directed by other criteria than accuracy.

In addition to that similarity, there’s also a crucial difference between the old and the new gatekeepers. In the old days, media companies determined not only what information we had access to, but also what information was produced at all. Nowadays, those two positions of power are no longer so intertwined. This is because everyone now has access to all the knowledge available on the internet, regardless of the editorial choices or the political or commercial interests of a specific information-providing company. Unlike media companies, then, platform companies such as Google hardly produce information themselves, but only offer access to information created by others.

Nevertheless, the way in which, for example, Google makes this selection does still influence what information gets made in the first place. What the information-selecting algorithms prefer circles back to what information other parties will produce. After all, those parties have an incentive to produce content that scores high on the ranking algorithm. Otherwise, the article will not bring in ad revenue. So platform companies still have an (indirect) influence on the production of information.

The key question then is: what kind of content does the algorithm reward with visibility and which content is doomed to the anonymity of the exponentially growing mountain of information?

Algorithms for online content selection are designed to maximize circulation, as it makes sense from an advertising point of view to privilege messages that reach many users in a short time, thereby enabling them to reach yet more users. So they select on engagement – on linkability, shareability and popularity. A post on Facebook, for example, will end up in your timeline faster and more prominently if it is liked and shared a lot. Google isn’t about truth either: search “flat earth” and it dutifully returns photoshopped photos of a 150-foot ice wall that prevents us from sliding off the planet. A commercial logic here gives rise to a circular content economy, one without referent: content that gets shared a lot is rewarded with more visibility, thereby increasing its shareability.

No surprise, then, that traditional news media are guided in their choices and presentation by what is most liked, shared or retweeted.
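
The circular content economy described above (visibility breeds shares, shares breed visibility) can be sketched as a toy simulation. All numbers are invented and this is not any platform’s actual ranking code; it only illustrates how proportional visibility makes early popularity compound, with no reference to quality or truth:

```python
import random

def simulate_feed(initial_shares, rounds=2000, seed=42):
    """Toy engagement ranking: each round, one item is shown with
    probability proportional to its current share count, and the shown
    item gains a share. Content quality never enters the loop; only
    past engagement does."""
    rng = random.Random(seed)
    shares = list(initial_shares)
    for _ in range(rounds):
        shown = rng.choices(range(len(shares)), weights=shares)[0]
        shares[shown] += 1
    return shares

# Two items, identical in every respect except a small head start in
# shares. Under proportional visibility, that head start tends to
# snowball into a much larger final gap.
final = simulate_feed([10, 11])
print(final)
```

The point of the sketch is the absence of any truth term in the loop: whatever got shared early gets shown more, and therefore shared more, regardless of what it says.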

The supply of information: How digitization, in theory, leads to a polluted information landscape (ii)

“[…] trying to attack the totality of possible eyeballs on the Internet, [we] lost the things that make publications great.”

–         Joshua Topolsky (2015), founder of The Verge

So: online platforms reward messages that spread instantly and widely with even more visibility. What does that mean, concretely?

The reason people link the digitization of the information landscape to its pollution is that they fear the algorithms prefer information of low quality. Which, after enough selection pressure, would lead to a situation in which only such articles populate our information landscape, as the companies leaning hardest into that strategy will have bankrupted their information-supplying competitors.

As tabloid newspapers invested in maximizing advertising revenue also found out in previous decades, sensational rather than factual content turns out to satisfy this criterion of maximal shareability best. Philosopher Noortje Marres, for example, even claims that all major platforms have been shown to prefer scandalous and extremist material over balanced links. That would not be good news for the informational health of our society.

It gets worse. A study comparing 126,000 messages showed that false messages spread faster and further on Twitter: it took the “truth” about six times as long as “falsehood” to reach 1,500 people. And on Facebook in the three months leading up to the 2016 election, Buzzfeed found, the 20 “best performing” fake stories generated more likes, shares and comments than the 20 most popular true news articles.

No wonder, then, that the report was talking about the disappearance of the incentives to invest in quality.

All things considered, the situation appears to be as follows. There is more information than ever. Instead of editors, algorithms now determine which info we see, and indirectly also which is made. They reward articles that get a lot of clicks and shares with even more visibility, so that they are shared even more, and so on. In this vicious circle, truth and quality are sidelined: articles that are made with the aim of generating engagement do not have to be correct or good to achieve that goal. And indeed: low-quality (mis)information does actually score higher on this criterion.

When it comes to the supply side of the information market, then, it seems the incentives are such that we should expect the companies producing such content to survive while the rest dies out.

With all its drawbacks. If our perceptions of reality are increasingly informed by media with other-than-truth motivations, we’ll increasingly lose our handle on the truth. And knowledge is power, wrote Francis Bacon, so when people are stripped of knowledge, they are stripped of power, too. And when ordinary citizens don’t have the power, that means someone else does. So without reliable information the citizen is powerless and democracy is eroded.

Accordingly, since Brexit and the election of Donald Trump, there seems to be no end to the stream of publications documenting how entire societies threaten to collapse under the pressure of the damaging effects of social media, fake news, alternative facts, filter bubbles, micro-targeted (political) advertisements and other post-truth symptoms. Many American media have already concluded that facts “have died” and that it is time to “give up on facts”.

This is the other meaning of that unclear term ‘post-truth’. We, it is said, live in a time in which ideology and emotion are more important than facts, and in which the media are full of fake news. According to this reasoning, the current information landscape leaves no room whatsoever for truth-finding: we cannot inform ourselves properly, we mainly receive selective information that is also of poor quality, and we are easily tricked into believing something false.

The demand for information: easier to manipulate than ever?

“There are only two industries that call their customers “users”: illegal drugs and software.”

–         Edward Tufte in The Social Dilemma (2020)

On the demand side of the information market, to complete the disaster, there are reasons to think that the individual information consumer is fighting a losing battle. After all, if an article doesn’t have to be true, but just needs to draw attention, the writer is at liberty to hijack your brain by playing on human biases without worrying about the truthfulness of her words.

So it’s no coincidence that viral messages often share the same characteristics: they offer a gripping story, have a powerful visual component and elicit an emotional response.

To give one example, articles that provoke anger and indignation diffuse significantly farther, faster, deeper, and more broadly than other content on social media. An analysis of 7,000 New York Times articles found that the articles that “triggered” readers the most – and nothing triggers like anger – brought in the most revenue. Anger, secondly, makes people more prone to believe misinformation. So in which direction do YouTube’s algorithms – designed to optimize your viewing time and sharing behavior – push you? Not, of course, that of balance. The platform serves you increasingly extreme videos. Outrage becomes a business model: algorithms designed to generate as much money as possible for platforms and their advertisers end up driving the spread of misinformation.

Also, through algorithmic personalization, websites can theoretically provide you with exactly the articles that make you click. Unlike offline news, online news allows everyone to see different search results and different front pages, designed precisely to hijack your brain.

In short, as Bas Heijne writes in an essay with the telling title We are easier to manipulate than ever: “New technology allows you to penetrate someone’s head faster and deeper – and new knowledge about how our brain works enables you to better understand the human mind and manipulate a person.”

Indeed, diagnoses in which websites pump out loads of misinformation and clickbait that we, manipulated or just naive, fall for, are not uncommon in the post-truth literature. It is, for instance, implicit in the much-heard narrative in which misinformation was responsible for Donald Trump’s election and Brexit. For example, The Independent claimed that Fake News Handed Brexiteers the Referendum and the Washington Post declared that Fake News Might Have Won Donald Trump The Election.

The supposed causal link between fake news and voting behavior can of course only be there if we actually believe fake news and then base our votes on it. So this narrative portrays the majority of voters as passive, gullible and easily manipulated beings.

Part IV.

Where is that misinformation?

“Such arguments persuaded only while the institutions held a monopoly on the means of information and communication: in other words, only so long as they went unquestioned. Today, of course, the public always questions, and will usually find the answer in the information sphere.”

–         Martin Gurri, The Revolt of The Public and the Crisis of Authority in the New Millennium (2018)

That concludes my exposition of why post-truth claims are somewhat plausible. I will now show that these analyses about both the supply of and demand for information suffer from inadequacies.

To start, take that statistic about how sensational, angry, and untrue content is doing so ‘well’, since (some of those) pieces diffuse so fast and so far on social media. Seems shocking, but it’s really only half the picture. While (some) individual fake news articles may be able to get further than (some) individual true pieces, misinformation constitutes only a minuscule part of the information landscape. For example, only 6% of the average American’s news diet consists of articles from unreliable websites, and fake news articles make up less than 1% of the content they see online. Similarly, an analysis of the news circulating on Twitter in Europe in April 2019 found that only 4% was misinformation. And so on.

How can we reconcile this with the facts about the faster spread of those fake news stories? Well, individual ‘best performing’ bits of disinformation may outperform the ‘best performing’ truths. But there just aren’t that many of those articles at the scale of the information ecosystem. So while those few links are pretty fast swimmers, at the end of the day they reach relatively few people, because they are crowded out by the vastly greater amount of proper content.
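
The crowding-out point is simple arithmetic. With hypothetical numbers (only the under-1% prevalence figure and the 6x speed and 1,500-person reach figures echo the studies cited above; the item counts are invented for illustration), even a large per-item speed advantage loses to sheer volume:

```python
# Hypothetical illustration: 1% of circulating items are fake, and each
# fake item reaches six times as many people as an average true item.
n_fake, n_true = 10, 990
reach_fake_item = 6 * 1500   # a fast swimmer
reach_true_item = 1500

total_fake = n_fake * reach_fake_item   # 90,000 exposures
total_true = n_true * reach_true_item   # 1,485,000 exposures

fake_share = total_fake / (total_fake + total_true)
print(f"{fake_share:.1%}")  # prints 5.7%
```

Despite spreading six times faster per item, fake content ends up as a small slice of total exposure, which is exactly the reconciliation the paragraph above describes.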

The prediction about the deterioration of the supply side of the information market is therefore – at least for the time being – falsified. That’s an argument against the outlined theory that makes that prediction about the dilution of our informational landscape.

Where is that clickbait?

“I understand how market forces work in the news, but journalists have always been the people pushing back against them.”

–         Quote from American T.V. series The Newsroom

All in all, then, disinformation is only a very small part of the information landscape. Even though the ‘perverse incentives’ for media companies seem to point in the other direction. One explanation for that, which runs against the theory of how digitization of the information landscape leads to its pollution, is that in practice media companies do not seem to focus primarily on eyeballs.

Take this paper by Ananya Sen and Pinar Yildrim, who (at an English-language newspaper in India) investigated how the number of clicks online news stories get, independent of story quality, affects how long web editors feature the story on the front page. Sen and Yildrim distinguished between “soft stories” (entertainment, gossip and sports) and “hard stories” (politics, economics and world events). As it transpires, soft stories reel in 33% more clicks per link. In line with the post-truth narrative, providing additional coverage to low-quality stories which receive a higher number of clicks would imply that more popular stories might crowd out high-quality stories which do not receive a similar amount of reader attention.

However, the authors found that clicks have a positive and statistically significant impact on coverage of hard news only, not on the coverage of soft news stories. That is, additional coverage is only awarded to popular hard stories but not to the soft ones. As a result, hard news crowds out soft, but not the other way around. The editors chose to keep the relative number of hard stories constant, and not allow soft stories that attracted a lot of eyeballs to steal web space from more serious content.

As Sen and Yildrim conclude:

“We find that editors allocate a larger amount of resources to hard news stories even if soft news receive a higher number of clicks, [so] the concerns around clicks-based editorial decisions might be misplaced. [While] it is debated in popular media that informativeness of news content is declining, replaced with less informative, less newsworthy, more speculative entertainment content aimed at grabbing eyeballs for advertising revenue, [our] evidence seems to indicate a clear and conscious editorial strategy such that hard news dominates the amount of total news.”

In support of this, and contrary to worries about dying facts, surveys show that around 90% of citizens (still) find it very important that a media company is accurate and truthful. For about 85%, accuracy is a critical reason to trust a news source. Assertions to the effect that the media’s current online revenue model means that their incentive to invest in reputation via quality disappears, consequently, seem too short-sighted.

The fact that we value accuracy in information sources, I think, indicates why the fears that media will only produce more clickbait, sensationalism and fake news because such stories individually generate more engagement and ad revenue are exaggerated. As soon as a medium delivers too much of that kind of content, its reputation decreases and people stop consuming and sharing its articles.

You might object: but won’t media companies lose the fight for the algorithm’s favor if they do this? Contrary to the presumptions about the almighty power of algorithms in determining which information we consume, there are reasons to think that the relevance of search engine and timeline rankings for what news we read is grossly overestimated. For news and other political-societal articles, their search ranking in fact hardly determines whether we read the article or not. Only a few percent of searches on Google are about news-like matters, and just over 10% of the population considers social media to be their main news source. It seems that most of our indirect information still comes to us via the same traditional sources as before, new gatekeepers or not.

Let’s take stock. Although the theory that media channels must attract attention with clickbait, juicy articles and made-up but outrage-provoking pieces makes some sense (because the revenue model of information-providing companies has changed significantly due to digitization), in actual fact we do not see misinformation and sensationalism taking over the world. Even though the post-truth claim makes sense, the evidence that it’s true is lacking.

Where is that successful manipulation?

“It is so difficult to draw a clear line of separation between the abuse and the wholesome use of the press, that as yet we have found it better to trust the public judgment, rather than the magistrate, with the discrimination between truth and falsehood. And hitherto the public judgment has performed that office with wonderful correctness.”

–         Thomas Jefferson (1803)

So much for the supply of information. There remains the point – featuring heavily in certain accounts of Brexit and Trump – about how misinformation creates all kinds of suboptimal outcomes on the demand side of the information market, because consumers make suboptimal political choices as a consequence of being stupid and believing fake news.

Has misinformation really affected the voting behavior of so many folks? Needless to say, we need to believe fake news for it to be able to do this. We have already seen that fake news only takes up a minuscule proportion of the online communications. Which means, according to the calculations of two Stanford economists, that a single fake article would need to have had the same persuasive effect as 36 television campaign ads. How realistic is that?

Remember that post-truth commentators claim, at this point, that we are easy to manipulate or gullible, so that such high effectiveness may well be achievable. That claim, however, does not seem to be based on facts. Because hardly anyone believes fake news.

In experiments, usually only about 5% of subjects rate a false headline as true (even when it’s politically congenial). Furthermore, legitimate headlines are typically given an accuracy rating twice as high as misleading ones. Outside the lab, moreover, time and time again, it turns out that fake news activity is limited to a very small group. For example, on Twitter, 0.1% of users are responsible for spreading 80% of the disinformation. Researchers even estimate that folks are so reliable in spotting fake news that they propose to scale up fact-checking by using “the wisdom of the crowds” alongside professional fact-checking organizations.

The fact that hardly anyone believes and shares fake news makes it highly unlikely that disinformation has played a role in recent elections – in any country.

It also means that the changes in the information landscape apparently do not have the negative consequences for our epistemological health that are so often feared. Because gatekeepers hardly control access to the information market anymore, evaluating the reliability and credibility of information may indeed have become more of an individual than an institutional task. For those with a dim view of our intellectual skills, that sounds like a problem. In point of fact, however, this does not appear to be a cause for concern. “No systemic evidence exists,” political scientist Brendan Nyhan writes, “to demonstrate that the prevalence of misperceptions today (while worrisome) is worse than in the past.”

Digitization is causing real changes in the information landscape and the gatekeepers of knowledge. But claims that this will have disastrous consequences because misinformation flies rampant, while at the same time most of us are no longer interested in the truth or can’t discern it, are unwarranted.

