[note: this is an expanded version of an article that originally appeared in the Durango Herald on 12/21/20]
We face a fake news epidemic the likes of which we haven't seen before. This fact has been recognized by partisans of all stripes. For years, American citizens have reported that they are more concerned about fake news than other political issues like terrorism, immigration, or racism, and this concern is now shared by most citizens from Brazil to Japan. What can we do about it?
The standard suggestion is to get people better information. The idea is that if we can make it easier for people to sort out the difference between what's true and what's false, then people will be more likely to believe and share what's true rather than what's false. In other words, the standard solution is that improving people's ACCESS to the truth will stem the tide of misinformation. If websites, social media feeds, and TV stations could be sorted into piles of reliable information and unreliable information, we could sit back and trust intelligent adults to arrive at well-evidenced conclusions.
But how can we reliably sort the true from the false? One idea is simply to label stories as true or false. That idea underlies much of the action that social media companies have taken since the 2016 election. Twitter slaps 'Disputed' tags on tweets, the Washington Post adds 'Manipulated Media' tags to videos, and many nonprofits now publish clear rankings of news sources by quality. In each case, the goal is to make it easy for consumers of information to tell at a glance whether a story or video or photo is the real McCoy.
And yet it makes almost no difference. In fact, it often makes the situation worse. That sounds crazy, but it's true. In a recent study reported by the Washington Post, seeing a tweet by the president claiming that mail-in votes are more likely to be fraudulent made the average Republican slightly more likely to agree with the claim and the average Democrat slightly less likely to agree with it. In other words, the tweet alone didn't make a sizeable difference in anyone's commitment; it nudged readers only slightly in the direction of their party.
Now, it's quite clear that mail-in votes are NOT more likely to be fraudulent than in-person votes; the evidence for this is overwhelming. So there's good reason to think that the president's claim is false. Let's see what happens when Big Brother Twitter flags this deceptive claim.
In the same study, when participants read the exact same tweet but with the 'Disputed' tag added by Twitter, something remarkable happened. Once the claim was flagged as deceptive, Democratic readers became far more likely to reject it, while Republican readers became far more likely to accept it. In other words, fact-checking political speech in this case INCREASED the odds that readers believe and share the misinformation. That's a clear repudiation of the standard solution.
Why?
The simple answer is that the value of true belief can be swamped by other values. We value believing truly, yes, but we also value other things in life. And when we can't have both, sometimes we sacrifice the truth.
That's why the standard solution is naive. Simply giving people the tools to find the truth is often ineffective. If I don't want to dig a hole, giving me a shovel won't make me any more likely to dig one. If I don't want the truth about mail-in ballots, giving me tools to find it won't make me any more likely to seek it out. We've assumed that everyone wants the truth more than they want other political ends, and so we've assumed that giving them the tools is the way to help them out. Our best evidence shows that this assumption is mistaken.
If we have other ends besides truth, what are they? I catalog several in chapter 3 of my book, but let me add one here: group membership and signals. We have powerful incentives to identify with a particular group. Humans dominate the globe because of our intelligence and ability to cooperate. As Hobbes famously pointed out, without cooperation, our lives would be solitary, poor, nasty, brutish, and short. And we're more likely to find cooperation and reciprocation within our group.
But humans look alike, and so we need a way to reliably tell who belongs to what group. That's where signals come in. We rely on signals from other people to tell which tribe they belong to. Wearing a Black Lives Matter T-shirt sends one sort of signal. Wearing a Blue Lives Matter T-shirt sends another. We use those signals to determine whether other people are part of our political tribe. And we, in turn, send signals so that others may identify us as well.
So far, so good. What does this have to do with fake news? Everything. Signals don't work unless they are reliable. If everyone is sending the same signal, then it won't reliably sort humans into in-groups and out-groups. Given that, political beliefs function as reliable signals only when they are held by one group and not another. Believing that the Earth is a sphere isn't a reliable political signal: all political partisans agree that the Earth is a sphere.
That means a belief won't work as a reliable signal unless it's endorsed by one and only one political faction. The trick is to find beliefs that are consistent enough with our shared evidence and yet controversial enough to divide political partisans. Political beliefs are often of this sort. Consider climate change. Any belief you hold about the existence and cause of climate change will be consistent with your everyday experiences of the weather. And yet Republicans and Democrats are deeply divided on the issue, and a belief about climate change is a fairly reliable indicator of whether you're on the right or the left (at least in our country). That's why beliefs about climate change are reliable signals of group membership.
This group-and-signals explanation predicts the backfire effect of the Twitter corrections perfectly. A mere tweet from the president doesn't sway many people on either side of the spectrum. But a Twitter label does. That's because the label makes the content of the tweet a good candidate for a reliable signal. A certain sort of Republican might read the Twitter correction and think to herself, "If those left-leaning admins at Twitter don't like this, then my endorsement of it is a sure sign that I'm a Republican." She's right. Only a true believer would endorse the claim after it has been labeled as deceptive. And so the label increases the confidence of (a certain sort of) Republican viewers. Similar cases can be found on the progressive side of the spectrum.
There is interesting, albeit provisional, evidence that this sort of tendency is hard-wired in humans. It's plausible that human minds naturally gravitate towards believing and spreading misinformation when doing so increases in-group cooperation or confuses outsiders. In short, our ancestors who abandoned truth to save the tribe out-competed hominids who held to the truth even when it risked social collapse. And so the group-and-signals explanation is bolstered by research in evolutionary psychology. That means we can't blame Facebook for the fake news epidemic; the tendency predates social media.
The takeaway lesson is twofold. First, the standard solution for stemming the fake news epidemic is wrong-headed. In some cases, giving people better information or tools will make a difference (see chapter 8 for some examples), but in many cases it will not. Giving people the tools to sort the true from the false is not a panacea. Second, people often value other things (like signaling their tribal membership) over truth. In those cases, we should not be surprised when people act and believe in ways that are at odds with the truth.