Justin McBrayer

Social Media Censorship Fallout


Last fall, I noted that Facebook took the unprecedented step of censoring certain sorts of content on its site. Posts making false claims about the Holocaust or the pandemic can now be removed by administrators. This was a significant departure from past censorship policies. It's one thing to police terms or concepts (like hate speech). It's quite another thing to police ideas or positions.


For example, consider how hard it is to censor anti-Semitism on a social media site. It's easy to run an algorithm that will search and destroy terms like 'kike'. It's not easy at all to run an algorithm that looks for false claims about the Holocaust or the state of Israel.
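To make the asymmetry concrete, here is a rough sketch in Python (purely illustrative; the term list and function name are my own invention, not any platform's actual system):

    # Term-level censorship: a few lines of string matching suffice.
    # Illustrative only; real moderation systems are far more elaborate.
    BLOCKED_TERMS = {"kike"}  # sample list; real slur lists are much longer

    def contains_blocked_term(post: str) -> bool:
        """Flag a post if any of its words matches a blocked term."""
        words = {word.strip(".,!?'\"").lower() for word in post.split()}
        return bool(words & BLOCKED_TERMS)

    # Idea-level censorship has no analogous one-liner. A function like
    #     def is_false_holocaust_claim(post: str) -> bool: ...
    # would need to know which historical claims are true and which are
    # false, and no amount of string matching can settle that.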


At the time, I flagged two problems with this strategy of policing content. First, it requires fallible humans to make calls they shouldn't about what's true and what's false. Second, it doesn't work. Censoring ideas creates a false sense of security: it merely pushes the suppressed content elsewhere. It doesn't really disappear.


In the months since, it's become apparent how serious each problem really is.


What is True?

Start with the first problem: the censorship of ideas and claims requires fallible people to make calls that are above their epistemic pay grade. There's the obvious point that Facebook censors in 1400 AD would have gotten things fabulously wrong on any number of fronts: the earth isn't the center of the universe, men are not intellectually more capable than women, and slavery is deeply immoral. Expressions of those ideas would have been heretical, offensive, and deemed likely to upset the social order of the day.


But the problem runs much deeper than these obvious examples can make clear. Let's restrict ourselves to the topic of COVID vaccines. As everyone knows, there's a great deal of misinformation about vaccines online. The issue has been wrested from the experts and politicized by partisans. And social media plays a prominent role in amplifying anti-vaccine campaigners and those who would twist the facts to fit their political agendas.


Facebook knows this and conducted an internal study on content that leads to "vaccine hesitancy" on the part of readers. The study revealed a clear problem: many of the ideas that lead to vaccine hesitancy aren't false.


Facebook has censored false claims about the coronavirus or vaccines since at least December. So, claims that violate these rules are actively being removed. And yet there's a lot of content that can't be censored under current rules but still contributes to vaccine hesitancy on the part of readers.


Just imagine that you're a censor for a social media company, and you are charged with deleting misinformation about COVID. What would you do with each of the following comments?


"My next door neighbor told me that the vaccine causes cancer."

"Some people have died from the vaccine."

"I had a dream that the anti-Christ is using the vaccine to get control over people."

"The coronavirus might have orignated in a Chinese lab."

"I believe that the pandemic is a government hoax."


I hope that list looks difficult. Each claim is either true (like the possibility, however unlikely, that the disease originated in a Chinese lab) or unverifiable for a censor (how do YOU know what he dreamed about or what his neighbor said?). What are companies going to do about content like this? It's a dilemma.


On the one hand, they can leave these posts alone. But, as the Facebook study shows, the current censorship rules on false information will barely dent online deception. In the case of COVID, there will be plenty of true statements like these that give rise to vaccine hesitancy. The same will go for any other disputed issue, from the Holocaust to election results.


On the other hand, social media companies can censor these posts, too. But now we've moved into a brave new world: censors aren't just targeting claims that are false. They are also targeting claims that are true but have certain unpleasant consequences. No brother is big enough to make that call for the rest of us.


Why Doesn't Censorship Work?

Now think about the second problem: censoring ideas doesn't make them go away. It pushes them underground. When Facebook or some other social media site prohibits members from expressing certain ideas or claims, what happens next? This much is obvious: the people who hold those ideas don't stop believing them. Prohibiting someone from saying something has no bearing whatsoever on whether he continues to think it. In the social media domain, we don't have thought police (yet). We only have speech police.


So, if you continue to believe something that you think is important, and you can't express this idea or claim in your current app, what do you do? Answer: you get another app. And that's exactly what's happening. People who have views that are considered false, offensive, or otherwise unsavory are being censored on mainstream social applications. And so they are migrating to less-popular social applications where they don't face the same pressures.


Just in the political realm, conservatives have steadily abandoned traditional social media apps like Twitter for alternatives like Parler that don't operate with as heavy a hand when it comes to censoring ideas. It's like squeezing a balloon: censorship in one place just makes the ideas pop up somewhere else.


You might think this is a good deal. As long as crazy content isn't on the mainstream sites, that's an improvement. I disagree.


One problem is that when the content is posted elsewhere, the rest of us aren't there to combat it. When someone posts something crazy on Twitter, the whole world is there to see it and call it out. But if someone posts something crazy in an ideologically segregated space, the rest of us remain blissfully unaware. That allows conspiracy theories and misinformation to fester longer than they otherwise would.


A further problem is that idea censorship can push the expression of ideas to places we CAN'T get to, even if we wanted to. For example, law enforcement agents monitor social media for signs of impending terrorist attacks, unstable individuals who might have access to weapons, and so on. When those people leave social media and move to encrypted online platforms that don't censor ideas, this job gets that much more difficult. Recent headlines bear this out.


In many ways, what's happening with social media nowadays seems like a replay of what happened to the news over the last 30 years. For years, conservatives complained about a liberal bias in mainstream news sources. Nothing happened. Conservative voices and perspectives were consistently shut out by many of the leading news providers of the day. So, what did they do? Stop being conservative? Hardly.


They went elsewhere. Conservative talk radio and TV stations like Fox News were an obvious answer to an information environment controlled by outlets with no interest in conservative ideology. That's part of the reason we ended up with a fragmented news ecosystem in which liberals and conservatives watch totally different channels and listen to totally different radio stations.


Social media censorship of ideas threatens to leave the social media landscape just as fragmented. That's not good for either truth or democracy.






