As COVID-19 spread, Silicon Valley giants amended their content-moderation policies. In March, Google and Alphabet CEO Sundar Pichai published an announcement outlining the company’s response to COVID-19. Pichai noted that YouTube, which Google owns, had removed thousands of misleading COVID-19 videos.
The same month, Twitter broadened its definition of harm in an attempt to combat content that “goes directly against guidance from authoritative sources of global and local public health information.” Twitter’s policy change targets misleading health information rather than the organization of protests, and while users have taken to Twitter to organize such gatherings, it is not best known as a venue for event organization.
Facebook, however, is well known as a platform for organizing events. Many Americans who object to stay-at-home orders have used Facebook to organize protests. Facebook has removed event listings promoting gatherings understood to violate state-mandated social-distancing guidelines. In effect, Facebook has aided the enforcement of stay-at-home orders by denying its event-organization software to anti-lockdown protestors.
These changes in content-moderation policies have prompted commentary from conservatives that ranges from the misinformed to the outright histrionic.
Among the more bizarre comments are those from Breitbart editor Alex Marlow, who called Facebook’s decision with respect to moderating COVID-19 content “Orwellian,” proving once again that Orwell can always be used as a cudgel in any political debate by those who don’t understand Nineteen Eighty-Four (or anything else George Orwell wrote).
Conservative commentator Dan Eberhart also took to Twitter to complain about Facebook, noting, “We have a constitutional right to assembly, and that right doesn’t go away regardless of whether it is actually safe to do so or not.”
Senator Joshua Hawley (R., Mo.), responding to news that Facebook had removed content associated with anti-lockdown protests in California, New Jersey, and Nebraska, asked, “Because free speech is now illegal in America?”
Fox News commentator Tucker Carlson has also been critical of content-moderation policies relating to COVID-19. When YouTube removed footage of two doctors questioning the severity of COVID-19 and California’s response to the pandemic, Carlson described the decision as a ban on dissent.
These comments all betray a misunderstanding of how free speech is supposed to work in a liberal democracy.
A functional freedom of speech requires freedom of association. National Review’s freedom to decline an op-ed submission is as important as the author’s freedom to write it. Similarly, Facebook’s leadership is free to disassociate from content it deems harmful to its business or irresponsible to host, whether that content is pornography, beheading videos, or COVID-19 misinformation.
For Facebook to make such decisions is not “Orwellian” in the slightest, nor does it infringe on freedom of speech. Those who wish to spread the content that Facebook bans remain free to do so. Contrary to what Senator Hawley and Mr. Eberhart suggest, Facebook’s content-moderation policies harm neither the freedom of assembly nor the freedom of speech. The Internet is much more than Silicon Valley’s giants. Facebook and Twitter may be household names, but other websites allow for content sharing: BitChute is a video-sharing site that emerged in response to YouTube’s content-moderation decisions, and the social-media website Gab permits much of the content banned by Facebook and Twitter. Even if these services didn’t exist, your right to speak would impose no duty on private firms to host your speech.
Nor do content-moderation policies reveal a need to amend Section 230. In fact, these debates highlight confusions about Section 230 and illustrate that proposals for reform would curtail online speech.
These confusions have prompted misguided policy proposals. Recently, Deroy Murdock wrote an article for National Review Online stating that Big Tech companies must choose between being public fora and being private entities similar to newspapers. Despite what Murdock thinks, these companies face no such choice. Murdock went on to argue that Big Tech companies shouldn’t be allowed to engage in content moderation while also enjoying Section 230 protection.
Murdock’s policy proposal reveals a misunderstanding of why Section 230 was necessary. He would like Section 230 protections to be contingent on websites allowing “material of every hue.” Under such a regime, websites would be plunged into a dilemma: either allow “material of every hue” (pornography, footage of animals being crushed to death, racist rhetoric, etc.) and enjoy liability protection, or screen every comment or photo before it goes live in order to avoid lawsuits. Allowing “material of every hue” results in an Internet that the vast majority of users don’t want; screening every piece of content would kill online speech as we know it. Ironically, Murdock’s proposal would result in far less online speech.
We’ve been here before. In fact, this dilemma is what spurred Section 230’s creation: Congress passed the law in 1996 to resolve the so-called “moderator’s dilemma.”
In 1991, a federal judge held that the Internet-service provider CompuServe, which did not moderate the allegedly defamatory content at issue, was “the functional equivalent of a more traditional news vendor.” As such, it could not be held liable for user-generated content. In 1995, a New York Supreme Court judge held that, because the Internet-service provider Prodigy did moderate content, it was the publisher of a user’s content and thus exposed to liability for it. The dilemma for the burgeoning Internet industry was clear: engage in no content moderation and be considered a mere distributor, shielded from liability, or moderate content and be considered a publisher, liable for users’ posts. Section 230 solved the dilemma by stating that interactive computer services are not the publishers of user-generated content and are free to moderate content in whatever way they desire.
Despite what many on the political right seem to think, Section 230 does not distinguish between publishers and platforms, does not say anything about so-called “public forums,” and does not hold that content moderation makes a website such as Facebook the publisher of third-party content.
Facebook, Twitter, YouTube, and many others made the prudent business decision not to use the First Amendment as their content-moderation guideline. With billions of people using these companies’ products, it shouldn’t be surprising that their content-moderation policies will sometimes irritate and anger some of their users. But such irritation should not prompt proposals that are contrary to the principles of free speech and risk destroying the Internet of today that — while far from perfect — remains the best venue for speech in history.