Not everyone knows about Section 230, but it plays a huge role in shaping the online world. It's a short law; its most important part is just twenty-six words long. Congress enacted it in 1996 out of concern that the internet would become a cesspool of harmful content if internet companies could be sued for anything their users posted.
In 1995, a New York state court held that Prodigy, an early internet company, could be sued for defamation over something one of its users had posted. Because Prodigy moderated its users' posts, the court reasoned, the company had made itself liable as a publisher for any user post it failed to remove.
To eliminate this court-created disincentive to moderate content, Section 230 states that an "interactive computer service" (think Facebook, Google, or Twitter) can't be "treated as the publisher or speaker" of other people's content. Similarly, a service can't be held liable for its decisions about what content to remove, leaving companies free to moderate as they see fit.
Thanks to Section 230, websites can afford to offer forums for freewheeling speech. Want to blow the whistle on misconduct? Criticize someone powerful? Complain about mistreatment? Engage in vigorous, even rude debate? Section 230 lets you do it, because the companies that publish what you have to say don’t need to fear getting dragged into a lawsuit. It’s no exaggeration to say that the open, creative, disruptive internet we know today wouldn’t exist without Section 230.
In recent years, tech companies have developed increasingly sophisticated ways of connecting people with user content. Today, companies big and small rely on algorithms to achieve this goal, helping users find speech relevant to their interests. But these algorithms, which are integral to how modern users experience the internet, are now under attack at the Supreme Court. A group of terrorism victims has sued Google, the parent company of YouTube, alleging that YouTube's algorithms aided terrorist recruitment by helping would-be terrorists find radicalizing videos. The plaintiffs argue that YouTube's video "recommendations" are distinct from publishing and thus unprotected by Section 230. Both the district court and the Ninth Circuit rejected this argument, but the Supreme Court has now agreed to consider the claim.
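To make concrete what a "recommendation" means here, consider a deliberately simplified sketch of the kind of matching such algorithms perform. Everything in it is invented for illustration; the tag-overlap scoring, function names, and sample data bear no relation to YouTube's actual system, which is far more complex. The point is only that the algorithm selects and orders third-party content based on a user's apparent interests:

```python
# A toy content recommender (illustrative only, not YouTube's system):
# score each candidate video by how many topic tags it shares with the
# user's recent watch history, then return the top-ranked video IDs.

from collections import Counter


def recommend(watch_history: list[set[str]],
              candidates: dict[str, set[str]],
              k: int = 3) -> list[str]:
    """Return the k candidate video IDs whose tags best match the history."""
    # Tally how often each tag appears across the user's watch history.
    interest = Counter(tag for tags in watch_history for tag in tags)
    # Score each candidate by the summed weight of its overlapping tags.
    scores = {
        video_id: sum(interest[tag] for tag in tags)
        for video_id, tags in candidates.items()
    }
    # Rank candidates from highest score to lowest and keep the top k.
    return sorted(scores, key=scores.get, reverse=True)[:k]


history = [{"cooking", "italian"}, {"cooking", "baking"}]
videos = {
    "vid1": {"cooking", "knife-skills"},
    "vid2": {"travel", "italian"},
    "vid3": {"gaming"},
}
print(recommend(history, videos))  # ['vid1', 'vid2', 'vid3']
```

Even in this stripped-down form, the algorithm does nothing but decide which third-party content to display and in what order, which is the activity the plaintiffs contend falls outside "publishing."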
Cato, joined by the R Street Institute and Americans for Tax Reform, has filed an amicus brief urging the Supreme Court to affirm that these algorithms are protected by Section 230. In the brief, we argue that the lower courts have developed a sound test, grounded in the text of Section 230, for determining whether the actions of an interactive computer service are covered by the statute. We then show that YouTube's content-recommendation algorithms meet that test in full. Finally, we explain that the rival test urged by the plaintiffs is unsupported by the text of the statute and is not backed even by the cases the plaintiffs themselves cite. The Court should reject a theory that would atextually limit Section 230 and make the internet less open, less free, and less dynamic.