In the 25 years since its passage, this prescient rule has paid tremendous dividends. Americans are served by a dizzying array of publishing intermediaries, allowing us to communicate in real time via text, audio and video. We have created forums for work, worship, play and romance, serving every imaginable niche interest and minority. Of course, not all interconnection has been positive. Extremists and criminals use the internet too. Some argue that amending or repealing Section 230 would compel platforms to suppress extremist speech and criminal activity.
However, exposing platforms to broad liability for user speech would lead to the removal of much more than dangerous speech.
Platforms already make extensive use of their ability to remove unwanted speech, filtering spam, threats, advertisements for illegal goods, foreign propaganda and even simply off-topic speech. Popular platforms review millions of posts a day, often with the assistance of imperfect software. At this scale, some innocent speech will inevitably be misunderstood, mislabeled and removed. Over the past few years, major platforms’ rules have become more stringent and expansive, prompting concerns about censorship and bias.
Demanding that platforms assume liability for their users' speech would, at best, exacerbate the accidental removal of innocent speech. At worst, it risks limiting who can speak online at all. Digital intermediaries usually review speech after publication. Speech may be flagged by other users, human moderators or algorithms, and placed in a queue for adjudication. Section 230 allows platforms to remain open by default and worry about excluding misuse when it occurs, giving a voice to everyone with an internet connection.
In contrast, newspapers and other traditional publishers filter, edit and modify submissions before publication. While this allows them to safely assume full ownership of the speech they publish, it dramatically limits who can speak. Editing is a laborious and time-consuming process. Even if a newspaper wanted to publish every letter to the editor, it would have neither the space nor the time to do so. This model produces consistently high-quality speech, but it tends to favor some perspectives over others, offering only a narrow slice of elite sentiment.
Repealing Section 230 would make social media more like traditional media by making it exclusive. With limited resources to review speech before publication, platforms would have to determine whose perspectives should be prioritized. There is little reason to think their selections would differ greatly from those of newspapers. If replies and responses had to be reviewed as well, social media would lose most of its interactivity, becoming another conduit through which speech is passively received.
Without Section 230, platform moderators would not become more deliberate; they would simply remove more. The threat of costly litigation does little to inspire thoughtful decision making: moderators will act quickly to eliminate any source of legal risk. When Congress amended Section 230 in 2018 to expose platforms to liability for speech promoting prostitution or sex trafficking, Craigslist did not moderate its personal advertisements page more cautiously; it shut the page down.