Section 230 of the Communications Decency Act has played a pivotal role in fostering the internet ecosystem we have today. Critics often misconstrue the law as a special exemption shielding “big tech” companies from legal scrutiny; in fact, it applies to millions of websites of all sizes, and platforms large and small remain liable for all content they create or develop, even if only in part. Yet many lawmakers see Section 230 as a stumbling block impeding fairness and accountability online. Their arguments fail to consider the expansive impact that Section 230 has had in fostering and preserving a competitive online marketplace over the past 25 years. Its protections for both platforms and users have proven essential to increasing competition.

As written, the law provides a liability protection that safeguards innovation and increases consumer choice and competition. Changing or abolishing Section 230 would fundamentally alter how platforms host user-generated content, which would negatively affect new and small companies that lack the resources of large incumbents. Ultimately, repealing or weakening Section 230 would not affect tech giants as much as their smaller competitors, who would be burdened by regulatory hurdles, compliance costs, and the return of the moderator’s dilemma.

Introduction

Over the past two years, both the political left and right have been increasingly critical of Section 230, a law that enshrines liability protection for online services carrying user‐​generated content and content moderation decisions. While the criticisms from left and right differ, critics across the political spectrum have argued that the liability protection granted by Section 230 has created a special privilege for large online players—in particular, prominent social media firms such as Facebook, YouTube, and Twitter. Yet Section 230 is even more critical to the viability of newer entrants or smaller platforms hosting user‐​generated content. The benefits of Section 230’s protections extend well beyond the context of social media, affecting a wide array of online services including any site that features user review or comment sections.

Politicians, pundits, and academics concerned with the overall impact of technology and technology companies on American life have mistakenly targeted their ire at Section 230. Some of these critics have argued that big tech has grown too big and thus requires further government intervention in the form of various regulations. While these claims, if true, might reasonably be offered as justification for heightened antitrust enforcement or policy changes to existing antitrust laws, they are illogical predicates for weakening Section 230, which is pro-competitive. It is not Section 230, but user preferences and network effects that have given Facebook and Twitter prominent roles in the marketplace. There is nothing in law that makes them gatekeepers to a digital public square. Their dominance could end with little or no notice whenever a new entrant offering more pleasing services attracts critical mass. And it is the legal certainty for new entrants provided by Section 230 that makes it possible for new competitors to enter the market and attract investment. Section 230 also guarantees online platforms the discretion they need to adopt innovative and disruptive business models, allowing for an ever-growing variety of content moderation policies and audience-specific options.

Some lawmakers, while recognizing that changing or revoking Section 230 will not solve the problems about which they are most concerned—competition with big tech, corporate invasions of consumer privacy, teen addiction to social media, online crime, and editorial policies with which they disagree—nonetheless see amending or repealing the law as a way to punish big tech. But doing away with Section 230 would not even have that effect. Eliminating its protections for competition would enhance the market power of today’s largest incumbents. It would deter new competitors from entering the market, further concentrating revenue and users among a few large firms. At the same time, it would almost certainly cause large incumbent firms to be more cautious in their content moderation policies. Without Section 230’s liability protection, the new default response to controversial user speech would be to take it down. For these reasons, the fact that Section 230 benefits smaller platforms cannot be used as an argument for using antitrust enforcement against big tech as a means of addressing concerns over too‐​stringent content moderation policies or insufficient opportunities for free speech.

For all the conversations around big tech, policymakers should not forget the critical role of Section 230 in protecting competition and free speech.

What Is Section 230?

Section 230 is the federal law that protects online services hosting user‐​generated content from liability for that user content, provided they are not involved in any part of its creation or development.1 It also allows platforms to engage in content moderation decisions regarding what content to allow on their websites without fear that engaging in such activity may open them up to liability.

Section 230’s original purpose is often misunderstood or misconstrued. Similarly, many of the complaints about Section 230 do not reflect its application to today’s marketplace. The law began as a bipartisan bill in the House of Representatives called the Internet Freedom and Family Empowerment Act, coauthored by Republican Chris Cox and Democrat Ron Wyden. The bill had two goals. First, it established that no interactive computer service, such as a website, would be treated as the publisher of user content on its platform that it had no part in creating or developing. Second, it resolved concerns stemming from a court ruling that a platform could be held liable for third-party content if it engaged in content moderation, clarifying that moderating content would not transform a service into a publisher.2 The bill made clear that services acting in good faith as good Samaritans would not become liable for moderating offensive content. The bill became part of the Telecommunications Act of 1996, which then-president Bill Clinton signed into law in February of that year, and it is now part of the Communications Act of 1934, as amended.

While the Supreme Court would strike down much of the Communications Decency Act, which had sought to restrict online speech, as violating the First Amendment, Section 230 survived this challenge.

Among the several myths that have emerged about the intended purposes of Section 230 are that it was only intended to protect a then-infant internet industry and that it required platforms to maintain political neutrality in their content moderation decisions.3 The law’s coauthors have made clear on multiple occasions that neither claim is true.4 As former representative Cox has written,

Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user‐​created content that crossed its portal each day. In the 1990s, when internet traffic was measured in the tens of millions, this problem was already apparent. Today, in the third decade of the 21st century, the enormous growth in the volume of traffic on websites has made the potential consequences of publisher liability far graver. Section 230 is needed for this purpose now, more than ever.5

Section 230 was not designed to protect an infant domestic industry but rather to provide a legal framework consistent with an ever‐​expanding internet and an ever‐​widening array of voices.

Similarly, the authors of Section 230 have been clear that the legislation’s protections were never contingent on websites maintaining political or viewpoint neutrality in their content moderation decisions. As Cox testified when speaking about the law’s intent before the Senate Commerce Committee in 2020: “Section 230 does not require political neutrality, and was never intended to do so. Were it otherwise, to use an obvious example, neither the Democratic National Committee nor the Republican National Committee websites would pass a political neutrality test. Government-compelled speech is not the way to ensure diverse viewpoints. Permitting websites to choose their own viewpoints is.”6 His coauthor, former representative and now Sen. Ron Wyden, explains it this way: “Section 230 is not about neutrality. Period. Full stop. [Section] 230 is all about letting private companies make their own decisions to leave up some content and take other content down. You can have a liberal platform; you can have conservative platforms. And the way this is going to come about is not through government but through the marketplace, citizens making choices, people choosing to invest. This is not about neutrality.”7

The criticisms and misunderstandings of Section 230 come from both sides of the aisle. Critics on the left allege that Section 230 discourages platforms from removing harmful content or that it provides an unfair advantage to tech companies over traditional media. On the right, critics allege that online content moderation practices suppress conservative speech instead of applying neutral rules of the road. Critics on both sides even allege that Section 230 is a special privilege for technology companies that has artificially supported such companies’ growth and therefore reform is necessary to rein in this government‐​granted privilege.8 All of these criticisms are founded on the mistaken premise that Section 230 is only about large social media. In truth, it applies to every one of the more than 200 million websites that host user‐​created content. It is of critical importance to small players, who not only provide alternative social media platforms, but who also provide many other services, such as how‐​to videos; educational resources; product and service reviews; comment sections; restaurant recommendations; film, television, and book reviews; and online marketplaces for independent sellers.

Competition and Variety: The Benefits of Section 230 Beyond Big Tech

Far from being a loophole designed for the special benefit of big tech, Section 230 applies across the internet ecosystem to websites large and small. It plays an especially critical role in allowing new platforms to come to market and in protecting the ability of platforms with more-targeted audiences to emerge and compete. The authors of Section 230 intended the law to apply to websites that serve specific, as well as general, audiences, with a robust marketplace providing the broadest possible range of choices. As Wyden explained, “It’s in the country’s long-term interest to have the most diverse, most expansive array of ideas out there.” That is the approach that Section 230 takes. The law’s authors, and the Congress that enacted it, determined that private ordering, rather than government regulation of speech, is the way to achieve this desired outcome. “Making platforms welcoming and taking down slime,” Wyden says of internet firms, “is going to be in their long-term interest to keep customers.” He summarizes the law’s purpose and effect thus: “We come back to, again, diverse voices, good things, private sector, marketplace.”9 Section 230 allows platforms to find the content moderation solutions that best serve their consumers’ needs without fear of legal liability, and it allows internet users the greatest opportunities to create content and to consume it.

The freedom to adopt content moderation policies tailored to their specific business model, their advertisers, and their target customer base allows new platforms to please internet users who are not being served by traditional media. In some cases, the audience that a new platform seeks to serve is fairly narrowly tailored. This flexibility to tailor content moderation policies to the specific platform’s community of users, which Section 230 provides, has made it possible for websites to establish online communities for a highly diverse range of people and interests, ranging from victims of sexual assault, political conservatives, the LGBTQ+ community, and women of color to religious communities, passionate stamp collectors, researchers of orphan diseases, and a thousand other affinity groups. Changing Section 230 to require websites to accept all comers, or to limit the ability to moderate content in a way that serves specific needs, would seriously curtail platforms’ ability to serve users who might otherwise be ignored by incumbent services or traditional editors.10

Thanks to Section 230, websites can more comfortably host conversations on under‐​reported or under‐​discussed issues, such as sexual harassment at the advent of the #MeToo movement, without fear that they could find themselves liable for their users’ content.11 By allowing new voices to be heard in this way, Section 230 has facilitated huge growth in individual speech and social networks. While such a vast expansion of opportunities for speech is not without new troubles—witness complaints about so‐​called filter bubbles and cancel culture—it is undeniable that today, hundreds of millions of American users of online services have access to a more diverse range of opinions and communities than ever before.

That many people fail to appreciate this essential function of Section 230 is borne out by the fact that some lawmakers and policy analysts have proposed conditioning Section 230 protection on a form of content neutrality.12 Such proposals are typically linked to claims that a particular website demonstrates political or viewpoint bias or is perceived as being inconsistent in the enforcement of its content moderation policy. There have been many proposals of this type introduced over the course of the current and previous Congresses, and additional similar proposals have been debated at the state level.13 Inherent in this approach is federal or state government policing of online speech, with political appointees and law enforcement deciding when and whether content should be taken down. Substituting this for Section 230’s premise of private ordering would mean significantly more—and likely unconstitutional—government intrusion into private speech. In practical terms, it would be particularly devastating for marginalized communities seeking to facilitate their conversations, who would stand to lose access to smaller, specialized platforms.14

If lawmakers were to modify Section 230 to require platforms to be viewpoint neutral, platforms would not merely be limited in their ability to target a specific audience that they seek to serve. The aim of such Section 230 “reforms” is to force platforms to host speech that they and their community of users find objectionable. This imposition would affect not only general-audience platforms facing claims of bias—the most prominent being Facebook and Twitter—but also special-interest platforms, such as those serving the LGBTQ+ community or various religious groups, which would be forced to host content they find deeply abhorrent. Since government-compelled speech violates the First Amendment, we would learn after years of litigation that these reforms are unconstitutional. But in the here and now, these illustrations make plain why Section 230 takes the approach that it does: permitting each website to remove content that it and its users find objectionable.

Alongside proposals to repeal or weaken Section 230 are suggestions that Congress and the Federal Communications Commission enact a new Fairness Doctrine, requiring sites that seek to host content that aligns with their faith or their politics to balance this with content advancing the opposite view. Such neutrality requirements would chill online conversations around a host of political and social issues. Out of an abundance of caution, many platforms would be more likely to take down speech that might later be deemed by government regulators to require a rebuttal for fear they might be subject to arbitrarily determined penalties.

How Section 230 Enables Increased Competition

While Section 230 plays an important role in enabling large platforms to make content moderation decisions at scale, it is perhaps even more important for smaller platforms that lack the resources of larger and more established platforms.15 By providing certainty around legal exposure and protecting platforms from open‐​ended liability for wrongs committed by others, Section 230 helps new services that are seeking to attract investors and to operate at a smaller scale. These are two essential ingredients if new entrants are to join the marketplace—they must be able to operate without fear that merely hosting user‐​generated content could expose them to potentially business‐​ending liability.

This was a very real fear before Section 230. And fear of liability for allegedly hosting illegal content remains very real in other areas of law where Section 230 does not apply. For example, the Digital Millennium Copyright Act (DMCA), which requires the takedown of content that allegedly violates someone’s copyright, has significant problems with meritless claims: as many as 30 percent of these claims have been found to be questionable. Yet in many cases internet platforms must respond immediately and at least temporarily restrict the content in question.16 Even though a website might ultimately be vindicated if it chooses not to censor such content, fighting any claim against it can easily exhaust its resources in legal fees. As Techdirt editor Mike Masnick and Cathy Gellis of the Copia Institute explain, innovative companies, such as the video hosting site Veoh, have found themselves bankrupted after defending bogus claims brought under the act. “History is littered with examples,” they write, “of innovative new businesses being driven out of existence, their innovation and investment chilled, by litigation completely untethered from the principles underpinning copyright law.”17

Without Section 230, or if the law were significantly narrowed, such concerns could be expected to proliferate as platforms faced more litigation over their content moderation decisions and the accompanying costs. Given the volume of user-created content hosted by even modest-sized platforms, newer and smaller internet services would likely face far more litigation around their content moderation decisions than they would be able to afford.18 These significant, unrecoverable costs would impair the ability of new platforms to compete. Larger platforms are more likely to be able to attract the funding and legal resources necessary to defend themselves against lawsuits over third-party content. As Mark Weinstein, the CEO of the social network MeWe, has written about the potential for repeal or weakening of Section 230, “Ironically, this would help Facebook, Twitter, Google and other social-media giants while hurting smaller companies and new startups. The big boys have deep pockets. They can easily hire the massive moderation and legal teams that would be necessary to defend themselves. I can’t. Revoking Section 230 would put hundreds of startups and other smaller companies out of business.”19

A revocation or substantial change to Section 230 that increases the likelihood of litigation and exposes internet platforms to open‐​ended liability would make it especially difficult and costly for new platforms to obtain funding.20 Instead of focusing on developing a product consumers desire, with appropriate content moderation and community standards, these platforms would default to designing their business model within the constraints of legal risk management and avoidance of the costs of needless litigation.

In this way, removing or significantly modifying Section 230 would force platforms once again to face the so‐​called “moderator’s dilemma,” a situation that existed before Section 230 was enacted. Platforms would be forced to choose between two equally unattractive alternatives. They could attempt to minimize their liability by scrutinizing every user post for its potential risks (likely delaying all posts for substantial periods of time, and significantly increasing the cost of operating their site). Or they could engage in no moderation whatsoever, in which case the pre–Section 230 common law established that they would not be liable for their users’ content—but in which case, also, their users’ experience would be badly degraded.21

Forgoing moderation is not a solution that most audiences would appreciate. While a few unmoderated platforms like the notorious 8chan exist, most users and platforms wish to avoid excessively graphic violence, pornography, online harassment, and equally offensive content. The most successful platforms featuring user-created content are patronized by people who visit to connect with friends and family, learn a new skill, or read helpful product and service reviews.22 Content moderation enables these platforms to respond to their users’ wants and concerns in both large and small ways. In the service of community standards, platforms can remove off-topic content in a specific forum, limit or ban harassing users or content, and remove spam. As Reddit cofounder Alexis Ohanian tweeted, “What [platforms] all eventually learn is users WANT moderation.”23

While the compliance costs associated with the many proposed changes to Section 230 will be most acutely felt by small businesses, this does not mean that consumers will not also bear higher costs. Since smaller internet players lack the ability to absorb steeply higher costs of capital and compliance staff, these additional costs will be passed along to consumers in the form of service fees or reduced services.24 For example, after the General Data Protection Regulation (GDPR) was implemented in Europe, which placed additional compliance burdens regarding data privacy and security on firms, investment in small ad‐​tech firms was sharply reduced and the market share of large ad‐​tech firms grew.25

On a macro scale, significant changes to Section 230 would also be certain to negatively impact U.S. economic growth. In an analysis of the existing literature, Cato’s Scott Lincicome has explored both the positive impacts of the existing legal regime on the United States’ GDP and the potential negative impacts of changing it.26 The studies he surveyed have found gains in consumer welfare and GDP from the contribution of internet platforms under current legal rules. They have also found that eliminating Section 230’s liability protection for hosting user-created content and shifting to a stricter or more onerous regulatory regime would negatively impact GDP, job growth, and investment in internet startups.27

Some have argued that concerns about the impact on small platforms could be alleviated with carveouts for small businesses. For example, some legislative proposals to reform Section 230, including the Platform Accountability and Consumer Transparency (PACT) Act introduced by Sens. Brian Schatz (D‑HI) and John Thune (R‑SD), and the Ending Support for Internet Censorship Act introduced by Sen. Josh Hawley (R‑MO), have sought to exempt smaller platforms based on the number of users or the revenue of the site. But such carveouts would still increase censorship on the largest platforms by inducing them to be far more cautious in deciding what content to host, a result that will not please those whose complaint is that there is too much online censorship. More generally, it is hardly a given that the definition of “small” will square with the authors’ objective of maintaining the pro-competitive effects of the current legal regime. For example, Reddit has 430 million monthly active users but only approximately 350 employees.28 Is it small or large? Which should it be? The protection afforded by Section 230 makes no such arbitrary distinctions.

The growth of user‐​generated content has enabled a wide range of smaller internet services to reach much larger communities of users than their limited resources would suggest. For example, Wikipedia provides a globally valuable information resource consisting almost entirely of user‐​generated content, but it remains a nonprofit with relatively few employees. It is the antithesis of big tech. Would its six million articles and 42 million users disqualify it from any small business carveouts? If so, it and many other online resources that defy easy categorization as big tech would be forced to incur significant costs and would thus rapidly develop a more‐​restrictive approach to their use of user‐​created content. These significant changes would likely interfere with the way they distinguish themselves in the market and with the services that they primarily provide. In many cases, platforms with comment sections, including magazines, blogs, and newspapers, will find it necessary to do away with their user‐​generated content and instead provide only preapproved content.

In sum, Section 230 provides the legal certainty that is necessary for internet platforms to host user‐​created content, while focusing their resources on developing a product appropriate to their online community and content moderation standards tailored to it. This legal certainty is highly pro‐​competitive, and it is especially critical for smaller platforms that lack the resources of big tech. Even so, without Section 230 the largest platforms will also become more restrictive of user‐​generated content, negatively affecting many more people. Overall, the result of changing Section 230 would place substantial costs on a range of platforms beyond big tech, further entrenching the incumbent giants that can better afford the additional compliance costs and the need to self‐​insure against open‐​ended liability.

Regarding Antitrust and Content Moderation

While Section 230 has pro‐​competitive effects, that does not mean competition policy and Section 230 reform should be considered interchangeable. In fact, it is critical that such policy considerations be clearly parsed in technology policy debates. Otherwise, the potential impact of ill‐​fitted changes to antitrust enforcement standards on innovation and speech could be overlooked, bringing harm to consumers as well as online services.

Breaking up firms to achieve changes to content moderation would likely backfire. Such actions come with no guarantee that smaller, general‐​audience platforms would choose different standards than the existing giants. What would be guaranteed, however, is that new entrants and smaller platforms would have fewer resources, making them unduly sensitive to the anti‐​consumer disincentives discussed in the section above.29 Such firms will lack state‐​of‐​the‐​art content moderation tools, including artificial intelligence and machine learning systems, and will have a much smaller number of content moderators available to respond to complaints about content. Their relative lack of resources will also diminish their ability to adapt their content moderation policies to novel issues, especially when greater cultural context is needed.

In these ways, antitrust is an especially ill-fitted tool for addressing concerns with content moderation. To the extent that antitrust investigations are driven not by the suitability of antitrust remedies to the concerns at hand, but rather by generalized animus toward big tech fueled by a potpourri of complaints, they amount to a misguided use of federal and state power. Such use of the sweeping law enforcement and investigative tools that the antitrust laws provide would constitute a serious abuse of government power. An online firm that does not engage in anti-competitive behavior can take little comfort in the knowledge that it would ultimately prevail in court; in the meantime, it faces potentially years of an unnecessary and costly investigation and the associated disruption to normal business operations. None of that will benefit consumers or improve competition.

Given the costs of these investigations to consumers, taxpayers, and internet firms of all sizes—as well as their likely failure to achieve the desired policy outcome—antitrust action should not be confused with pro-competition policy. The current legal framework, of which Section 230 is a fundamental part, is robustly pro-competitive, and competition is the best medicine for the full range of policy issues in tech today. Policymakers must be careful to distinguish between the anti-competitive behavior that is appropriately addressed through antitrust enforcement and policy concerns for which antitrust is the wrong legal weapon.30

Conclusion

Section 230 is critical to allowing platforms of all sizes to carry user‐​generated content, thereby enormously expanding the opportunities for individuals to express themselves (and for others to gain the benefits of that content). The optimum regulatory response to criticism of the content that online services carry and the content moderation choices these services make is to provide the legal framework for a competitive market in which internet users have a broad range of choices. Revoking Section 230 would do the opposite. It would plunge platforms back into the moderator’s dilemma, potentially causing them to avoid user‐​generated content altogether because of the risk of open‐​ended liability that exceeds the risk tolerance of their investors. It would significantly hamper the ability of small firms and new entrants to compete with large market incumbents. The result would be an internet with fewer voices and fewer choices. Rather than mischaracterizing Section 230 as a special privilege for tech giants, policymakers would do well to understand that this law enables innovation and choice that benefits us all.

Citation

Huddleston, Jennifer. “Competition and Content Moderation: How Section 230 Enables Increased Tech Marketplace Entry,” Policy Analysis no. 922, Cato Institute, Washington, DC, January 31, 2022. https://doi.org/10.36009/PA.922.