Earlier this month, the Aspen Institute released a report on “information disorder.” Amid the ongoing COVID-19 pandemic and a fraught, polarized political environment, the spread of conspiracy theories about healthcare and election integrity is of particular concern. The Aspen report offers a range of policy recommendations, including amendments to Section 230 of the Communications Decency Act. In this post, I will analyze these recommendations, which I believe would do little to halt the spread of misleading content and raise significant constitutional concerns.

Before tackling the Aspen recommendations, a few definitions are in order. The Aspen report discusses both “disinformation” and “misinformation.” The report defines “disinformation” as “false or misleading information, intentionally created or strategically amplified to mislead for a purpose (e.g., political, financial, or social gain),” and defines “misinformation” as “false or misleading information that is not necessarily intentional.” This is a helpful distinction. Although both misinformation and disinformation can be harmful, they have different causes.

A foreign adversary may use social media to spread false information as part of an effort to proliferate propaganda or deepen political divisions in another country. This is an example of disinformation. A Facebook user who posts content they believe to be true about the efficacy of bleach as a COVID-19 cure is spreading misinformation.

Section 230 of the Communications Decency Act shields “interactive computer services” such as social media platforms from liability for the vast majority of third-party content. If you defame someone on Twitter, the target of your defamation can sue you, but they cannot sue Twitter. The law, passed in 1996, has been at the heart of many contemporary debates about political bias in Silicon Valley and the spread of mis- and disinformation.

The Aspen report suggests two amendments to Section 230:

  1. “Withdraw platform immunity for content that is promoted through paid advertising and post promotion.”
  2. “Remove immunity as it relates to the implementation of product features, recommendation engines, and design.”

The first recommendation targets paid advertising, which has been used to spread disinformation and misinformation. During the 2016 American presidential election, Russian operatives used Facebook ads to spread disinformation and organize dueling protests in Texas. Preventing foreign adversaries from interfering in domestic elections is a worthwhile policy goal, but it is unclear that amending Section 230 in the way the Aspen report suggests would solve the problem.

If websites such as YouTube and Facebook were potentially liable for ads on their platforms, they would have an incentive either to 1) remove all ads or 2) screen all ads before they appear.

It is unlikely that prominent social media sites would embrace the first approach, given that advertising is the primary source of income for YouTube, Facebook, and Twitter. More likely, platforms would screen ads. Such an approach would impose a burden that only powerful market incumbents such as Facebook and YouTube could tolerate.

It is possible that Congress could amend Section 230 in such a way that social media sites would have to ensure that advertisers adhere to a set of “best practices” in order to keep their liability immunity. Such practices could include requiring advertisers to pay in a certain currency, reveal details about their personal identities, and so on. But such requirements are unlikely to deter motivated foreign adversaries, which can easily work around them, and are likely to impose costs on legitimate social media users seeking to place ads. When Facebook banned foreign issue ads, NGOs and charities that relied on American donations struggled to reach would-be supporters. Enforcement of such a policy by platforms with fewer resources would cause even greater collateral damage.

The second proposal is also not without its problems. The report does not suggest specific legislative language, but it does point to the Protecting Americans from Dangerous Algorithms Act, legislation introduced by Reps. Malinowski (D-NJ) and Eshoo (D-CA), in its discussion of the second proposal. My colleague Will Duffield has already written about the flaws associated with that specific piece of legislation. As Will correctly noted:

The Protecting Americans from Dangerous Algorithms Act attempts to prevent extremism by imposing liability on social media firms for algorithmically curated speech or social connections later implicated in extremist violence. Expecting platforms to predictively police algorithmically selected speech a la Minority Report is fantastic. In practice, this liability will compel platforms to set broad, stringent rules for speech in algorithmically arranged forums. Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.

As is often the case in debates about social media, the Aspen report takes aim at Section 230 when the First Amendment is the more appropriate target. Even if proposals to tackle misinformation and disinformation were effective and did not favor market incumbents, they would face a significant constitutional barrier. Most of the content of concern (e.g., anti-vaccination articles, election conspiracy theories) is legal. Attempts to stop the spread of such content via legislation implicate First Amendment concerns, even when pursued via Section 230 amendments.

Those worried about disinformation and misinformation have well-grounded concerns. Dangerous nonsense can and does spread across the Internet, and we know that foreign adversaries have sought to weaponize social media. Yet when considering these problems, lawmakers should not look to amending Section 230. Amendments such as those discussed in the Aspen report are unlikely to significantly halt or hamper the spread of the most dangerous misinformation and disinformation, and they are unlikely to survive constitutional challenge.