Last month the Supreme Court heard oral arguments in Gonzalez v. Google, a case about whether Section 230 protects platforms from liability for algorithmically recommended speech. This is the first time the Court has heard a case involving Section 230, and a bad ruling would remake the internet for the worse. Although many had feared that the justices would use the opportunity to go after Big Tech, the Court was skeptical of petitioners’ counsel Eric Schnapper’s textual arguments and mindful of algorithms’ nearly universal use in sorting information online.

Going into Gonzalez, there was no circuit split about algorithmic liability. The Second Circuit’s 2019 Force v. Facebook decision prompted proposals to amend Section 230 to exclude algorithmic recommendations from its protections. While the “Protecting Americans from Dangerous Algorithms Act” stalled in two consecutive Congresses, its introduction seemed to signal that the debate over algorithmic liability had moved beyond interpretations of existing law. It therefore seemed strange that the Court took up Gonzalez v. Google at all.

As the justices discovered that Schnapper wasn’t bringing them anything new, their questions began to sound like the conclusions reached by appeals courts in earlier cases. In Force, the Second Circuit held that Facebook couldn’t be treated as the publisher of pro-Hamas user profiles merely for suggesting the profiles to others because Facebook’s friend suggestion algorithm was neutral between lawful and unlawful interests. Facebook didn’t develop the user profiles’ content or recommend them in a way that contributed to their unlawfulness.

The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers. Merely arranging and displaying others’ content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the “develop[er]” or “creat[or]” of that content. (Force v. Facebook at 47)

In response to Schnapper’s claim that platforms are the creators of video thumbnails, Justice Thomas offered an account of platform agency, or the lack thereof, in recommendations that closely tracked the reasoning of the Force majority.

Justice Thomas: But the — it’s basing the thumbnails — from what I understand, it’s based upon what the algorithm suggests the user is interested in. So, if you’re interested in cooking, you don’t want thumbnails on light jazz. It’s neutral in that sense. You’re interested in cooking. Say you get interested in rice — in pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana. I don’t see how that is any different from what is happening in this case. And what I’m trying to get you to focus on is if — are we talking about the neutral application of an algorithm that works generically for pilaf and it also works in a similar way for ISIS videos? Or is there something different?

Mr. Schnapper: No, I think that’s correct, but …

Schnapper’s further attempts to persuade the Court that some workable line could be drawn between publishing, display, and recommendation, one that would render platforms the co-creators of recommended speech, did not gain traction. Indeed, the justices expressed confusion about what line Schnapper was attempting to draw no fewer than eight times. As Cato’s amicus brief notes, a clean line cannot be drawn because “if displaying some content more prominently than others is ‘recommending,’ then recommending is inherent to the act of publishing.”

Justice Kavanaugh took this lack of distinction to its logical conclusion, noting that Schnapper’s reading of the statute would render almost any method of organizing user speech an unprotected recommendation, exposing intermediaries to a broad array of lawsuits.

Justice Kavanaugh: “… your position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”

This recognition should echo beyond the Gonzalez petitioners’ recommendation claims. Critics of Section 230 have proposed a host of novel readings and clever pleadings to pare back the law. However, treating Section 230 as providing mere distributor liability, casting editorial decisions as unprotected design choices, and expecting content neutrality from intermediaries all read the statute to, in effect, defeat itself.

While the conservative justices chose textual analysis over hobby horses, the Court’s liberals, with the exception of Justice Jackson, seemed wary of tackling algorithmic harms from the bench. Justice Kagan glibly observed that “we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.” Cognizant of the limits of their knowledge, and unable to discern the line Schnapper wanted to draw, a significant majority of the justices seem ready to rule in Google’s favor.

However, this doesn’t mean that everything went right. Two exchanges provoked particular concern. In the first, Google’s counsel, Lisa Blatt, seemed to endorse the Henderson test, a recent Fourth Circuit interpretation of Section 230 that reads the statute as applying only to claims concerning platforms’ hosting of unlawful speech, or “some improper content within their publication.” While Gonzalez concerns the hosting and recommendation of improper speech, some lawsuits against platforms fault their failure to retain content or their presentation of merely inaccurate information. Although Henderson might protect Google here, its adoption would narrow Section 230 in other contexts.

The other perturbing exchange concerned platform “neutrality” and “neutral tools.” At several points Justice Gorsuch seemed to misapprehend the relevant kind of neutrality. While Gorsuch took the term to mean neutrality between viewpoints or content, earlier decisions use “neutral tools” to describe features that can be put to both lawful and unlawful purposes.

Justice Gorsuch: “When it comes to what the Ninth Circuit did, it applied this neutral tools test, and I guess my problem with that is that language isn’t anywhere in the statute, number one.”

“And another problem also is that it begs the question what a neutral rule is. Is an algorithm always neutral? Don’t many of them seek to profit-maximize or promote their own products? Some might even prefer one point of view over another.”

Justice Gorsuch is right that the neutral tools test is not part of Section 230’s statutory language. However, it helps courts determine whether litigated content is “information provided by another information content provider” or whether the platform has contributed enough to the content’s unlawfulness to become a co-author. Although the test fits more naturally with tools actively employed by users than with those websites use to display content, it does not cut against YouTube’s algorithmic recommendations. The Wisconsin Supreme Court provides a succinct summary in Daniel v. Armslist.

The concept of “neutral tools” provides a helpful analytical framework for figuring out whether a website’s design features materially contribute to the unlawfulness of third-party content. A “neutral tool” in the CDA context is a feature provided by an interactive computer service provider that can “be utilized for proper or improper purposes.” Goddard, 640 F. Supp. 2d at 1197 (citing Roommates.com, 521 F.3d at 1172).

The 2008 Ninth Circuit case Fair Housing Council of San Fernando Valley v. Roommates.com, LLC provides examples of both protected neutral tools and unprotected features without lawful purposes. Roommates.com required new users to create a profile and, through dropdown menus, disclose their sex, sexual orientation, and familial status and state their preferences in roommates along the same lines. Because the platform required users to submit unlawfully discriminatory preferences, it contributed mightily to the unlawfulness of the resultant discriminatory profiles. In contrast, Roommates.com’s “additional comments” box could be filled with any sort of roommate preference, whether lawful or unlawful. Thus, Section 230 shielded Roommates.com from claims about the content of profiles’ “additional comments,” but not from claims about the platform-mandated discriminatory preferences.

The neutral tools concept also helps illustrate why Section 230 was created to protect online speech intermediaries. Many useful objects can be put to both lawful and unlawful purposes. The creators of traditional tools can’t police misuses of their creations. Speech intermediaries can, but only at significant cost to legitimate speakers. Thus, Section 230 protects neutral tools even when they are misused, allowing their creators to offer digital speech tools as freely as they can offer pen and paper or printers.

Although oral arguments went about as well as they could have, the internet still waits with bated breath for the Court’s opinion. The best outcome would be for the Court to dismiss Gonzalez as improvidently granted and decide the matter in Twitter v. Taamneh, a related case about the scope of the Anti-Terrorism Act. However, at arguments in Taamneh the Court did not seem to clearly favor any one conception of “substantial assistance.” A narrow Gonzalez opinion that affirms Section 230’s protection of algorithms is therefore more likely. As long as such an opinion is carefully written, it will avoid harming the online ecosystem that Section 230 has fostered.