Last month the Supreme Court heard oral arguments in Gonzalez v. Google, a case about whether Section 230 protects platforms from liability for algorithmically recommended speech. It is the first time the Court has taken up a case involving Section 230, and a bad ruling would remake the internet for the worse. Although many had feared that the justices would use the opportunity to get at Big Tech, the Court was skeptical of petitioners' counsel Eric Schnapper's textual arguments and mindful of how nearly universal algorithms are in sorting information online.
Going into Gonzalez, there was no circuit split on algorithmic liability. The Second Circuit's 2019 decision in Force v. Facebook prompted proposals to amend Section 230 to exclude algorithmic recommendations from its protections. While the "Protecting Americans from Dangerous Algorithms Act" stalled in two consecutive Congresses, its introduction seemed to signal that the debate over algorithmic liability had moved beyond interpretations of existing law and into the legislature. Thus, it seemed strange that the Court took up Gonzalez v. Google at all.
As the justices discovered that Schnapper wasn't bringing them anything new, their questions began to echo the conclusions appeals courts had reached in earlier cases. In Force, the Second Circuit held that Facebook couldn't be treated as the publisher of pro-Hamas user profiles merely for suggesting those profiles to others, because Facebook's friend-suggestion algorithm was neutral between lawful and unlawful interests. Facebook neither developed the profiles' content nor recommended them in a way that contributed to their unlawfulness.
The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers. Merely arranging and displaying others’ content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the “develop[er]” or “creat[or]” of that content. (Force v. Facebook at 47)
In response to Schnapper's claim that platforms are the creators of video thumbnails, Justice Thomas offered an account of platforms' agency (or lack thereof) in making recommendations that closely tracked the reasoning of the Force majority.
Justice Thomas: But the — it’s basing the thumbnails — from what I understand, it’s based upon what the algorithm suggests the user is interested in. So, if you’re interested in cooking, you don’t want thumbnails on light jazz. It’s neutral in that sense. You’re interested in cooking. Say you get interested in rice — in pilaf from Uzbekistan. You don’t want pilaf from some other place, say, Louisiana. I don’t see how that is any different from what is happening in this case. And what I’m trying to get you to focus on is if — are we talking about the neutral application of an algorithm that works generically for pilaf and it also works in a similar way for ISIS videos? Or is there something different?
Mr. Schnapper: No, I think that’s correct, but …
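Justice Thomas's hypothetical describes a content-agnostic ranking rule: the same scoring logic that surfaces pilaf videos for a cook would surface ISIS videos for a user whose history points that way. As a purely illustrative sketch (not any platform's actual system; the function names and tag-overlap scoring here are assumptions made for demonstration), such "neutral application" might look like this:

```python
# Illustrative only: a toy, content-neutral recommender. Nothing here
# reflects YouTube's or Facebook's real systems; the names and scoring
# rule are hypothetical choices made for demonstration.

def recommend(user_interests: set[str], catalog: list[dict], top_n: int = 3) -> list[dict]:
    """Rank items by overlap between the user's interest tags and each
    item's tags. The rule never inspects what an item says, only its
    tags, so it applies identically to pilaf, jazz, or anything else."""
    def score(item: dict) -> float:
        tags = set(item["tags"])
        union = user_interests | tags
        # Jaccard similarity: overlap size relative to union size.
        return len(user_interests & tags) / len(union) if union else 0.0
    return sorted(catalog, key=score, reverse=True)[:top_n]

catalog = [
    {"title": "Uzbek pilaf recipe",  "tags": ["cooking", "pilaf", "uzbekistan"]},
    {"title": "Light jazz playlist", "tags": ["music", "jazz"]},
    {"title": "Louisiana jambalaya", "tags": ["cooking", "louisiana"]},
]

# A user interested in cooking and pilaf gets the pilaf video first; a
# jazz fan would get the playlist. The ranking rule itself never changes.
print(recommend({"cooking", "pilaf"}, catalog, top_n=2))
```

The legal question in Force and Gonzalez is whether applying such a generic rule makes a platform the co-creator of whatever it surfaces. The Second Circuit, and apparently several justices, thought not.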
Schnapper's further attempts to persuade the Court that some workable line between publishing, display, and recommendation could render platforms the co-creators of recommended speech did not gain traction. Indeed, the justices expressed confusion about what line Schnapper was attempting to draw no fewer than eight times. As Cato's amicus brief notes, no clean line can be drawn because "if displaying some content more prominently than others is 'recommending,' then recommending is inherent to the act of publishing."