In its 2019 Force v. Facebook decision, the United States Court of Appeals for the 2nd Circuit held that Section 230 shielded Facebook from liability for speech selected for display by an algorithm. Many mistakenly assume that Force v. Facebook was wrongly decided or that the 2nd Circuit’s decision is an injustice that must be rectified with legislation. This view was voiced loudly in the House Energy and Commerce Committee last week, where witnesses echoed Judge Robert Katzmann’s partial dissent in Force, arguing that “shielding internet companies that bring terrorists together using algorithms could leave dangerous activity unchecked.” Two bills before the committee, H.R. 2154, the “Protecting Americans from Dangerous Algorithms Act,” and H.R. 5596, the “Justice Against Malicious Algorithms Act of 2021,” are explicitly intended to overturn Force v. Facebook by making platforms liable for algorithmically selected user speech.

However, critics of Force v. Facebook have failed to seriously grapple with the effects of making platforms liable for speech selected or curated by algorithms. To understand the implications of this sweeping proposed change, it is worth revisiting plaintiffs’ arguments in Force.

The Force v. Facebook plaintiffs, families of victims of Hamas terrorism, alleged that pro-Hamas content had appeared in terrorists’ Facebook newsfeeds. They also alleged that Facebook’s “suggested friends” feature recommended Hamas supporters to one another. Plaintiffs argued that despite Section 230’s protections against liability for third-party content, Facebook should face liability for its newsfeed contents and friend suggestions. The court held that decisions about how to display content, including algorithms that neutrally weigh users’ “likes” and “follows” to recommend content, fall into the category of traditional editorial functions protected by Section 230. Facebook’s algorithmic processing of third-party speech, be it posts or profiles, does not make Facebook the author of that speech.

Closer inspection reveals that the recommendation algorithms at issue in Force function more as tools than as editors. Although the outputs of Facebook’s algorithms are protected as an editorial function, they do not resemble traditional editorial decisions. Crucially, the reasoned justifications that underlie editorial decisions do not appear in algorithmic selections. Editorial decisions can be justified by appeals to relevance and timeliness, while algorithmic friend suggestions are the product of pattern matching. Garbage (or the preferences of a would-be terrorist) in, garbage out. The intention that might give rise to responsibility is absent. These complications of the traditional editorial analogy counsel against legislating increased liability for algorithmic curation.

Unlike a newspaper editor, Facebook’s algorithm is responsive to users’ individual choices. I could send a letter to the editors of the New York Times asking them to prioritize different stories. Although it is unlikely, my letter might convince the Times editorial board to make changes to the paper. However, the mere fact of my sending a letter will not alter the Times’ front page the way my liking a new page alters my Facebook newsfeed.

Instead of editors, recommendation algorithms such as the “suggested friends” feature look more like tools. To suggest friends, Facebook processes a user’s posts, likes, and existing friends, comparing them to other users’ constellations of interests. This matching function uses information provided by the user and by potential friends, but Facebook does not make judgments about the content of these signals. It processes these user-provided preferences neutrally; a brief sketch below illustrates the idea. As the 2nd Circuit explains in its Force v. Facebook decision:

“The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.”
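To see why this kind of matching looks more like a tool than an editor, consider a deliberately simplified sketch of interest-based friend suggestion. Nothing here is Facebook’s actual code; the names, interests, and overlap score are illustrative assumptions. The only thing the matcher measures is how much two users’ interest sets overlap; it never evaluates what any interest means.

```python
# Illustrative sketch only: a toy "suggested friends" matcher that ranks users
# by overlap between their interest sets. It never inspects what the interests
# mean, so soccer, Picasso, plumbing, or anything else is treated identically.

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of interests two users have in common (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suggest_friends(user: str, profiles: dict[str, set[str]], k: int = 3) -> list[str]:
    """Rank every other user purely by interest overlap with `user`."""
    scores = {
        other: jaccard(profiles[user], profiles[other])
        for other in profiles
        if other != user
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical profiles; the court's own examples (soccer, Picasso, plumbers)
# stand in for any constellation of interests.
profiles = {
    "alice": {"soccer", "picasso", "plumbing"},
    "bob": {"soccer", "picasso"},
    "carol": {"plumbing", "gardening"},
}
print(suggest_friends("alice", profiles))  # ['bob', 'carol']
```

On this toy model, “pairing” two users reduces to an overlap score. Whether the shared interests concern Picasso or Palestinian politics is invisible to the code, which is why clusters of interests loosely associated with terrorism, military service, or speeding get matched right along with everything else.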

Thus, while a traditional editor selects content for display for particular reasons, Facebook’s recommendations are not the product of “decisions” as human beings make them. If Facebook’s algorithm recommends terrorists, or those who go on to become terrorists, to one another as friends, it does not do so on the basis of their interest in terrorism. Just as some constellations of interests may be associated with military service or speeding violations, some constellations may be associated with terrorism. This does not mean, however, that these clusters of interests are associated only with terrorism or speeding. Buying aftermarket car parts and watching Cannonball Run videos might be associated with speeding, but that doesn’t mean every purchaser or viewer is a traffic-law scofflaw.

As a result, rendering Facebook responsible for the actions of users connected by its algorithm will necessarily compel the platform to refrain from making suggestions based on an overbroad set of user signals. If Section 230 is modified to expose platforms to lawsuits for the actions of users connected by their algorithms, platforms would have to avoid matching users, or recommending groups and pages, whenever those users share the wrong constellations of interests.

Facebook wasn’t accused of connecting potential terrorists because of their interest in terrorism, but because of other shared interests. To avoid liability, Facebook would have to discard, or limit its algorithms’ use of, a wide range of user interests. In order to avoid potentially connecting Hamas supporters, Facebook might limit recommendations based on user interests in Palestinian nationalism and Iranian news sources. This amounts to viewpoint discrimination, a fact made even clearer when the examples are brought closer to home. In an increasingly polarized America, hobbies are increasingly laden with political significance. Might the algorithm associate an interest in guns with right-wing extremism? Could recommending a “Poetry Against Racism” night at the vegan café connect Antifa supporters? Exposing platforms to liability for algorithmic matching would compel them to make these discriminatory distinctions.
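To make the mechanics concrete, here is an equally hypothetical sketch of the pre-filtering a liability-wary platform might bolt onto such a matcher. The “risky” interest list is invented for illustration, drawn from the examples above; neither the bills nor the case specifies one.

```python
# Hypothetical liability-driven pre-filter: strip any interest that lawyers
# fear could someday link bad actors, before matching ever runs. Deciding what
# belongs on the list is itself a judgment about viewpoints, not conduct.

RISKY_INTERESTS = {
    "palestinian nationalism",
    "iranian news",
    "guns",
    "poetry against racism",
}

def sanitize(interests: set[str]) -> set[str]:
    """Remove flagged interests so they can never contribute to a match."""
    return interests - RISKY_INTERESTS

# Lawful users lose signals, and therefore recommendations, along with any
# would-be extremist who happens to share them.
print(sanitize({"guns", "hunting", "camping"}))         # 'guns' is dropped
print(sanitize({"poetry against racism", "veganism"}))  # only 'veganism' remains
```

Every entry on such a list sweeps in far more ordinary users than dangerous ones, which is the absence of a limiting principle discussed next.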

There are no natural limiting principles to the expectation that platforms should police algorithmic recommendations to prevent future violence or harm. Excluding would-be terrorists from algorithmic recommendations will require excluding lots of other people too. Social media platforms do not make choices when their algorithms make suggestions. Expecting platforms to make choices about which speech is safe for algorithmic recommendation, under threat of suit, would require Facebook to make more judgments about the value of speech. No one should want that, least of all Congress.