Consider an argument for denying First Amendment protection to movies and video games. Human beings, we all agree, have constitutional rights—but mere machines do not. When the computer in your game console or DVD player “decides” to display certain images on a screen, therefore, this is not protected speech, but merely the output of a mechanical process that legislatures may regulate without any special restrictions. All those court rulings that have found these media to be protected forms of expression are thus confused efforts to imbue computers with constitutional rights—surely foreshadowing the ominous rise of Skynet.


Probably nobody finds this argument very convincing, and it hardly takes a legal scholar to see what’s wrong with it: computers don’t really autonomously “decide” anything; they execute algorithms that embody decisions made by their human programmers. (If, one day, we develop advanced artificial intelligences that really are effectively autonomous speakers, their constitutional status will be a fascinating and difficult question.) Movies and video games are made by teams of human beings, whose expressive choices are merely executed and transmitted by computers. You can’t somehow “just” regulate the computer without effectively restricting human expression at the same time. Simple, obvious.


Yet, writing in The New York Times, legal scholar Tim Wu makes a surprisingly similar argument with respect to search engines and various kinds of data sharing on social networking sites or online marketplaces. Wu is implicitly responding to scholars like Eugene Volokh, who argue that the First Amendment does indeed constrain legislatures that might seek to regulate these activities for the sake of privacy or informational “fairness.” Is there any way to distinguish Wu’s argument from my imagined, clearly invalid one? Wu hints at one:

The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all. (Where a human does make a specific choice about specific content, the question is different.)

This doesn’t seem all that easy to me, however. In many video games, the exact progression of any particular play-through will often be partly (and in some cases substantially) randomized. In performances of “aleatoric music,” perhaps most famously associated with the avant-garde composer John Cage, a human artist provides an algorithmic structure that leaves the “specific content” of the piece to be determined by chance—whether supplied by a computer or some other randomizing mechanism—and therefore varying with each performance. All compositions, for that matter, are essentially “algorithms” whose specific character as heard expression will be determined by a performer, whether human or machine. And do we really think a blog that rounded up links and produced summaries of important news stories about privacy or human rights could be unceremoniously shut down if it were assembled and published by a scraping algorithm rather than by active human curation? What degree of human intervention in the running of an aggregation algorithm would transform its output into protected speech? Would it be enough to manually delete irrelevant links incorrectly harvested by the algorithm, or to tweak poorly worded summaries, before each post went live? Must the programmer be careful not to make her algorithm too good, lest she render such intervention unnecessary and surrender First Amendment protection?
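To make the aleatoric point concrete, here is a minimal sketch of such a chance-driven “composition” in Python. Everything in it (the note names, the section structure, the function name) is invented purely for illustration, not drawn from any actual score or from Wu’s examples:

```python
import random

# A human composer authors the expressive structure: which pitches may
# appear in each section, and how many notes each section contains.
# (All values here are hypothetical, for illustration only.)
SECTIONS = [
    {"name": "opening", "pitches": ["C4", "E4", "G4"], "notes": 4},
    {"name": "middle", "pitches": ["D4", "F#4", "A4", "C5"], "notes": 6},
    {"name": "closing", "pitches": ["G3", "C4"], "notes": 2},
]

def perform():
    """One 'performance': chance fixes the specific content, but every
    choice is drawn from options the human composer laid down."""
    return [
        (s["name"], [random.choice(s["pitches"]) for _ in range(s["notes"])])
        for s in SECTIONS
    ]

# Two runs of the same human-authored algorithm yield different
# "specific content" -- yet both are plainly the composer's expression.
for section_name, notes in perform():
    print(section_name, notes)
```

On Wu’s proposed line, no human makes “a specific choice about specific content” in any given run; the expression resides entirely in the structure the human authored.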


If Wu’s argument seems in any way plausible, it’s because his computer/human distinction is not actually doing any of the heavy lifting in his piece. When he writes that various kinds of information manipulation are “only indirectly related to the purposes of the First Amendment,” the force of the argument depends entirely on how far we agree with that normative judgment about those classes of information processing, not on the means by which they’re accomplished. It’s clear that Wu—in contrast to Volokh—believes that the commercial sale of databases of personal information is not “speech” in the sense intended by the First Amendment. Would he really feel any differently if those databases were compiled and transmitted by hand, rather than electronically? Or, turning to the search context, would it really matter if the task of recommending a list of Web sites deemed relevant to a topic (or especially family-friendly, or whatever other feature a company might be advertising) were carried out by humans in a data center in Bangalore, rigorously following a list of criteria developed by some other group of human analysts, with instructions to recommend partners and paid advertisers over their competitors whenever possible? Again, of course not. The real argument—everything Wu says that has any persuasive force—depends on the character of the activity in question, not on its implementation.


So why focus on the computer/human distinction at all? First, because courts are justifiably reluctant to declare whole classes of expressive conduct beyond the bounds of the First Amendment: the whole point of a presumption of free speech is to avoid having to make ad hoc determinations about the “social value” of various kinds of expression. Second, because the character-based arguments, which on their own rely on a sharp distinction between commerce and speech that’s hard to draw in a blurry world, are subject to strong counterarguments and probably insufficient to overcome the presumption in favor of expression.


As a slogan, “Free speech for humans, not for computers” sounds pithy and appears to provide a bright-line standard that avoids the hard and messy questions involved in an analysis grounded in the nature of the speech (or, more neutrally, “information manipulation”) itself. But again, that distinction does no independent work in the argument. If Wu wants to make the case that certain categories of conduct involving information deserve diminished protection, whether directly executed by humans or indirectly with the aid of computer processors, he should do so. The attempt to shift the focus to a red-herring distinction between human and computer “expression” betrays a recognition that this case, stated clearly on its own terms, is a weak one.