From the beginning, social media managers have excluded content from their platforms. At first they did so intuitively. Only a few people moderated content; like Justice Potter Stewart with obscenity, "the new governors" of speech knew what to exclude when they saw it. As the platforms grew, such judgments seemed too subjective and opaque. Content moderation teams sought to formulate general rules, published as Community Standards, that could be applied consistently. Some also invoked their company's values, an effort to go beyond the judgments of this or that employee. Perhaps values were needed to turn the cold text of Community Standards into living guidelines accepted by all. The move from individual intuition to "accepted by all" bespoke a need for legitimacy. The rules and their application needed support from users and others outside the platforms.

Economic success deepened the legitimacy problem and threatened the companies' commitment to "voice," or free expression. The most successful companies, like Google and Facebook, had grown far beyond the United States. Their content moderators wondered whether their desire to protect speech merely reflected their cultural backgrounds as Americans. That commitment to speech ran counter to growing cultural relativism in the developed world: if all cultures were equal, why should U.S. free speech ideals apply outside American borders? Maybe social media's global expansion required less weight for voice and more for other values.

In any case, appreciation of "voice" was waning in American culture, if not in its legal system. Organized interests demanded that social media remove speech that would be protected under the First Amendment. Social media were not obligated to observe the First Amendment; they were not the government. Still, the demands for suppression fostered continually evolving codes of prohibited speech across the platforms. The leaders of social media companies had long professed support for speech. If they meant those professions, they needed a globally acceptable foundation for Community Standards that protected speech.

International law (and its subset, international human rights law) offers a plausible answer to social media's quandary. In 1948, the United Nations adopted the Universal Declaration of Human Rights. Much of that Declaration was translated into binding law through two international human rights treaties the UN General Assembly adopted in 1966: the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights. These treaties bind the states that ratify them; the United States ratified the ICCPR in 1992. I focus here on the ICCPR, which purports to be law beyond borders and thus addresses one challenge facing social media.

What about protections for speech? Article 19 of the ICCPR guarantees that "[e]veryone shall have the right to hold opinions without interference" and that "[e]veryone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice." Insofar as social media companies look to international law to inform their Community Standards, Article 19 commits them to strong protections for expression. But that's not the whole story.

The ICCPR also places on governments positive obligations to limit speech. Article 20(2) states: "Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law." In other words, Article 20(2) requires states that ratify the ICCPR to prohibit "hate speech." Social media are not governments, do not make law, and did not ratify the ICCPR. But Article 20(2) might be taken to legitimate social media "hate speech" rules. After all, many nations have ratified it.

But not all. When the United States ratified the ICCPR, the Senate's "advice and consent" was subject to a reservation: "That Article 20 does not authorize or require legislation or other action by the United States that would restrict the right of free speech and association protected by the Constitution and laws of the United States." Belgium and the United Kingdom also limited the scope of Article 20(2) in defense of free expression. Other nations, like Australia, reserved the right not to introduce new legislation on "hate speech" and other issues. Indeed, a number of nations objected to various aspects of the ICCPR (here's a list of nations and their objections).

Article 20(2) faces another problem in the United States. One scholar has noted, “Where U.S. duties under a treaty conflict with rights protected in the U.S. Constitution, rights in the Constitution must prevail.” In Reid v. Covert (1957), the Supreme Court said it “would be manifestly contrary to the objectives of those who created the Constitution… to construe Article VI as permitting the United States to exercise power under an international agreement without observing constitutional prohibitions.” The same Court “has consistently ruled that [hate] speech enjoys First Amendment protection unless it is directed to causing imminent violence or involves true threats against individuals.”

Where does all this analysis leave social media content moderators? Article 20(2) instructs governments to ban "hate speech"; U.S. courts say the government may not ban "hate speech" in the United States. Perhaps neither instruction matters: social media are not governments, properly understood, so neither Article 20(2) nor the U.S. Constitution strictly binds them. But taken more broadly, the two bodies of law send mixed signals, the ICCPR treating "hate speech" bans as legitimate beyond borders and U.S. law treating them as illegitimate within its own. The implications for the legitimacy of content moderation are unclear.

Yet we are not done with international rules. In June 2011, the U.N. Human Rights Council endorsed the "Guiding Principles on Business and Human Rights: Implementing the United Nations 'Protect, Respect and Remedy' Framework." These principles stipulate: "Business enterprises should respect human rights. This means that they should avoid infringing on the human rights of others and should address adverse human rights impacts with which they are involved." This "responsibility to respect" human rights "exists over and above compliance with national laws and regulations protecting human rights." The rights in question may be found in several places, including the ICCPR.

Article 20(2) requires action (not inaction) by governments; it confers no right on individuals. Yet the provision could be recast as a positive right: a right to live under a government that has prohibited "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence." It requires no great leap of imagination to conclude that social media could "respect" that right by banning "hate speech" from their platforms.

At this point, international law seems like a dead end for free speech. Governments are required to prohibit a broad and ambiguous category of speech, while businesses are instructed to "respect" a putative right to be free of "hate speech," a demand that could support banning a wide range of expression. The U.S. legislature and courts recognize no such required prohibitions, but their refusal does not bind social media companies incorporated in the United States. The ICCPR does include Article 19, which offers strong words favoring free expression. (It also provides other grounds for restricting speech that I have not mentioned, such as reputation and national security, but those limits are less controversial than "hate speech.") In practice, however, the liberal words of Article 19 seem undermined by the illiberal demands of Article 20(2).

And yet we have not examined all of Article 19. Article 19(3) states that free expression may "be subject to certain restrictions, but these shall only be such as are provided by law and are necessary." The U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression has fashioned a tripartite test to apply the words "as are provided by law and are necessary": a restriction on speech must satisfy conditions of legitimacy, legality, and necessity/proportionality. Legitimacy means the restriction may pursue only the limited set of public interests specified in the ICCPR. Legality means a restriction "must be provided by laws that are precise, public and transparent; it must avoid providing authorities with unbounded discretion, and appropriate notice must be given to those whose speech is being regulated." Finally, necessity and proportionality mean a limitation on speech must be "the least restrictive means" to achieve the relevant public interest; a regulator must pursue its goals at the least possible cost to speech. Often other policies that take a smaller toll on speech are available to governments, and perhaps to social media. Presumably, since no government entered a reservation to Article 19(3), this tripartite test applies to all government restrictions on speech and should be respected by private businesses, including social media.

Where does all this leave online speech? The ICCPR supports free expression, requires a ban on “hate speech,” and permits restrictions on speech in pursuit of a limited number of important public interests. All such restrictions on speech, public or private, must be legitimate, legal, and the least restrictive means to a legitimate end. But does the tripartite test matter?

Maybe. The Special Rapporteur mentioned above has noted: "'Hate speech', a shorthand phrase that conventional international law does not define, has a double ambiguity. Its vagueness and the lack of consensus around its meaning can be abused to enable infringements on a wide range of lawful expression." (For this reason, I have put the term "hate speech" in quotation marks throughout this essay.) The legality prong of the tripartite test condemns vague restrictions on free expression. What Article 20(2) of the ICCPR giveth, the tripartite test should almost always take away. Or so free speech advocates may hope, not least if they have anything to do with social media content moderation.

Such, then, is the case that international law might protect speech online in ways that legitimate social media content moderation. I leave the validity of that case for another day (and another post).

Thanks to Evelyn Aswad for comments on an earlier draft of this post. Professor Aswad's scholarship on these topics may be found here. This paper will be especially interesting for readers thinking through the issues covered in this post.