Social media managers cannot tolerate all speech on their platforms. They are obligated to maximize value for their shareholders, and leaving some kinds of speech on a platform would drive some users away. So managers establish rules intended to maximize the number of users, and they hire content moderators to enforce those rules. Users who are thrown off have no complaint: when they joined the platform, they agreed to the rules and to how they would be enforced. End of story.
Except it’s not the end of the story. Social media managers do not seem to believe that mutual consent to rules and their application is enough to make content moderation legitimate. For present purposes, I simply accept this belief; we need not inquire into its validity. If consent is not enough, social media need other justifications for legitimacy. Some social media managers have embraced a judicial model: due process would foster legitimacy. For example, Facebook instituted written rules whose enforcement can ultimately be appealed to an Oversight Board (OSB).
How might an appeals process be legitimate? The Charter, Bylaws, and Code of Conduct for OSB members mention the words “independent” or “independence” 28 times. Here are a few examples. The OSB is established by “an independent, irrevocable trust” that oversees administrative matters. The purpose of the OSB “is to protect free expression by making principled, independent decisions about important pieces of content…” Members are required to “exercise neutral, independent judgment and render decisions impartially,” a requirement the Bylaws repeat verbatim. Moreover, OSB members “must not have actual or perceived conflicts of interest that could compromise their independent judgment and decision-making.”
The adjective “independent” has many meanings. The most relevant here is “not subject to control by others.” Many fear that social media content moderation will be dependent on tech companies’ financial priorities. The more successful social media are (and will be) owned by their shareholders, and their managers will have a duty to maximize value for those shareholders. What’s wrong with that? Critics say profit maximization leads social media to tolerate speech that harms others. Social media seek to engage users and keep them on a platform, thereby maximizing revenue. Some speech that harms others, critics assert, also engages users. On this view, social media can protect users only by failing to maximize revenue. Privately owned social media thus face a potential conflict between their obligations to their shareholders and the independence of their content moderation (including an appeals process).
Nonetheless, we observe that a leading social media platform, Facebook, has created an independent process to govern its content moderation. Facebook’s managers are aware of their obligation to their shareholders. They must see independent content moderation as a means to fulfill that obligation, at least in the medium term. Perhaps they are wrong about that. But the managers have better incentives than their critics to pursue the best policies on this matter.
For now, we can assume independent content moderation need not trouble shareholders.
Facebook also seems to equate independence with impartiality. Indeed, an entire section of the OSB members’ Code of Conduct discusses “independence and impartiality.” Merriam-Webster helpfully adds:
To be “partial to” or “partial toward” someone or something is to be somewhat biased or prejudiced, which means that a person who is partial really only sees part of the whole picture. To be impartial is the opposite.
The Code of Conduct discusses several sources of conflict of interest and thus of partiality. Members may not accept gifts. They may not have worked for Facebook itself. The Code identifies two further, less conventional threats to impartiality: “politicized or partisan public expression of support or criticism of a political party, political candidate, or elected official” and “public fundraising in support of a political party, political candidate, or elected official.” Board members’ judgment may be compromised by either economic or political interests. That makes sense. Money can corrupt judgment, but so can partisanship; current politics would be the last place to look for impartiality.
Government officials certainly threaten the independence and impartiality of content moderation. Officials are often intolerant of criticism and seek to censor it. In the United States, criticism of government is protected by the First Amendment. But the Constitution does not protect speech in private forums like social media. Elected officials may therefore be tempted to “persuade” social media managers to suppress disfavored speech. In this way, social media content moderation might become partial to government interests and dependent on elected officials.
Facebook’s Code separates the OSB from government: “Board members, staff, and their immediate family will not interact with government officials (including the immediate family members of government officials) regarding their service on the board and/or the cases that they are reviewing.” A reasonable requirement, but threats (or, for that matter, offers) need not be delivered in person. The design of the OSB also helps fend off the government. Facebook’s managers may not fire OSB members. Facebook has funded the OSB for six years. OSB members are likely to be far less interested than Facebook in social media regulatory issues. Indeed, OSB members may be unwilling to help Facebook with its political problem precisely because they see the Board’s independence as essential to its existence. But all of this might be supplemented by good public policy. In the United States, the independence of social media and their content moderation should have clear judicial support similar to the protections afforded editorial decisions at more traditional media. In nations where such protections do not exist, social media dependence on government officials could remain a problem.
The political problem goes beyond government action. Everyone knows that speech on social media has few protections from content moderation. Inducing social media managers to suppress particular kinds of speech may seem to offer substantial benefits not only to elected officials but also to groups interested in policymaking. If some speakers are not heard on social media, their opponents may gain a relative advantage in policy debates and in elections (an indirect way of affecting policymaking).
Consider an analogous example. Sugar producers seek protection from foreign competitors. When they obtain it, the price of their product rises, and so do their profits. Such protection imposes relatively small losses (higher prices) on a large number of people (consumers), while its benefits flow to a relatively small group whose gains easily outweigh the costs of organizing to seek protection. Consumers pay more for sugar than they would without trade protection; in general, the aggregate costs of protection outweigh its benefits. However, the cost to each consumer is small enough that the benefits of organizing to demand trade liberalization do not outweigh the costs of organizing to that end.
Organized interests supporting a policy or political agenda would likewise benefit from excluding competing ideas about policy. Their agenda would become relatively more influential on the platform and perhaps in the larger society. As with sugar, the losers would be the general public, whether on the platform or in society at large: fewer policy ideas and choices, and less satisfaction with public policy and politics. Competition in ideas is not exactly the same as competition in products, but protectionism in both leads to analogous outcomes. The interests of large numbers of people are sacrificed to benefit the few.
Truly independent content moderation would be an answer to this rather conventional political bias. Content moderation partial to organized interests will see only “part of the whole picture” and impose losses on the many in favor of the few. Impartial moderation does the opposite. But no one should underestimate the challenge here. Social media content moderation could well avoid partiality to business and government interests only to become dependent on organized interests.
Independence and impartiality are vital to the legitimacy of content moderation. Content moderators should be independent of government officials, and perhaps the First Amendment may be enlisted in that task. But speech suppression online offers substantial rewards for group organization. It is difficult to see how a Code of Conduct in itself could preclude dependence on organized interests. The willingness to say “no” may be more a matter of character than compacts. But it will be essential to the success of social media in the next few years.