Like almost every week, Facebook has been in the news. Much has been said, much of it negative, about its earlier decisions regarding the speech of Russian agents. Amid that debate, you might overlook Mark Zuckerberg’s latest post about Facebook’s content moderation work. Don’t. Facebook’s moderation decisions affect speech across the globe, and Zuckerberg’s post is an intriguing and important statement of the company’s position.


While the post announces changes to Facebook’s appeals process, I will focus here on the ideas and values informing its policies about online speech.


We make tradeoffs among values all the time, even tradeoffs involving freedom of speech. While free speech is a fundamental value in the United States, it nonetheless may be curtailed to prevent violence, suppress obscenity, and protect a person’s reputation, among other reasons. Over time, these other values have come to matter less relative to free speech. Speech must directly and immediately lead to violence to be restricted; that does not happen much. Courts gave up on defining obscenity and made it difficult for public figures to win libel judgments. As a constitutional matter, we limit free speech in order to realize other values; in practice, speech almost always trumps other concerns.


At least in the public sphere. We could by law or custom demand that everyone, everywhere vindicate freedom of speech. But we don’t. I have the power to exclude speakers who ask irrelevant questions at Cato forums (though I rarely exercise it). Facebook has the same power to remove the speech of individuals or organizations from its platform. As a nation we choose private governance of private property over free speech when these values come into conflict.


That brings us to Mr. Zuckerberg. Facebook protects less speech than the U.S. Supreme Court. What values matter more to Facebook in some instances than free speech? Zuckerberg believes that Facebook should “balance the ideal of giving everyone a voice with the realities of keeping people safe and bringing people together.” Safety comprises, among other things, protection against terrorism and self-harm. “Bringing people together” implies avoiding social polarization by restricting hate speech and misinformation, the latter perhaps condemned less for its falsity and more for its divisiveness. Speech that contravenes these values constitutes “harmful content” that may be removed.


More abstractly, Facebook values community a lot. It protects its members against external and internal threats and seeks to foster unity. This concern for unity (and worries about division) marks a sharp departure from First Amendment doctrine. Limiting speech to preclude violence is more familiar to students of liberty than restricting it in pursuit of social harmony. After all, divisive and polarizing speech (including “hate speech”) enjoys full protection by the courts. In the classic struggle between the individual and the community, Facebook cares more about the latter than, say, the average classical liberal, or indeed, the average free speech advocate.


You might think Facebook’s values reflect the challenges of building a lasting global business. Facebook users may prefer safety and unity over free speech. Community preferences and business logic might well go together. No doubt this is part of the story. But it is not the whole story. Facebook has a commitment to community that goes beyond profitability.


Zuckerberg’s post offers a novel discussion of “borderline content,” defined as “more sensationalist and provocative content [which]… is widespread on cable news today and has been a staple of tabloids for more than a century.” Such content does not violate Facebook’s community standards; it comes right up to the line without crossing it. Facebook restricts the distribution and virality of such content but does not remove it. Why? “At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.” The latter is the concern of a businessman; the former reflects the values of a citizen who believes his company has a social obligation to foster civic unity at some margin.


These tradeoffs and the underlying philosophy suggest two problems for Facebook. First, the traditional problem of drawing lines. Racial and religious invective divides society and thus may be removed from the platform. Easy choices, you might think. But consider harder questions. Many people found The Bell Curve by Charles Murray and Richard Herrnstein racially offensive; they especially deplored its treatment of IQ and race, and Facebook’s Community Standards specifically preclude negative mentions of either. Should speech favoring that work be removed? On the other hand, a couple of years ago the prominent law professor Mark Tushnet argued that a President Hillary Clinton should treat conservatives and Republicans as Germany and Japan were treated after 1945 (“…taking a hard line seemed to work reasonably well in Germany and Japan after 1945.”). Given that “taking a hard line” toward Germany after 1945 arguably led to the deaths of at least 500,000 people, should speech like Tushnet’s recommendation be banned from Facebook going forward? It will be hard to draw these lines consistently at scale while avoiding the appearance of political bias.


Second, Facebook’s aspirations may conflict with the expectations of investors. Zuckerberg says Facebook research indicates that people want to engage with borderline content. If Facebook is a business, and businesses give customers what they want, why make it harder for customers to get the permitted content they want? More generally, Facebook managers may be mistaken about “borderline content” and about their audience. The economist Robin Hanson recently noted that ordinary people “are more interested in gossip and tabloid news than high-status news, they care more about loyalty than neutrality, and they care more about gaining status via personal connections than via grand-topic debate sparring. They like wrestling-like bravado and conflict, are less interested in accurate vetting of news sources, like to see frequent personal affirmations of their value and connection to specific others, and fear being seen as lower status if such things do not continue at a sufficient rate.” I admire Zuckerberg’s desire to improve public discourse. But how widely shared is that ambition? Does it reflect the social norms of Facebook users? If not, should the CEO’s hopes trump his customers’ wants?


A final point. Much has been made of liberal bias at Facebook. Zuckerberg himself has noted that the environs of Menlo Park are quite left-leaning. It’s also true that many on the left do emphasize community over the individual as a matter of philosophy. But the community values mentioned in Facebook’s post are not necessarily those of the left. Conservatives have, at various times, argued for government action to protect community values against noxious speech. They have tended to lament divisions and praise the larger social whole (think of their view of patriotism and “our country”). Facebook’s idea of community may be either left, or right, or neither. What it cannot be is consistent with a philosophy that always accords free speech priority over social unity.