I submitted the following public comment to the Oversight Board regarding their review of Facebook’s decision to indefinitely suspend President Donald Trump.
2021-001-FB-FBR Public Comment
Will Duffield, Policy Analyst, Cato Institute
This public comment addresses deficiencies in Facebook’s initial justification for its suspension of President Trump and examines how the current broad application of the newsworthiness principle to politicians undermines the legitimacy of Facebook’s Community Standards.
Facebook had ample justification to suspend President Trump after the Capitol riot under its prohibition on the incitement of violence. Rioters said their actions were justified by claims that the election results were illegitimate, the product of an elite conspiracy to deny President Trump a second term. President Trump continued to endorse claims of a stolen election even while condemning the day’s violence. Given the demonstrated capacity of these claims to inspire violence (indeed, how else should one respond to the usurpation of democracy?), Trump’s persistence in echoing them in his capacity as president would have justified suspending him under policies against speech that incites violence.
While Facebook’s incitement policy is narrowly tailored to calls for violence, when Facebook expanded its election integrity measures in the weeks after the riot, it did so to “stop misinformation and content that could incite further violence.” While new prohibitions on “stop the steal” content were justified by Facebook’s policy against Coordinated Harm, these restrictions limited a particular sort of claim that Biden’s election was illegitimate, on the grounds that it could incite further violence. A similar judgment about the likely effects of President Trump’s stolen election claims would therefore seem best justified by Facebook’s prohibition on the incitement of violence.
Instead, Facebook relied on its Dangerous Individuals and Organizations policy to justify Trump’s indefinite suspension. While the Capitol riot fits within Facebook’s definition of a violating event, the president’s posts did not explicitly praise the riot or rioters. Crucially, any ongoing risk associated with Trump’s Facebook account was rooted in his claim that his loss in the 2020 election was illegitimate. It was this claim, not alleged praise or support of the widely condemned riot, that risked inciting further attempts to prevent President-elect Biden from taking office.
Does this mean Facebook should maintain Trump’s suspension, or make it permanent, now that he has left office? Trump’s status as president mattered. The president’s speech carries unique authority, and public statements may be made in concert with official orders. After January 6th, while Donald Trump still occupied the Oval Office, many feared he would use social media to organize an extralegal attempt to remain president. Now that he has left office, and Joe Biden is president, this eventuality has been foreclosed. While Donald Trump still maintains a prominent position in Republican politics and the minds of his voters, he lacks the ability to credibly contest the presidency.
Thus, while exigency and Facebook’s prohibition on speech that incites violence may have justified the initial ban, its maintenance would be punitive rather than preventative. Given that Facebook did not previously bar claims of a rigged or stolen election as incitement, it would seem procedurally deficient to render Trump’s current suspension permanent. This is not to say that Donald Trump has not broken Facebook’s rules. He has, repeatedly. However, Facebook has long tolerated his violations under its newsworthiness policy.
Since 2016, Facebook has allowed content it subjectively deems newsworthy to remain on the platform, even if it would otherwise violate Facebook’s rules. There are instances in which this policy makes a great deal of sense. Images such as “Napalm Girl,” a photo of a nude, burned child from the Vietnam War, are moving illustrations of the horrors of war precisely because of their disturbing content. However, when this policy is applied to the violative speech of public figures or elected officials, it amounts to making some users more equal than others. In a 2019 Newsroom post, Facebook VP of Global Affairs Nick Clegg writes, “we generally allow [politicians’ speech] on the platform even when it would otherwise breach our normal content rules.” If a speaker’s status as a government official or celebrity makes their every utterance newsworthy, the newsworthiness exception serves as a formalization of Donald Trump’s claim that “when you’re a star, they let you do it.” It is important that public access to information not become a fig leaf for power.
Indeed, the fact that this privilege has been balanced against the risk of incitement provides one more reason to treat Facebook’s initial suspension of President Trump as an attempt to prevent the incitement of further violence. In the 2019 post, Clegg states that “content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value.”
However, this two-tiered system creates problems not easily ameliorated by sudden suspensions down the line, particularly when these problems stem from deeply held beliefs cultivated over months or years. Allowing some subset of Facebook users to systematically ignore Facebook’s rules undermines the legitimacy of the rules. If the violation of Facebook’s rules by private users with small followings may lead to harm, breaches by public figures with large followings will likely lead to greater harm. Subjectively violable rules are incredibly hard to justify.
Furthermore, while there is a public interest in hearing what our elected representatives have to say, that interest does not extend to affording politicians greater speech rights than others vis-à-vis Facebook’s rules. The Senate has sanctioned members for “disorderly language” in debate since America’s founding. So long as the rules are applied fairly, limiting the sort of language that politicians may employ on Facebook does no harm to the public interest. Unfortunately, the current subjectively applied newsworthiness exception makes it impossible to establish that any set of rules is being uniformly applied to politicians or heads of state. A content-based, rather than speaker-based, model of newsworthiness would improve this situation. A more explicit rubric for making newsworthiness determinations would improve it further.
Finally, Facebook, and the Oversight Board, must avoid adopting the mistaken belief that the world can or should be controlled through content moderation. While the intention behind these efforts is noble, in attempting to ameliorate state failures to physically secure the Capitol, moderators adopted duties that require traditionally unacceptable tradeoffs between safety and voice. While the state cannot respond to budding insurrection by prohibiting provocative speech, mass gatherings, travel, hotel bookings, and the advertising of arms, moderators across a host of major social media platforms did so.
It is one thing to attempt to prevent the misuse of a social media platform. It is another to attempt to foreclose real-world harms through denial of access to a social media platform. To the extent that platforms, or their overseers, attempt to do so, they will adopt unrealistic duties that mask unpleasant tradeoffs.