Over the past two decades, social media has drastically reduced the cost of speaking, allowing users the world over to publish with the push of a button. This amazing capability is limited by the fact that speakers do not own the platforms they increasingly rely on. If access to the platforms is withdrawn, speakers lose the reach that social media grants. In America, government censorship is limited by the First Amendment. Nevertheless, seizing upon the relationship between platforms and speakers, government officials increasingly demand that platforms refrain from publishing disfavored speech. They threaten platforms with punitive legislation, antitrust investigations, and prosecution. Government officials can use informal pressure — bullying, threatening, and cajoling — to sway the decisions of private platforms and limit the publication of disfavored speech. The use of this informal pressure, known as jawboning, is growing. Left unchecked, it threatens to become normalized as an extraconstitutional method of speech regulation. While courts have censured jawboning in other contexts, existing judicial remedies struggle to address social media jawboning. Amid the opacity and scale of social media moderation, government influence is difficult to detect or prevent. Ultimately, congressional rulemaking and the people’s selection of liberal, temperate officials remain the only reliable checks on this novel threat to free speech.

Introduction

What Is Jawboning?

Jawboning is the use of official speech to inappropriately compel private action. Jawboning occurs when a government official threatens to use his or her power — be it the power to prosecute, regulate, or legislate — to compel private actors to take actions that the official could not lawfully take directly. Jawboning is dangerous because it allows government officials to assume powers not granted to them by law. The capriciousness of jawboning is also cause for concern. Individual officials can jawbone at will, without any sort of due process, simply by opening their mouths, taking up a pen, or tweeting.

Government officials’ demands exist on a continuum. They may be more or less specific and accompanied by more or less severe threats. Colloquially, jawboning is used to describe inappropriate demands made of private actors by government officials. However, as a matter of law, jawboning requires an explicit threat. Government threats, but not government requests, can transform private conduct into government action. The more specific the demand, the easier it is to identify coerced intermediary action and affected speakers. This standard makes it hard to challenge vague threats or contest platforms’ decisions to remove speech because they “know what’s good for them” in light of political expectations. Courts have censured police and prosecutors for threatening speech intermediaries with prosecution for their carriage of lawful, but unwanted, speech.1 However, suits treating intermediaries as state actors because they received vague threats or mere requests have failed.2

Government officials are clearly engaged in jawboning when they back demands for private action with threats, but the line between demands and requests is blurry, and often subjectively drawn. What constitutes a threat is similarly contested. This paper focuses on the use of jawboning to control speech on social media in ways prohibited by the First Amendment. With some exceptions, government is constitutionally prohibited from censoring Americans’ speech. Nevertheless, government can suppress disfavored speech by dissuading intermediaries, such as publishers or telephone companies, from carrying it. Threats can be used to compel other sorts of private action too. Nearly anything that can be achieved via legislation — and many things that cannot — can be accomplished by bringing informal pressure to bear on the right middlemen. Although courts have identified and censured jawboning in the past, it has been given a new life in the internet age.

Jawboning in the Past

The term “jawboning” was first used to describe official speech intended to control the behavior of businessmen and financial markets. John Kenneth Galbraith noted that the activities of the World War II Office of Price Administration and Civilian Supply were called jawboning. He wrote: “legislative authority was lacking, and only verbal condemnation could be visited on violators.… to describe such oral punishment, the word jawboning entered the language.”3 In the 1960s and 1970s, presidents struggled to control price inflation. In addition to imposing legal price and wage controls, presidents attempted to cajole businesses into refraining from raising prices. President Kennedy used threats of Department of Justice and Federal Trade Commission investigations and blacklisting from government contracts to dissuade steel producers from following through with a proposed price hike.4 When it entered the English lexicon, jawboning was a derisive reference to Samson’s vengeance on the Philistines in the book of Judges, where he proclaims, “with the jaw of an ass have I slain a thousand men.” Because the 1970s were a time of greater biblical literacy, President Carter’s televised admonishments were readily likened to Samson’s jawbone, with the nation’s bankers and businessmen as its victims. Writing in Barron’s, Thomas G. Donlan recalls, “It was said of Jimmy Carter, as of other presidents and their tame economists, that they were like Samson in the Bible, because they could slay 10,000 businesses with the jawbone of an ass.”5

The “jaw” in jawboning refers to the use of speech to compel private activity. In most cases, any threatened government action is punitive and unrelated to the demand, and the demand cannot ordinarily be fulfilled by government. Government cannot directly prevent prices from rising, but it may be able to discourage businesses from raising their prices by threatening to deny government contracts to noncompliant firms.

Whether jawboning threatens liberty depends on the nature of the government demand, who it is aimed at, and what they are asked to do. On the anodyne end of the spectrum, in foreign policy, speech that is deemed to be jawboning is often indistinguishable from diplomacy. In 2004, presidential candidate John Kerry criticized President Bush for failing to follow through on his promise to jawbone foreign oil producers to lower prices:

Mr. McAuliffe, Mr. Vilsack and Mr. Kerry each cited a comment Mr. Bush made in 2000 while campaigning in New Hampshire, when he said that as president, he would “jawbone” leaders from Saudi Arabia and other oil-producing nations to pressure them to expand oil production.6

More recently, in the opening months of the coronavirus pandemic, the Federal Reserve jawboned markets by pledging to purchase corporate debt, if necessary, rather than actually purchasing corporate debt. Whether or not it would have followed through, the Fed’s stated willingness to purchase the debt induced private buyers to reenter the market.7

Financial jawboning is not a threat to free speech, but it does raise concerns about the truth of government speech, procedural due process, and the injection of short-term political concerns into economic policy. However, when aimed at unwanted speech, jawboning directly threatens Americans’ expressive freedoms.

Jawboning Today

It is one thing for the president to engage in browbeating about higher prices, as Ford, Nixon, and Carter did. It is another for the president to demand that social media firms remove the lawful, albeit disfavored or false, speech of American citizens. A recent spate of demands by the Biden administration aimed at Facebook includes all the traditional elements of jawboning and illustrates how jawboning can be used to pressure social media platforms.

In July 2021, President Biden accused Facebook of “killing people” by hosting speech questioning the safety and efficacy of coronavirus vaccines. Jen Psaki, Biden’s press secretary, insisted that “Facebook needs to move more quickly to remove harmful violative posts,” and called for cross-platform action, saying “you shouldn’t be banned from one platform and not others for providing misinformation.”8 Surgeon General Vivek Murthy issued an advisory on health misinformation, including eight guidelines for platforms.9 On its own, the advisory would have been inoffensive, but statements by other members of the administration suggested sanctions for noncompliant platforms. White House communications director Kate Bedingfield completed the jawboning effort during a Morning Joe interview. Prompted by a question about getting rid of Section 230, she replied, “we’re reviewing that, and certainly they should be held accountable, and I think you’ve heard the president speak very aggressively about this …”10 By gesturing at changes to the intermediary liability protections that social media platforms rely on, Bedingfield added a vague threat to the administration’s demands.

The Biden administration’s demands of Facebook included all the traditional elements of jawboning. The administration requested that Facebook, a private speech platform, remove the accounts of particular users. The executive has no authority or ability to police misinformation on Facebook, but Facebook can remove what it wants to. By raising the specter of changes to, or the repeal of, Section 230, the Biden administration made a roundabout threat. Repealing Section 230 would not make vaccine misinformation unlawful, but it would harm Facebook by exposing it to litigation over its users’ speech. By demanding the removal of misinformation and threatening repeal, the administration sought to bully Facebook into removing speech that the government couldn’t touch.

Even without the threat to change Section 230, the administration’s insistence that Facebook remove anti-vaccine speech might be seen as jawboning because of the executive’s vast regulatory powers. There are many ways the administration can potentially punish Facebook. The Federal Trade Commission is currently pursuing an antitrust case against Facebook, and executive branch agencies can harm the platform in myriad other ways. Everything from workplace discrimination claims to environmental review of cable deployment plans gives the administration opportunities to interfere with a noncompliant Facebook. Absent a threat, this potential for interference is unlikely to sustain a First Amendment claim, but it can certainly influence platforms’ behavior. As a result, we can think of the potential for interference as an aspect of colloquial, if not legal, jawboning.

The administration was specific about whose speech it wanted removed. Jen Psaki echoed a report by the Center for Countering Digital Hate titled “The Disinformation Dozen,” saying “There’s about 12 people who are producing 65% of anti-vaccine misinformation on social media platforms.”11 The report identifies a dozen accounts allegedly responsible for the lion’s share of vaccine misinformation on Facebook.12 While the claims about vaccines identified in the report are almost certainly false, they are the constitutionally protected expression of 12 Americans.

Because specificity implicates the speech rights of particular Americans, the administration’s demands are even more clearly an instance of jawboning. While government may criminalize certain categories of speech, the president has never been allowed to single out particular speakers for suppression. Thus, the particularity of his administration’s demands places them squarely in the category of actions that government cannot lawfully accomplish for itself.

The administration’s push contains most traditional elements of jawboning — a specific demand by government officials for something government can’t do itself, aimed at a private firm, and backed by a threat, albeit a somewhat vague one. The episode illustrates how government can attempt to employ private intermediaries in censorship. It is a high-profile example of a growing trend. But this informal bullying approach to internet speech governance did not appear out of thin air. It is a response to abundant digital speech bound by the letter, but not the spirit, of the Constitution.

Why Is This Happening?

In order to fully understand, and hopefully limit, social media jawboning, it is important to know why it happens. Around the world, the internet, and social media in particular, has dramatically lowered the cost of speaking, allowing more people to talk more about more things. University of California, Los Angeles, law professor Eugene Volokh calls this phenomenon “cheap speech.”13 Some newly empowered speakers discuss topics that are disfavored or taboo within their societies, thus spurring demands for censorship.

In many countries, this deluge of cheap speech has been met by government censorship. In Turkey, intermediaries can be ordered to remove content that “offends Turkishness.”14 This trend is not limited to autocratic countries — Germany’s Network Enforcement Act (NetzDG) requires platforms to remove content that violates local hate-speech laws.15 However, in the United States, the First Amendment prohibits government censorship of unpopular views. Cheap speech has provoked censorship demands in America, just as it has elsewhere. But, because of the First Amendment, these demands cannot be satisfied by legislation.

The lack of constitutional support has not dimmed demands for censorship. Instead, these demands are expressed as jawboning. The modern internet is an internet of intermediaries. Few internet users operate their own servers or websites. Instead, they rely on social media platforms to host their speech. Rather than trying to pass laws requiring social media censorship, which would be struck down by the courts, some politicians have learned to lobby social media firms directly. They often appeal to platforms’ sense of civic responsibility, or claim that some disfavored speech violates their community standards. When politicians threaten to punish firms that fail to heed their requests, this lobbying becomes jawboning. Recognizing jawboning as an outlet for extralegal censorship demands doesn’t excuse it, but it does explain why jawboning has become so prevalent in the United States. Within the confines of our Constitution, jawboning is the path of least resistance for censorship demands. Courts should take jawboning seriously precisely because it is one of the few effective methods of censorship currently available to government officials.

Jawboning is not the only response to cheap speech that deputizes intermediaries to do what government cannot. Legislation that exposes intermediaries to liability for hosting disfavored speech also attempts to leverage platforms’ power as gatekeepers. Targeted changes to intermediary liability protections may be more constitutionally permissible than jawboning, but they are another outlet for censorship demands that government cannot satisfy directly. The Stop Enabling Sex Trafficking Act, or SESTA, was ostensibly passed to combat sex trafficking but exposed platforms to liability for hosting a variety of prostitution-related speech, prompting platforms to remove it.16 Other proposals would expose platforms to liability for claims related to firearms advertising or whatever the Secretary of Health and Human Services deems “health misinformation.”17 Government cannot prohibit this speech or prohibit platforms from hosting it. However, by exposing intermediaries to liability specifically for claims relating to disfavored speech, these proposals would raise the relative cost of hosting such speech. Like jawboning, targeted changes to intermediary liability protections are a constitutionally constrained response to cheap speech.18 Although both reactions are responsive to constitutional limits, they nevertheless threaten free expression.

Why Social Media Jawboning Is a Problem

Because jawboning is the most readily available and practical method of internet censorship open to American government, there is a real danger that it will become a common method of informal speech regulation. Americans are deeply divided on issues such as gun control and health misinformation, so it is hard to gather a majority that favors any of the specific intermediary liability changes described above. However, individual legislators and state officials may freely dedicate as much of their time and energy as they wish to browbeating platforms. Politicians cannot legislate unilaterally, but they can jawbone without broad support. Because platforms make moderation decisions quickly and rarely explain the reasoning behind their decisions, it can be hard to determine when jawboning is successful. At scale, individual posts or users don’t matter much to platforms, or to their bottom lines. Regulatory changes would cost platforms more than any one user might add to revenue. As a result, even unrealistic or vague government threats can prompt platforms to remove speech. Jawboning isn’t limited to legislators and threatened legislation, although legislative threats are often more visible than others. When government presides over antitrust cases at the state and federal levels and doles out fiber-optic subsidies and other lucrative government contracts, there will always be some lever that officials can threaten to pull. All this makes jawboning the most effective method of internet censorship in America. By working through intermediaries, government can suppress speech quickly, without broad support, and potentially without alerting anyone to its involvement. Normalizing jawboning as a means of censorship threatens free speech in a number of ways.

Constitutional Problems

Jawboning any speech intermediary — whether it is an analog printer or a digital platform — essentially evades the First Amendment’s restrictions on government censorship. The First Amendment prohibits government or public officials from “abridging the freedom of speech, or of the press.” Despite this prohibition, government can prevent the publication of disfavored speech by threatening the intermediaries that carry it. Using threats of prosecution or regulation to compel private speech suppression simply launders state censorship through private intermediaries.

Jawboning often targets particular speakers or categories of speech. This sort of viewpoint discrimination is prohibited by the First Amendment: with the exception of certain unprotected categories of speech, government cannot pick and choose which perspectives are allowed in a given forum.

While jawboning is usually used to censor, government officials can also demand that intermediaries carry speech that the intermediaries object to. Demanding that intermediaries carry particular speech or speakers also entails viewpoint discrimination and might violate prohibitions on compelled speech. In most cases, government cannot demand that a speaker say something they disagree with. The prohibition on compelled speech has generally been extended to publishers and other intermediaries, although common carrier designations and net neutrality rules require certain categories of intermediaries to carry all speech on equal terms.19

Because jawboning can allow government officials to assume a power to censor that is prohibited by the Constitution, some courts have found that official demands violate the First Amendment. However, official requests remain lawful government speech. In Bantam Books v. Sullivan, the Supreme Court held that whether government speech is considered to be illegal jawboning hinges on the presence of a threat.20 A Rhode Island state commission’s notice to book distributors warning them of obscene content was deemed jawboning because it included threats of prosecution. The commission could only levy informal sanctions, but its notices included a reminder of the commission’s “duty to recommend to the Attorney General prosecution of purveyors of obscenity.”21 Writing for the majority in Backpage v. Dart, a more recent case about jawboning credit card companies, Judge Posner explains how courts have interpreted Bantam Books to draw lines between lawful government speech and illegal jawboning:

A public official who tries to shut down an avenue of expression of ideas and opinions through “actual or threatened imposition of government power or sanction” is violating the First Amendment.22

Importantly, under the Bantam Books standard, jawboning does not require any actual exercise of state power. The government need not punish anyone — it must merely threaten to punish. However, it must still make some articulable threat. Drawing the line here makes sense given how jawboning works — a threat need not be carried out in order to affect private speech policies. Moreover, it is nearly impossible to determine when a government threat, rather than private conscience, is responsible for an intermediary’s removal of speech. Posner continues:

The difference between government expression and intimidation — the first permitted by the First Amendment, the latter forbidden by it — is well explained in Okwedy v. Molinari … “the fact that a public-official defendant lacks direct regulatory or decision-making authority over a plaintiff, or a third party that is publishing or otherwise disseminating the plaintiff’s message, is not necessarily dispositive.… What matters is the distinction between attempts to convince and attempts to coerce. A public-official defendant who threatens to employ coercive state power to stifle protected speech violates a plaintiff’s First Amendment rights, regardless of whether the threatened punishment comes in the form of the use (or, misuse) of the defendant’s direct regulatory or decision-making authority over the plaintiff, or in some less-direct form.”23

Within some bounds of reasonableness, it does not matter if the jawboning government official is capable of making good on the threat, merely that the threat is made. However, what exactly constitutes a threat, or renders a request coercive, remains up for debate.

Further muddying the waters, Bantam Books is not the last word from the Supreme Court on jawboning. A competing line of jurisprudence gives government officials far greater leeway to jawbone. In Blum v. Yaretsky, the Supreme Court rejected claims that regulations encouraging the transfer of Medicaid recipients made the government responsible for their discharge from private nursing homes. Writing for the majority, Justice William Rehnquist articulated a standard of state action requiring an exercise of government power, rather than a threat to do so:

A State normally can be held responsible for a private decision only when it has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.24

This standard sets a much higher threshold for unconstitutional interference. So long as platforms can make moderation decisions for themselves, a background of government demands and threats can be ignored. Given the importance of free speech to our society and system of government, we should err on the side of protecting it. This means using Bantam Books rather than Blum in jawboning cases. Because content moderation is opaque, it is difficult to identify when moderation decisions are essentially those “of the State,” and therefore state action under the Blum standard. In contrast, under Bantam Books, threats can constitute jawboning even if they can’t be proven to have caused a moderation decision. Nevertheless, as I detail in the “How Have Courts Responded” section, no matter which standard is used, courts struggle to provide effective remedies for jawboning.

Process Problems

Even in countries without America’s exemplary speech protections, jawboning is a problem. Jawboning is not just censorship, but unaccountable censorship. Because jawboning is informal, users receive no notice that their speech has been removed as a result of political pressure, rather than the private judgement of the platform. Jawboning is free of anything resembling due process: a politician or regulator complains, either in public or privately, and the platform censors, potentially in response to the complaint. Stanford law professor Daphne Keller drives home the difficulty of the platform users’ position in the title of her paper “Who Do You Sue?,” which is about the “often messy blend of government and private power behind many content removals.”25 While European speech restrictions may offend American sensibilities, they are nevertheless bound by law. Government must win its case for censorship in a public court, and as a result may face press scrutiny and public backlash. However, when government censors through informal demands made of private intermediaries, speakers have no real opportunities for appeal. In most cases, users cannot determine which authority, public or private, is truly responsible for the decision to remove their speech. However, to the user, the actions of platform moderators are much more visible than those of the government officials making demands.

When government pressures platforms to censor speech, platforms are blamed for its removal. Government officials can demand that Facebook do more to remove symbols of hate, but they face none of the backlash when the platform’s retuned algorithms remove recolored images from World War II.26 In fact, platform acquiescence to jawboning both signals that platforms can be jawboned, inviting similar demands, and prompts calls for regulation to prevent the platform from adopting seemingly jawboned content policies. In either case, platforms are left holding the bag, and platform users suffer under opaque, unaccountable censorship. Politicians face few incentives to refrain from jawboning because they rarely face the blame for jawboned platform moderation.

Opacity and Probabilistic Enforcement

The opacity of social media jawboning sets it apart from more traditional examples. Unlike the singular, binary decision to sever an ongoing commercial relationship, as in Backpage v. Dart or Carlin Communications v. Mountain States, content moderation is happening all the time. Rather than merely make decisions about whether to permit or remove content, platforms may also algorithmically suppress speech and thus limit its audience, or else hide it behind an interstitial warning. These features of platform content moderation make jawboning hard to identify.

Most importantly, content moderation at scale is always imperfect. Rather than perfectly discriminating between wanted and unwanted speech, platform moderators must choose to accept more or fewer false positives. Contemporary social media platforms are huge, hosting hundreds of millions or billions of users. At this scale, it is nearly impossible to enforce platform rules universally or uniformly. Instead, platforms must find an acceptable balance between different kinds of mistakes.

Harvard law lecturer Evelyn Douek calls this paradigm “probabilistic enforcement,” a term that helps to illustrate how moderation can be abused through jawboning:

A probabilistic conception of online speech acknowledges that enforcement of the rules made as a result of this balancing will never be perfect, and so governance systems should take into account the inevitability of error and choose what kinds of errors to prefer.27

If government officials can influence what kinds of errors platforms accept, they can engage in censorship without ever appearing to do so. Because content moderation is never perfect, whether a given piece of content stays up or comes down is a matter of probability. Whether or not a given piece of content breaks a platform’s rules is not always dispositive. In “Probably Speech, Maybe Free: Toward a Probabilistic Understanding of Online Expression and Platform Governance,” USC Annenberg communications professor Mike Ananny gives some examples of the factors that determine the likelihood that some content will be removed:

Platform content moderation is also probabilistic. It is a confluence of likelihoods: did an algorithmic filter trigger a computational threshold to block offensive content, did enough users within a particular period of time flag a sufficient amount of content to cause an account to be suspended, and did third-party content moderators evenly apply platforms’ content standards?28

If government demands can alter how platform moderators view these signals, or where they place thresholds for removal, they can change which speech is removed. Crucially, when government demands shape probabilistic enforcement, its influence is all but impossible to discern from the results.
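
To make this concrete, here is a minimal sketch, in Python, of a hypothetical two-signal removal rule of the kind Ananny describes. The signal names, example posts, and cutoff values are invented for illustration and do not describe any platform’s actual system; the point is only that a modest, unannounced tightening of the thresholds changes which posts come down.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    classifier_score: float  # hypothetical automated-filter score, 0 to 1
    flags_last_24h: int      # hypothetical user reports within a time window

def should_remove(post: Post, score_cutoff: float, flag_cutoff: int) -> bool:
    """Remove a post if either signal crosses its threshold."""
    return post.classifier_score >= score_cutoff or post.flags_last_24h >= flag_cutoff

posts = [Post(1, 0.62, 3), Post(2, 0.71, 1), Post(3, 0.45, 3), Post(4, 0.58, 5)]

# Thresholds the platform chose for itself.
baseline = [p.post_id for p in posts if should_remove(p, score_cutoff=0.70, flag_cutoff=10)]

# The same pipeline after a modest, unannounced tightening of both thresholds.
tightened = [p.post_id for p in posts if should_remove(p, score_cutoff=0.60, flag_cutoff=5)]

print(baseline)   # [2]
print(tightened)  # [1, 2, 4]
```

Nothing in the second run’s output identifies the threshold shift that produced it, let alone any demand that prompted the shift.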

Imagine a scenario in which a set of 10 posts are flagged for review and platform moderators find that 5 of those posts are in violation of platform rules. In response to officials’ demands that it remove hateful content more quickly, the platform shortens its review time. With less time to review each piece of content, reviewers are less able to appreciate the context of each post. Reviewing a similar set of posts more quickly, moderators find 7 that are in violation. (They could also find fewer that are in violation, or simply find a different set of 5.) However, it is difficult to tie any change in moderation output to the officials’ demand. A demand to remove content more quickly does not implicate particular pieces of content, so from the outside, we cannot know whether, or to what extent, the platform has complied with the demand.
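
This scenario can also be rendered as a toy simulation. Everything in it is an assumption made for illustration: treating hurried review as nothing more than lower moderator accuracy is a stand-in for whatever actually changes when review time shrinks, and the violation rate is invented. The sketch shows only that a process-level change alters the spread of moderation errors without implicating any particular post.

```python
import random

def review(posts, accuracy):
    """Count how many posts a moderator judges to violate the rules.

    posts: list of booleans (True means the post really violates the rules).
    accuracy: assumed probability of a correct judgment; hurried review is
    modeled here, purely for illustration, as lower accuracy.
    """
    removed = 0
    for is_violation in posts:
        correct = random.random() < accuracy
        judged_violation = is_violation if correct else not is_violation
        if judged_violation:
            removed += 1
    return removed

# Ten flagged posts, exactly five of which actually break the rules.
flagged = [True] * 5 + [False] * 5

for label, accuracy in [("careful review", 0.95), ("hurried review", 0.70)]:
    counts = [review(flagged, accuracy) for _ in range(10_000)]
    average_miss = sum(abs(c - 5) for c in counts) / len(counts)
    print(f"{label}: average distance from the true count of 5 = {average_miss:.2f}")
```

Hurried review produces counts that wander further from the true five (sometimes seven, sometimes four, sometimes a different set of five), yet no single run reveals whether a demand to move faster lay behind the change.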

Instead of an altered review timeline, officials could just as easily demand more-restrictive algorithmic filtering or changes to user flagging thresholds. What matters is that by making demands of platforms’ moderation process, officials can invisibly shape their output. Process demands can affect a much broader set of content than specific removal requests. Rather than discriminating against particular speakers, changes to the moderation process can introduce new biases platform-wide. As a result, jawboned process changes may be more difficult for users to litigate than more traditional jawboning, which tends to demand the severance of particular commercial relationships. It would be difficult for users to link the suppression or removal of their speech to specific changes to platforms’ moderation processes. Challenges to process jawboning might have to come from platforms protecting their editorial discretion. The difficulty of bringing suit makes process jawboning an even greater threat to free speech. More than merely affecting existing speech, this invisible tailoring affects which speech platforms will host in the future.

The effects of jawboning are similarly difficult to detect when government identifies or complains about speech that is already prohibited by platform rules. Because content moderation is imperfect, a great deal of content prohibited by platform rules is never noticed or removed by moderators.29 If, by using the bully pulpit of their public office, officials draw moderators’ attention to speech that the moderators would have otherwise ignored or missed, the officials have effectively caused the speech to be removed. Yet from the platforms’ perspective, this feature can sometimes be helpful, allowing them to prioritize areas of pressing concern. The line between notification and demand can be very blurry, and it is difficult to determine how platform policies might have been enforced differently in the absence of government prodding.

In both cases, either by influencing the moderation process or by prioritizing the removal of particular prohibited content, government takes advantage of the opacity that surrounds content moderation at scale. Content flagging and changes to the enforcement process occur behind closed doors, so the effects of government influence are all but impossible to detect. By working through an essentially unaccountable process, government censorship can become similarly unaccountable. In the “Process Restriction Demands” section I examine process demands made by senators Feinstein, Leahy, and Markey in greater detail.

Normalizing Jawboning

The contemporary revival of jawboning began in 2017, in the wake of the election of President Donald Trump. Spooked by the seeming improbability of his victory, many Democrats blamed social media. They called for platforms to prevent Russian meddling in American elections and threatened regulation if platforms failed to solve the problem.

It has since become increasingly normal for senators and representatives on both sides of the aisle to make demands of companies. Not every demand is accompanied by a threat, and some demands are more general than others. Yet all these requests occur in the context of debate about how or whether to regulate technology firms. Many of the remedies discussed are explicitly punitive or would upend social media platforms’ business models. Nothing prevents Congress from debating and passing legislation regulating the business practices of social media platforms. There are no First Amendment barriers to general legislation that would result in less speech or fewer speaking opportunities for Americans. However, when this discussion is paired with demands that platforms remove constitutionally protected speech, it should at least raise eyebrows.

When, in a 2017 hearing on Russian interference in American elections, Sen. Dianne Feinstein (D‑CA) told platform representatives, “You’ve created these platforms and now they are being misused, and you have to be the ones to do something about it, or we will,” she didn’t propose immediate legislative action.30 Instead, she cast legislation as a penalty that could be avoided if platforms did “something.” In this case, the something likely to forestall legislation involved limiting Americans’ access to Russian speech and inevitably removing some American speech alongside Russian propaganda. While government may not prevent Americans from receiving foreign speech, social media platforms are not bound by the First Amendment.31 However, platform content moderation is always imperfect. Committing to removing Russian disinformation masquerading as right-wing American speech would mean inevitably removing some right-wing American speech by mistake.

Over the past four years, platforms have largely complied with Feinstein’s demand and Congress has only passed one major bill affecting social media. Whether platforms’ new limits on foreign advertising and disinformation are a result of her demands will remain unknowable. Whether the lack of legislation is the result of perceived acquiescence or partisan gridlock is similarly impossible to discern. Nevertheless, jawboning about the 2016 election contributed to perceptions of bias against conservatives.

Twitter responded to the demand that it “do something” by adopting a policy prohibiting “distributing hacked materials.” The hacked materials policy was adopted alongside rules making it easier to ban troll networks, and both measures were framed as responses to the Russian Internet Research Agency’s use of Twitter. A Twitter year-end review said that the policies “allow us to take more aggressive action against known malicious actors, such as the Russian Internet Research Agency.”32 Although the hacked materials policy was implemented as an “election integrity” measure in the run-up to the 2018 midterms, it initially received little attention from the press.33 In June of 2020, Twitter banned the leak clearinghouse Distributed Denial of Secrets for hosting BlueLeaks, a trove of hacked police files.34 While the incident was covered by technology reporters, it had little broader political salience.

Long ignored, Twitter’s hacked materials policy became the focus of national politics when the platform blocked the sharing of a New York Post story containing materials taken from Hunter Biden’s laptop weeks before the 2020 election. At the time, Twitter had no good way of determining if the laptop’s contents had been leaked or hacked, or if they were even real. Disinformation researcher Clint Watts describes the difficulty of Twitter’s position: “If they didn’t take that down, and it turns out to be a foreign op, and it changes the course of the election, they’re going to be right back testifying in front of Congress, hammered with regulation and fines.”35

Yet in this case, Twitter’s election integrity measure fueled suspicions of bias and gave rise to a “lost cause” narrative in which Trump would have won reelection if the New York Post story had been more widely distributed on Twitter. Being harangued and threatened with regulation is what spurred Twitter to adopt the hacked materials policy in the first place, and its use brought Twitter right back before Congress. In a Senate Judiciary hearing titled “Breaking the News: Censorship, Suppression, and the 2020 Election,” Twitter was browbeaten by both sides. Republicans excoriated the company for exercising undue influence in the election, while Democrats demanded that Twitter do more to suppress claims that the election was stolen.36 By this point, nearly everyone had some demand of the platform.

The creation of, and backlash to, Twitter’s hacked materials policy illustrates both how jawboning puts platforms in a no-win situation and how routine jawboning has become since the 2016 election. In 2017, Feinstein presented her demands as a way to forestall legislation. Four years later, jawboning is used as a replacement for legislation by members on both sides of the aisle.

Tracking Jawboning

To better understand how jawboning is used, I have gathered 62 examples of demands that government officials have made of social media platforms. Most examples are drawn from eight congressional hearings about social media platforms held over the past four years. Hearing transcripts and the full list of jawboning examples are available in Annex A (https://www.cato.org/sites/cato.org/files/2022-09/policy-analysis-934-annex.pdf) and Annex B (https://infogram.com/annex-b-social-media-hearing-timeline-1hzj4o3ox7llo4p?live).37 Some examples come from media reports about government officials’ speech or from congressional press releases. This set of examples is not a representative sample — an unknown amount of jawboning also occurs in private. I may have included some demands that others would have excluded, or failed to include some speech that others would have deemed jawboning. However, this set of examples includes a wide variety of demands, illustrating the sorts of things that government officials want platforms to do. Not every demand is paired with a threat, but all the demands are made in the course of discussions about potential social media regulation.

Figure 1 shows that within the set, examples are clustered in the latter half of the recorded period. The diversity of demands and demanding officials also increases over time. In 2016 and 2017, a small number of politicians demanded information about platform policies and the removal of Russian disinformation. From 2018 on, more politicians made demands and the scope of their demands grew to encompass almost every aspect of platform speech governance. While this set of demands is not necessarily representative of the broader universe of jawboning, it seems more likely that the use of jawboning has grown than that it has simply moved from private to public venues.

Figure 2 shows that almost all demands were aimed at Facebook, Twitter, or Google, and many were made of all three at once. However, some demands have been made of other platforms, such as Amazon, Netflix, and Squarespace. Their inclusion helps to show that it is not necessarily the social aspects of social media, but its role as a speech intermediary, that makes it a target for jawboning. These other intermediaries’ significance to the publication of books, movies, and websites makes them useful for controlling speech. Once their importance is recognized, intermediaries become political footballs: Amazon has received demands to remove some books it carries and carry others that it has removed from its shelves.

While we may not be able to draw definitive conclusions from this set of examples, it includes a diverse array of demands (see Annex A). In the following section, I examine several demands representative of particular styles or categories of jawboning. These examples provide a cross section of how government officials jawbone — in public, at least — and what they hope to accomplish by doing it.

Nearly all of my collected examples are of public jawboning. Most are public statements and some examples are drawn from letters that were posted for public consumption, but only one, the FBI’s “encouragement” of the removal of ostensibly Iranian-run websites, documents a demand that was made in private.38 Even this example was publicly discussed, albeit opaquely, in a later press conference. Much jawboning undoubtedly occurs in private. It is likely to be characteristically different from public jawboning. Private jawboning may be more specific or employed by government actors without public platforms. In response to my inquiries about nonpublic jawboning, a Facebook employee discussed demands by congressional staffers “that were essentially individual constituent service calls.” It is understandable that, given the opacity of private content moderation, some social media users would turn to their representatives for help.

But layering some amount of unaccountable government power on top of private moderation only improves it for a select few. Influential constituents, or staffers themselves, may also demand the removal of disfavored speech. Past a certain point, it is unhelpful to speculate without concrete examples. However, it is necessary to recognize that jawboning needn’t only occur in public, and that in private it may not be bound by the norms that restrain public speech. Private jawboning is particularly concerning because without public scrutiny government officials may bully intermediaries with nearly complete impunity. The only real check on private jawboning is the willingness of platforms to report the behavior publicly, but platforms face strong incentives to refrain from going public. Feuds with elected representatives tend to be destructive of shareholder value.

Before continuing to the examples, it is important to mention one more caveat. Jawboning in Congress is protected by the Speech or Debate Clause. Article I, Section 6, of our Constitution grants members of Congress certain privileges, including immunity from criminal and civil liability for “any Speech or Debate in either House.”39 The Speech or Debate Clause is intended to allow legislators to debate legislation free from any external interference. This doesn’t mean that congressional jawboning isn’t a problem, or that it can’t have a deleterious effect on free speech, but threats made in Congress aren’t legally actionable. However, not every demand made by a member of Congress is protected — the context in which they make their demands matters. I discuss this in greater detail in the “What Can Be Done” section.

Jawboning Styles

Not every instance of social media jawboning fits into the categories described here. However, these examples are typical of contemporary jawboning and illustrate how it has evolved over the past half decade. The chosen categories emphasize how jawboning prompts further jawboning, and how the opacity of platform moderation makes it difficult to appreciate the effects of government bullying.

This is not the only way to categorize jawboning, but it best captures how congressional jawboning has evolved in America. Daphne Keller has created a seven-point continuum of social media jawboning, which works well in the international context.40 When it is applied to my set of American examples, however, almost all of them cluster in the middle of the scale, limiting its usefulness. Keller’s insight that jawboning exists on a continuum remains accurate. American rule of law simply precludes harsher forms of jawboning — the police cannot be sent to raid Twitter’s offices in a show of force. More fine-grained categories are needed to understand how American politicians jawbone social media platforms.

Leading Questions

Among the most common, and least harmful, forms of jawboning are leading requests for information. Politicians or government officials will ask platforms how they are addressing some understood problem. Sen. John Thune (R‑SD) provides an early example of this kind of pressure in a letter that he sent to Facebook in May 2016. Thune asked, “How many stories have curators excluded that represented conservative viewpoints or topics of interest to conservatives?,” and “What steps is Facebook taking to investigate claims of politically motivated manipulation of news stories in the Trending Topics section?”41

Sometimes officials will ask platforms to explain decisions they are unhappy with, signaling disapproval and holding platform representatives’ feet to the fire. In a July 2018 hearing, Reps. Ted Deutch (D‑FL) and Jamie Raskin (D‑MD) pressed platform representatives to explain why Infowars had not been removed. Deutch said, “You recently decided not to ban Infowars. Can you explain that decision?” Later, Raskin asked, “So just explain, what’s happened with Infowars? … Why are they still on Facebook?”42 A month later, YouTube and other platforms permanently suspended Infowars and its host Alex Jones.43 While it is impossible to determine what role the representatives’ questions played in the suspension, their inquiries presume an oversight role where none exists.

It is not the job of Congress to oversee, second guess, or direct the decisions of private intermediaries. Such oversight presumes a role in speech regulation that the Constitution specifically denies Congress. Even when congressmembers are just asking questions, they often do so in an effort to prompt private firms to exercise power that is denied to government.

Asking “what steps have you taken to address X content” is not merely a request for information. It is premised upon the assumption that the identified content is a problem that should be addressed. In most cases, “addressed” implies algorithmic deprioritization or removal.

In one particularly egregious example of this sort of just-asking-questions jawboning, Sen. Marsha Blackburn (R‑TN) asked Google CEO Sundar Pichai if an engineer who had criticized her in leaked internal memos was still employed by the company. “He has had very unkind things to say about me and I was just wondering if you all had still kept him working there.”44 While her request was ostensibly just for information, senators have few legitimate reasons to inquire about the employment status of their critics, let alone those revealed to have criticized them in a private workplace email. If the senator’s inquiry prompted Google to fire the engineer, she might be seen as having punished him with unemployment for his critical speech.

Not all requests for information are so leading, and it is entirely legitimate for Congress to attempt to understand how content moderation works. But when the purpose of a request for information is to prompt a change in platforms’ private speech governance, it constitutes jawboning in its most mild form.

Process Restriction Demands

Demands for a more-restrictive process are the most common form of social media jawboning. They can be general, such as Feinstein’s exhortation, “You’ve created these platforms and now they are being misused, and you have to be the ones to do something about it, or we will,” or much more specific, relating to particular platform policies or moderation processes.45

Some process demands make claims on platform moderation resources. Sen. Patrick Leahy (D‑VT) has asked Facebook to ensure the prompt removal of hate speech in Myanmar: “Will you dedicate resources to make sure such hate speech is taken down within 24 hours?”46 Of course, Facebook’s resources aren’t unlimited, so directing more resources to hiring Burmese-language moderators or training AI to appreciate local slang means directing fewer resources to other projects. Platform content moderation is a product. The more its priorities and development are shaped by government demands, the less private it becomes.

Other process demands involve alterations to platform features or policies. Some platform features are seen to have unwanted effects. Politicians have blamed recommendation algorithms for both right-wing radicalization and anti-conservative bias. Partisans often imagine that if platforms were redesigned according to their whims, more people would listen to better speech, leading to favorable political outcomes. Weeks before the 2020 election, Sen. Ed Markey (D‑MA) requested a pause on Facebook group recommendations until after the certification of election results. “Mr. Zuckerberg will you commit to stopping all group recommendations on your platform until US election results are certified? Yes, or no?”47

Markey was concerned that Facebook Groups could be used to quickly build networks to contest the legitimacy of election results. Although his worries were well founded (the Stop the Steal campaign made use of Facebook groups), speech questioning election results is, for better or worse, part of democratic politics.48 By prodding Facebook to remove group recommendations, he attempted to deny his political opponents an avenue for popular mobilization. This is not an action the American government can traditionally take, even given the potential for civic strife that is inherent to contested elections.

While Facebook is free to design its platform as it wishes, certain design arrangements may be seen to benefit one party over the other. Partisans of different stripes are likely to use the platform in different ways, so some disparities will inevitably result from even neutral rules. For instance, liberals use platform reporting features more often than conservatives.49 However, if Facebook alters its platforms’ design in response to political pressure, whatever disparities in outcome these alterations introduce will be the result of government’s thumb on the scale, not private choice. While any legislation that sought to prevent Facebook from recommending groups to its users would likely face insurmountable constitutional barriers, by leaning on Facebook to cease recommendations, Markey sought to accomplish through speech what he could not achieve via legislation.

In a surprise to both Markey and dedicated Facebook-watchers, Zuckerberg responded with an announcement that Facebook had already halted recommendations for political and social issue groups. BuzzFeed’s confirmation of the policy change — two days later via a Facebook spokesperson — illustrates how difficult it can be to determine how and when platform rules or processes have changed:

Mentioned in passing by CEO Mark Zuckerberg during a Senate hearing on Wednesday, the move was confirmed to BuzzFeed News by a Facebook spokesperson. The company declined to say when exactly it implemented the change or when it would end.50

Platforms change their policies and processes often and with little fanfare. In some cases, moderators refuse to reveal precisely what changes they have made, ostensibly out of concern that bad actors will game the rules.51 But because politicians frequently make demands of platform processes, and because their demands often overlap with concerns that are voiced by civil society and the media, the opacity of platform rulemaking makes it impossible to tell which source of pressure has prompted a change. In this climate, it can be hard to trust that Facebook made a truly private decision to suspend group recommendations.

Must-Carry Demands

While we usually think of social media jawboning as being used to censor speech, it can also be used to compel intermediaries to carry speech they otherwise would not. Sometimes officials will demand that platforms pledge not to remove content from a particular speaker or about a particular subject. Must-carry jawboning also includes demands to restore content that a platform has previously removed.

Must-carry demands are sometimes made in response to perceived process restriction demands from across the aisle. If the other side is understood to have unduly influenced a platform’s content policies, jawboning can be seen as a corrective action. President Donald Trump captured this sentiment in a series of May 2020 tweets threatening social media platforms:

Republicans feel that Social Media Platforms totally silence conservative voices. We will strongly regulate, or close them down, before we can ever allow this to happen. We saw what they attempted to do, and failed, in 2016. We can’t let a more sophisticated version of that happen again.52

Jawboning by the president, be it Trump’s must-carry demands or Biden’s insistence that platforms remove anti-vaccine content, is particularly concerning. The executive has many ways of directly interfering in the business of social media platforms.

Some demands blur the line between must-carry demands and process demands. Alleging bias in some platform’s moderation process, these demands expect the platform to look inward and engage in some form of corrective self-criticism. In a 2018 hearing about Google’s data practices, Rep. Darrell Issa (R‑CA) demanded that Google adopt a disparate impact framework and, assuming that any difference in outcomes is evidence of bias, correct differences in search results:

Will you commit to look in the case of potential political bias in all aspects of your very large company, to look at the outcome, measure the outcome, and see if in fact there is evidence of bias using that, and then work backwards to see if some of that can be evened to what would appropriately be the outcome?53

Must-carry demands can be also paired with specific removal requests. Sen. Cory Gardner (R‑CO) provided one example in a hearing held just before the 2020 election: “So it’s strange to me that you’ve flagged the tweets from the President but haven’t hidden the Ayatollah’s tweets.”54 Sen. Tim Scott (R‑SC) echoed this concern, saying “Tell me why you flag conservatives in America, like president Trump … while allowing dictators to spew their propaganda on your platform.”55

Even though they call for speech to be carried rather than removed, all these demands would supplant platforms’ private judgment with that of elected officials. If platforms are bullied into carrying speech that they wouldn’t otherwise host, the platforms’ retransmission becomes compelled speech. Hosting unwanted speech may offend other users or advertisers, harm a platform’s business, and violate its basic rights of conscience. Must-carry demands are made on behalf of some rule-breaking speakers but not others, making them procedurally unfair to other users as well as a violation of platforms’ rights.

Counter-Jawboning

One indication that jawboning is on the rise is its recognition and repudiation by members of Congress. While some of those who complain loudly about the other side’s jawboning have made their fair share of threats and demands, these complaints show that congressmembers are aware of this emerging dynamic.

In her opening statement as ranking minority member in a 2019 Senate Judiciary Subcommittee on the Constitution hearing titled “Stifling Free Speech: Technological Censorship and the Public Discourse,” Sen. Mazie Hirono (D‑HI) said, “We simply cannot allow the Republican party to harass tech companies into weakening content moderation policies that already fail to remove hateful, dangerous, and misleading content.”56 A year later, in a Senate Commerce Committee hearing held just before the 2020 election titled “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?,” other Democrats echoed her concerns. Sen. Richard Blumenthal (D‑CT) called the hearing an attempt to “bully and browbeat the platforms here to try to tilt them towards President Trump.”57 Sen. Brian Schatz (D‑HI) mixed demands for backbone from platforms with warnings of potential complicity in any post-election strife:

Do not let the United States Senate bully you into carrying water for those who want to advance misinformation, and don’t let the specter of removing Section 230 protections, or an amendment to antitrust law, or any other kinds of threats cause you to be a party to the subversion of our democracy.58

The back-and-forth tussle between removal demands, must-carry demands, and demands to ignore must-carry demands makes platform policy a political football. If platforms attempt to meet these countervailing demands, their policies will become unstable and incoherent. The more they respond to jawboning, the more they will be seen as having been captured by one side or the other, thus inviting “corrective” jawboning and legislation.

Raskin articulated this position in a 2018 hearing, suggesting that if jawboning had prompted YouTube to treat right-wing speakers with leniency, Congress would have to “look into” the matter:

Well, look, I’m with Mr. Lieu, which is that you guys are private journalistic entities right now. But if you’re going to be ideologically badgered and bulldozed to take positions in the interest of right-wing politics, then we are going to have to look at what’s happening there, because at that point there’s not viewpoint neutrality.59

As a matter of law, platforms are not required to be viewpoint neutral. This First Amendment stricture applies only to government. However well-intended, Raskin’s suggestion only fuels the perceptions that spur must-carry demands. If congressmembers are concerned about jawboning, they can propose congressional rules prohibiting members from making demands of private firms, but countering jawboning with more jawboning only makes the problem worse. Because partisans cannot be sure if platforms have actually acted on their demands, it is hard to imagine tit-for-tat jawboning leading to any stable equilibrium.

Letters and Specific Removal Requests: Senator Menendez Bans @IvanTheTroll

Elected officials may wield the power of their office to jawbone via letters or emails as easily as they can verbally. Demands made via letter are more concrete than requests mixed with verbal chastisement. They are also much more clearly separate from congressional debate, making them much less likely to receive the Speech or Debate Clause’s protections.

On March 7, 2019, Sen. Bob Menendez (D‑NJ) sent a letter to Twitter CEO Jack Dorsey demanding that Twitter remove links to digital blueprints for 3D-printed guns, citing a court order that did not apply to Twitter. Menendez wrote, “I ask that you take immediate steps to remove such links, as well as the ability to directly message these links from your platform.” He claimed that failing to remove digital blueprint links would violate the law. “Given the court order in effect, if foreign users are able to access the website and the blueprints, the publication of these blueprints violates the law. I urge you to take immediate action to remove the publication of the links.”60

The court order referenced by Menendez temporarily prevented the gun-printing collective Defense Distributed from publishing its designs for an entirely 3D-printed gun where foreigners could access them while the court interpreted changes to export-control laws. It did not bind other designers or designs. Menendez alleged that Twitter user @IvanTheTroll had “tweeted his plans to release blueprints for a 3D-printed AR-15 firearm,” singling him out for enforcement. Unlike the novel Liberator handgun designed by Defense Distributed, the AR-15 design has long been in the public domain, so concerns that foreigners might gain access to it are moot.

In any case, Twitter was not hosting the blueprints themselves. Although Menendez asked Twitter to prevent users from sending links to websites hosting firearms files via private messages, the Defense Distributed court order explicitly allows files to be “emailed, mailed, securely transmitted or otherwise published within the United States.”61 It says nothing about publishing links to websites that host other gun blueprints. Thus, much of what Menendez requested of Twitter clearly limits lawful speech. By casting @IvanTheTroll’s use of Twitter as illegal, or at least “nefarious and potentially unlawful,” Menendez presented the user’s removal as a pressing legal necessity.62

On April 12, 2019, Twitter suspended @IvanTheTroll. On April 29, Twitter informed Menendez that @IvanTheTroll had been suspended for violating “Twitter’s longstanding policy that ‘prohibits the promotion of weapons …’”63 However, the quoted policy governs advertising, and the policy’s page header reads “This policy applies to Twitter’s paid advertising products.”64 It does not apply to user-submitted content.

When contacted by a journalist from The Trace in May, a Twitter spokesperson contended that “accounts sharing 3D-printed gun designs are in violation of the Twitter Rules’ unlawful use policy.” However, at the time, Twitter’s unlawful use policy made no mention of printed firearms, simply prohibiting the “use of our service for any unlawful purposes or in furtherance of illegal activities.”65 Beyond Menendez’s sweeping claims, there was no reason to believe that sharing links to firearms blueprints was unlawful.

Twitter did publish a more specific “Illegal or Certain Regulated Goods or Services” policy that prohibits sharing “instructions on making weapons (e.g., bombs, 3D-printed guns, etc.).” The policy is dated April 2019, but the page wasn’t added to the “General Guidelines and Policies” page until June 6, well after both Menendez’s letter and Ivan’s suspension.66

It is hard to get a clear picture from this mess of shifting post hoc justifications and uncertain policy changes. However, some things are certain. Twitter had no published policy concerning 3D-printed guns before receiving Menendez’s letter or at the time it suspended @IvanTheTroll. Shortly after receiving Menendez’s letter, Twitter banned @IvanTheTroll and prohibited sharing links to gun printing files. While there is no clear proof that Menendez’s assertions of unlawful behavior caused Twitter to ban @IvanTheTroll and change its policies, this was not the first time that senators had asked Twitter to do something about 3D-printed guns.

The previous year, Menendez signed on to a letter authored by Feinstein that urged Twitter and other platforms to prohibit the sharing of gun blueprints. However, this earlier letter did not claim that hosting blueprints or links to them was illegal, and it did not single out a particular user for removal.67 Despite being signed by Senators Feinstein, Menendez, Blumenthal, Markey, and Bill Nelson (D‑FL), it had little effect on platform policies. Menendez’s solo-authored, but far more threatening, letter was followed by a rapid change in platform policy.

This incident illustrates the difficulty of definitively proving that a platform decision is the result of jawboning. Twitter’s multiple justifications for Ivan’s removal and its evolving policy show how vague or mercurial platform rulemaking and enforcement can obscure government involvement. Twitter’s suspension of @IvanTheTroll can only be linked to the senator’s letter because Menendez singled him out for enforcement. Menendez’s more specific, threatening letter was undoubtedly more effective than Feinstein’s softer, more general missive, but its specificity also makes it potentially more actionable.

Unfortunately, even given the specificity of Menendez’s letter, @IvanTheTroll’s suspension wasn’t viewed as an exercise of state power. Writing in Wired, Jake Hanrahan juxtaposed Ivan’s Twitter ban with government action:

His Twitter account was permanently suspended after New Jersey state senator [sic] Bob Menendez lobbied for it to be taken down, but as far as the government and law enforcement goes, things have been mostly quiet.68

“Mostly quiet” is a mischaracterization: Menendez’s letter was itself very real government action that in all likelihood spurred Twitter to permanently suspend Ivan’s account. While Menendez’s demand was unusual in its specificity, the incident helps to show that even when demands are very specific, the intermediation of digital speech makes it hard to prove government involvement.

Allegations of Illegality

Government officials often justify specific removal requests with allegations of illegality. Illegal speech is almost always prohibited by platform policies, so by presenting disfavored speech as potentially illegal, politicians can prompt its removal by platforms. While Section 230 protects platforms from liability for most user speech, it does not apply to violations of federal criminal law. Thus, in some narrow cases, platforms could face liability for failing to remove plainly illegal speech. As a result, politicians are incentivized to exaggerate the likelihood that a court would find the speech they dislike illegal. Framing speech as illegal also places their demands on more legitimate rhetorical footing: it would be wrong for a politician to demand the suppression of constitutionally protected speech, but prodding platforms to remove dangerous or illegal speech is viewed more positively by the press and public.

On a November 5, 2020, episode of his War Room Pandemic livestream, Steve Bannon exhorted Trump to fire National Institute of Allergy and Infectious Diseases director Dr. Anthony Fauci and FBI director Chris Wray before presenting a violent fantasy of what he would do if he were in charge:

I’d actually like to go back to the old times of Tudor England. I’d put their heads on pikes, right, I’d put them at the two corners of the White House as a warning to federal bureaucrats, you either get with the program or you’re gone.69

Bannon’s medieval fantasies may be bloodthirsty and offensive, but they are neither illegal nor a call for violence. To some, however, Bannon’s shock-jock puffery was no laughing matter. In a hearing a few days later, Blumenthal asked Mark Zuckerberg, “How many times is Steve Bannon allowed to call for the murder of government officials before Facebook suspends him?”70 Yet Steve Bannon did not call for the murder of Fauci, certainly not in any immediately threatening or actionable sense.71 Blumenthal incorrectly asserted that Bannon’s speech was outside the bounds of the First Amendment’s protections, saying, “what we’ve seen here are fighting words and hate speech that certainly deserve no free expression protection.” Mark Zuckerberg explained that while Facebook removed the video, it did not remove accounts for a first use of violent rhetoric. Nevertheless, Blumenthal persisted, asking, “Will you commit to taking down that account, Steve Bannon’s account?” When Zuckerberg refused again, Blumenthal announced, after harping on Facebook’s irresponsibility, “I believe that decisive action is necessary, including very likely breaking up Facebook as a remedy.”72 When presenting Bannon’s speech as unlawful failed to sway Zuckerberg, Blumenthal resorted to threatening antitrust action. In this case, at least, an explicit jawboning attempt failed to prompt platform action.

How Have Courts Responded?

Analog Cases

Jawboning is not unique to social media or the internet. Courts have adjudicated pre-internet jawboning claims in Bantam Books v. Sullivan, Blum v. Yaretsky, and Carlin Communications Inc. v. Mountain States Telephone and Telegraph. In these cases, courts developed remedies to curb censorial government demands. However, it remains to be seen if jurisprudence that was created to prevent the bullying of analog intermediaries can stop social media jawboning.

As I discussed in the “Constitutional Problems” section, while courts have agreed that government may not coerce private actors into depriving others of their rights, the Supreme Court has set two quite different standards for prohibited coercion.

In Bantam Books, the Court treated threatened government action as sufficiently coercive to render the threatened party’s subsequent decisions state action. In Blum, however, the Court set a much higher bar, finding government action only when a private choice must be “deemed to be that of the State.”73

However, Blum was not about government demands. Plaintiffs sought to hold a private nursing home provider liable for its reaction to state regulations that were intended to set standards of care and limit costs. If anything, Blum would be more analogous to lawsuits that treat platforms as state actors because of their reactions to legislation. I am unaware of any such litigation, but recent changes to intermediary liability laws could prompt such a claim. The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act (known as SESTA-FOSTA) exposed platforms to great liability for hosting prostitution-related speech. Platforms responded by removing such speech. Litigants might argue that SESTA-FOSTA’s specific impositions of liability transform platforms into state actors, but, like the Medicare regulations at issue in Blum, this would be far from a standard jawboning claim.

Because plaintiffs in Blum tried to find state action in private reactions to legislation, the Blum decision is friendlier to more traditional jawboning claims than some lower courts have appreciated:

Respondents … argue that the State “affirmatively commands” the summary discharge or transfer of Medicaid patients who are thought to be inappropriately placed in their nursing facilities. Were this characterization accurate, we would have a different question before us. However, our review of the statutes and regulations identified by respondents does not support respondents’ characterization of them.74

In the view of the Blum Court, the statutes and regulations governing Medicaid patient care and reimbursement levels were not commands, informal or otherwise. Blum still allowed courts to find state action in affirmative commands, but drawing the line there permits some jawboning that Bantam Books prohibits. Government threats may prompt private action even when they aren’t tied to particular commands; Bantam Books prohibited such unaccompanied threats, but Blum did not. Although Bantam Books was specifically about speech intermediaries, and Blum dealt with the limits of state action more generally, lower courts have applied both to cases involving speech.

While Bantam Books and Blum provide the two prevailing standards for determining the constitutionality of jawboning, Carlin Communications illustrates the difficulty of providing an effective judicial remedy for jawboning, regardless of which standard is used. Further decisions treating Bantam Books as controlling in social media jawboning cases would ensure that courts appreciate the coercive power of merely threatened government action. However, such decisions wouldn’t necessarily provide a remedy capable of rectifying the chilling effects of jawboning social media platforms.

In its 1987 Carlin Communications decision, the 9th Circuit found illegal jawboning, but it couldn’t offer a lasting remedy. The court held that by threatening telephone provider Mountain Bell with prosecution for hosting a sex line run by Carlin Communications, a Maricopa County deputy attorney rendered Mountain Bell’s subsequent removal of the sex line a government action that violated the First Amendment. The court ordered Mountain Bell to reconnect Carlin Communications but allowed Mountain Bell to implement a policy prohibiting adult services.

The deputy attorney advised Mountain Bell to terminate Carlin’s service and threatened to prosecute Mountain Bell if it did not comply. With this threat, Arizona “exercised coercive power” over Mountain Bell and thereby converted its otherwise private conduct into state action.75

Citing both Blum and Bantam Books, the Carlin Communications court found that the Maricopa County attorney’s written demand and threat were in and of themselves government action sufficient to meet either standard.

However, the court’s ability to provide Carlin with a lasting remedy was limited by Mountain Bell’s First Amendment rights. Although Mountain Bell’s initial decision to disconnect Carlin was deemed that of the state, one jawboned decision didn’t permanently render Mountain Bell a state actor:

Thus, the initial termination of Carlin’s service was unconstitutional state action. It does not follow, however, that Mountain Bell may never thereafter decide independently to exclude Carlin’s messages from its 976 network. It only follows that the state may never induce Mountain Bell to do so.76

A permanent designation would have many undesirable consequences. First, it would run roughshod over the First Amendment rights of Mountain Bell, limiting its ability to choose what services it carries and essentially punishing it for having been jawboned. It would also turn jawboning into a roundabout mechanism for imposing must-carry requirements on intermediaries. If platforms were prevented from removing any content that had been the subject of a censorious government demand, efforts to remedy jawboning might birth bad-faith efforts to find government action to reverse decisions of private conscience.

The Carlin Communications court felt that reconnection and condemnation of the government’s threats appropriately reset the scales of Mountain Bell’s private judgment. Its ruling removed the immediate weight of government threats but left Mountain Bell free to promptly re-exclude Carlin under a new policy. However, given the often-subtle ways in which government power can be used to punish, this offers less than a complete solution to the jawboning problem:

Mountain Bell insists that its new policy reflected its independent business judgment. Carlin argues that Mountain Bell was continuing to yield to state threats of prosecution. However, the factual question of Mountain Bell’s true motivations is immaterial. This is true because, inasmuch as the state under the facts before us may not coerce or otherwise induce Mountain Bell to deprive Carlin of its communication channel, Mountain Bell is now free to once again extend its 976 service to Carlin.77

Government may wait to act or might find less direct ways of making good on a threat. This is more of a concern when the government actor’s powers are expansive and diverse, such as those of the executive, or on the local level, where formal and informal power is often mixed. However, since courts must respect the First Amendment rights of platforms and cannot look into the minds of their executives, this might be the best they can do. At the very least, one-off reconnection must be paired with ongoing judicial vigilance and a long institutional memory. Courts cannot prevent intermediaries from falling out of political or public favor, but they can prevent or curtail some manifestations of bias.

Even applying the limited remedy in Carlin Communications — reconnection — to jawboned social media content moderation will prove difficult. As detailed in the examples, content moderation is a constant and usually opaque process. Algorithmic priorities are ever shifting, and the effects of changes to the content moderation process cannot be easily felt by users. Some narrow categories of jawboning, such as specific account removals, as in the @IvanTheTroll example, might be both identifiable and redressable through reconnection. However, reconnection can’t address most social media jawboning, particularly wide-reaching changes to policy and process.

Digital Cases

Backpage v. Dart

If full restoration of platform access is the goal of litigation, even successful lawsuits against internet jawboning offer few solutions. The 7th Circuit’s Backpage v. Dart decision offers the most contemporary rebuke of government jawboning, but its facts differ from those of most social media jawboning examples in a number of important ways. These differences illustrate why jawboning that affects social media moderation is particularly difficult for courts to halt or remedy. While early digital jawboning targeted individual commercial relationships, social media jawboning seeks to alter the output of a process.

Backpage.com hosted classified advertisements, including a personals section that frequently carried advertisements for prostitution. Sheriff Tom Dart of Cook County, Illinois, attempted to shut Backpage down by sending a threatening letter to its payment processors, Visa and Mastercard. Visa and Mastercard responded by refusing to process payments to Backpage. Backpage sued Sheriff Dart, seeking an injunction to stop his pressure campaign. The case was initially dismissed at the district level, where the court did not apply Bantam Books’ holding that mere threats are coercive. The 7th Circuit reversed, holding that Sheriff Dart’s threatening letter violated Backpage’s First Amendment rights. It ordered Dart to “take no actions, formal or informal, to coerce or threaten” firms providing Backpage with financial services and to inform Visa and Mastercard of the decision.78

One of Backpage v. Dart’s most important contributions to jawboning jurisprudence is Judge Posner’s explanation of how ostensible requests can be coercive when made by government officials. Quoting the district court’s opinion at length, Posner illustrates how, in light of Bantam Books’ holding, Sheriff Dart’s requests make implicit demands of Visa and Mastercard:

Dart did not directly threaten the companies with an investigation or prosecution, and he admits that his department had no authority to take any official action with respect to Visa and MasterCard. But by writing in his official capacity, requesting a “cease and desist,” invoking the legal obligations of financial institutions to cooperate with law enforcement, and requiring ongoing contact with the companies, among other things, Dart could reasonably be seen as implying that the companies would face some government sanction — specifically, investigation and prosecution — if they did not comply with his “request.”79

If the implications of a request or notification are serious enough, they may compel action just as readily as commands. This is particularly true of accusations of illegality or potential legal liability. Posner goes on to explain how the scale of payment processing, and the resultant relative unimportance of any individual client, incentivizes banks’ acquiescence to state demands. Massive social media platforms face similar incentives when asked to remove particular accounts or pieces of content:

It might seem that large companies such as Visa and MasterCard would not knuckle under to a sheriff, even the sheriff of a very populous county. That might be true if they derived a very large part of their income from the company that he wanted them to boycott. But they don’t.… Yet the potential cost to the credit card companies of criminal or civil liability and of negative press had the companies ignored Sheriff Dart’s threats may well have been very high, which would explain their knuckling under to the threats with such alacrity.80

It is important that courts appreciate this incentive when weighing the effects of official demands. Any attempts to identify and remedy jawboning must work around intermediaries’ reticence about the subject. Because social media platforms face accusations that they are swayed by jawboning, they are unlikely to admit that government browbeating has altered their speech governance.81

While Judge Posner ordered Sheriff Dart to inform Visa and Mastercard of the decision, they were not parties to the lawsuit, and were not in any way bound by the court’s decision. For reasons ultimately known only to them, Visa and Mastercard declined to restore service to Backpage after the 7th Circuit enjoined Dart’s threats.82 Perhaps they feared bad publicity. In any case, while the Dart decision artfully explains the varied harms of jawboning, it did not restore the relationship between Backpage and its payment processors to the pre-threat status quo. It could not realistically do so — Backpage didn’t sue its payment processors for reinstatement, and like Mountain Bell, Visa and Mastercard have their own First Amendment rights. However, the ineffectiveness of Dart’s remedy further illustrates the difficulty of rectifying the effects of jawboning. The effects of social media jawboning are likely to be even harder to dispel.

The orderly, binary commercial relationships at issue in both the analog Carlin Communications case and the digital Dart case are more legible, and therefore governable, than the constant, opaque process of contemporary content moderation. This only goes to show how much more difficult it is to prevent or redress social media jawboning through the courts. It is much harder to identify the effects of algorithmic deprioritization or changes to the moderation process than it is to identify decisions to stop providing a service. While Carlin Communications and Backpage had strong commercial incentives to seek redress, most users would find it unaffordable to bring suit — particularly when judicial remedies are so limited or temporary. Indeed, while it might be possible to discern if government priorities continue to influence binary decisions by Mountain Bell, Mastercard, or Visa, determining if jawboning continues to sway content moderation would be much more difficult.

Recent lawsuits specifically addressing social media jawboning have thus far been unsuccessful, but the reasons for their dismissal illustrate how such claims might be argued more effectively.

AAPS v. Schiff

In March 2019, Rep. Adam Schiff (D‑CA) sent Amazon and several social media platforms a letter containing leading information requests. Schiff wanted to know what Amazon does “to address misinformation related to vaccines” on its platforms.83 Shortly thereafter, Amazon removed videos from a vaccine-skeptical doctors’ organization called the Association of American Physicians and Surgeons.84 Other platforms added disclaimers to links to the group’s website. While the AAPS was not mentioned in the letter, the organization sued Schiff, alleging that his inquiry and later statements in Congress about amending Section 230 had pushed platforms to suppress its content.

The United States District Court for the District of Columbia held that AAPS lacked standing to sue Schiff. The plaintiffs failed to present an injury, but even if they had, nothing about the platforms’ actions could be clearly linked to Schiff’s inquiries:

Plaintiffs cannot satisfy the causation element of standing because all the alleged harms stem from the actions of parties not before the Court, not from Congressman Schiff. Plaintiffs’ case depends on an analytical leap based on bald speculation rather than allegations of fact. The open letters and public statements made by Congressman Schiff do not mention AAPS, do not advocate for any specific actions, and do not contain any threatening language.85

Further complicating attempts to remedy jawboning through litigation, the court held that even if Schiff’s letters had included the three missing elements — a specific mention of the later-removed content, a specific demand, and a threat — his actions would still have been protected by the U.S. Constitution’s Speech or Debate Clause.

The Speech or Debate Clause absolutely protects “a Member’s conduct at legislative committee hearings,” rendering Schiff’s alleged threats concerning Section 230 nonlitigable. Despite the plaintiffs’ claims that “Congressman Schiff sent the information gathering letters with a ‘non-legislative purpose,’” they failed to explain why the letters should be seen as something other than an information-gathering exercise. As a result, the court found that Schiff’s letter fell within the traditionally protected information-gathering function of Congress.

This aspect of the decision might not be so cut-and-dried. Congressional information gathering, via letters or other means, is protected by the Speech or Debate Clause. Gathering information is a necessary part of legislating. However, in Hutchinson v. Proxmire, the Supreme Court held that letters and press releases not “essential to the deliberations of the Senate” fell outside the clause’s protection.86

The late Wisconsin Senator William Proxmire issued a monthly Golden Fleece award to those he deemed responsible for wasting taxpayer money. In 1976, Proxmire awarded a Golden Fleece to government-funded primate aggression research conducted by Ronald Hutchinson. Hutchinson sued, alleging that Proxmire had libeled him in press releases and newsletters announcing the award. Hutchinson also alleged that Proxmire’s aide had contacted the government agencies with which he worked to dissuade them from offering him further funding. While Proxmire’s award was clearly intended to persuade as well as inform, this allegation adds an often overlooked and explicit jawboning dimension to the case.

The case reached the Supreme Court, which held that “individual Members’ transmittal of information about their activities by press releases and newsletters is not part of the legislative function or the deliberations that make up the legislative process,” and is not protected by the Speech or Debate Clause.87 Citing Doe v. McMillan, which relied on Gravel, the Court also found that the republication of actionable congressional speech was similarly unprotected. “A Member of Congress may not with impunity publish a libel from the speaker’s stand in his home district.”88

The Association of American Physicians and Surgeons’ complaint did not single out Schiff’s press release, and both his letter and press release were relatively nonspecific and made legislation-related requests for information. However, press releases accompanying more demanding or less artfully phrased leading requests for information might receive less protection in light of Hutchinson.

The AAPS opinion concludes by questioning why the social media platforms alleged to have behaved as state actors are not included in the suit, noting that any effective remedy would limit their moderation as well as Schiff’s speech. The court found that any injury suffered by the plaintiffs was unlikely to be redressed by muzzling Schiff, defeating the plaintiffs’ claims against him.

Thus, AAPS v. Schiff set a high, though not an insurmountable, bar for attempts to redress jawboning through litigation. In order to succeed where the AAPS failed, other litigants would likely need to build a case around a specific removal request accompanied by a threat, made without a clear relation to legislating, and name both the jawboning official and the acquiescing platforms as defendants in the suit.

A few of the selected examples meet this threshold. Most notably, Menendez’s letter to Twitter about @IvanTheTroll includes all the necessary elements of jawboning and requests action rather than information. Many other demands would meet these criteria if they had been made via letters, rather than in congressional hearings. Unfortunately, complaints about process jawboning, which rarely targets particular pieces of content, cannot meet the threshold of specificity set by AAPS v. Schiff.

What Can Be Done

Remedial Litigation and Transparency

Given the lack of successful lawsuits against social media jawboning, perhaps even a few limited rulings would do a lot to alter platforms’ incentives. They might at least provide platforms with a justification for refusing government demands. “We don’t want this decision to be deemed state action” might not be used often, but it could be a useful arrow to add to platforms’ quivers.

The Carlin Communications court employed this reasoning when it argued that because of its holding that “the state under the facts before us may not coerce or otherwise induce Mountain Bell to deprive Carlin of its communication channel,” the court’s decision “substantially immunizes Mountain Bell from state pressure.”89

While in some cases this may hold true, the plaintiffs in Carlin Communications and Backpage v. Dart could detect the effects of jawboning much more easily than most social media users. Setting aside the problem of jawboned changes to the moderation process, even in most content-specific jawboning cases users have no way of recognizing or proving that their speech was suppressed in response to government requests. Writing in Lawfare, University of Chicago law professor Genevieve Lakier explains this enduring, elementary problem, and suggests greater transparency as a potential solution:

Neither the private intermediary nor the government officials will ordinarily have much motivation to acknowledge when jawboning occurs. People whose speech has been suppressed will therefore not know that they can challenge that suppression on constitutional grounds. Elucidating the rules that apply in jawboning cases thus can do only so much to prevent the private exercise of government power when it comes to online speech, absent much more robust transparency about the reasons why platforms take down or otherwise discriminate against individual speech acts or speakers.90

If platform content moderation were more transparent or explicable, aberrant decisions would be easier to spot. Greater transparency about platforms’ moderation systems and internal policies might also help to address, or at least reveal, the effects of process restriction demands. Unfortunately, content moderation’s opacity is largely a function of its scale, making useful transparency impractical.

Many proposed legislative transparency requirements would mandate and standardize the year-end reports that many platforms already publish.91 Yet year-end transparency reports offer little insight into particular content moderation decisions. Because they do not explain why particular decisions were made, such reports are unlikely to reveal signs of political pressure to remove disfavored speech. While some platforms include the number of government removal requests that they received in their transparency reports, this statistic captures official legal requests, not informal jawboning. Platforms are unlikely to unilaterally disclose how often they have been jawboned for the same reasons that they rarely draw attention to specific informal government demands.

To curb jawboning, or at least make it easier to identify, platforms would have to provide transparency around specific decisions. This would involve offering users more insight into the moderation process, explaining how their content was selected for moderation and which aspects of it violated which platform rules. The scale of contemporary social media platforms makes this all but impossible. Platforms make thousands of moderation decisions every minute, relying on a combination of algorithms, small armies of contractors, and higher-level specialist review. Platforms already struggle to explain how these Rube Goldbergesque processes arrive at particular decisions. Expecting platforms to offer explanations for moderation decisions that are specific enough to identify aberrant government influence is fanciful. Requiring them to do so would likely violate First Amendment protections of editorial privilege. In theory, transparency requirements could limit jawboning, but they cannot be practically implemented in an effective manner.

Any effective judicial remedy for government bullying will have to involve platforms. In order to reverse the effects of successful jawboning, courts must either reverse jawboned platform decisions or enjoin jawboned platform policies. In Carlin Communications, reinstatement alone offered only a limited remedy: Mountain Bell’s post-decision prohibition of adult services still bore a whiff of government pressure. In Backpage v. Dart, ordering an end to Sheriff Dart’s harassment did not prompt Visa and Mastercard to restore service to Backpage. Courts can prohibit and even punish jawboning, but they may not be able to dispel the lasting effects of official threats. As a result, social media platforms risk becoming collateral damage in any effective post hoc remedy.

Nor can litigation provide an effective remedy to congressional jawboning. While some very explicit instances of congressional jawboning via letter may be actionable, most are not. Everything said in the course of congressional debate, no matter how threatening or transactional, is strictly protected by the Speech or Debate Clause. As long as congressional letters or other speech outside of Congress may be plausibly related to gathering information, they receive similar protection.

Overall, because government threats are difficult to concretely address or dispel, it is better to discourage them from being made rather than to expect courts to discern and correct their effects. Litigation offers only a limited after-the-fact solution to government bullying.

Congressional Rules and Norms

While the Speech or Debate Clause precludes holding members of Congress civilly or criminally liable for jawboning, the Constitution gives Congress wide latitude to set rules for its members:

Each House may determine the Rules of its Proceedings, punish its Members for disorderly Behaviour, and, with the Concurrence of two thirds, expel a Member.92

These rules were historically informal, but in the latter half of the 20th century, congressional ethics committees began to promulgate codes of official conduct. These codes govern representatives’ behavior in often exacting detail, stipulating how fundraising is to be conducted, how members should treat their staff, and when the identities of whistleblowers may be disclosed.93 Members who are found to have violated congressional rules can be punished in a number of ways, including “expulsion, censure, reprimand and ‘Letters of Reproval’ and ‘Letters of Admonition.’”94 Even an investigation for potential ethics violations serves as a form of censure, casting a pall over the potentially offending member. Twenty-five representatives have left office after having been investigated for, but not yet convicted of, violating congressional ethics rules.95

Just as Congress prohibits its members from mingling personal and campaign funds, hiring their relatives, or speaking out of turn, it could also prohibit members from jawboning. Congress might make a rule prohibiting members from “demanding the private suppression or deprioritization of protected speech,” or something similar. While representatives may be loath to give up their newfound informal mechanism of speech control, members on both sides of the aisle have complained about the other party’s jawboning. Members might find a universal prohibition more palatable than an escalating arms race.

There is also good reason to see congressional rules as the constitutionally appropriate response to abuses of members’ Speech or Debate Clause privileges. The clause does not merely preclude external punishment of congressional speech — it leaves congressional speech for Congress to police. Congress has wide latitude to set rules for its members precisely because they are so effectively shielded from external oversight. Instead of attempting to chip away at the Speech or Debate Clause, efforts to prevent jawboning should look to its natural complement — congressional rulemaking.

The final check on jawboning by congressmembers or other elected officials is the voter. While Congress sets rules for its members, American citizens elect those members. We the people are ultimately responsible for overseeing our government, including both its official actions and the informal power wielded by our elected representatives. Unlike precedent-bound courts, voters can act on informal signals, knowing jawboning when they see it, and so respond to even subtle political pressure. If voters disdain speech governance via jawboning, they are empowered to punish politicians who attempt to circumvent their rights.

Of course, this requires voters to understand the prevalence and pernicious effects of jawboning. They may view the current jawboned status quo of content moderation as a purely private phenomenon, perhaps one best rectified by more jawboning. Voters may dislike jawboning in the abstract, but support the demands made by their representatives. This concern comports with findings that most Americans support liberal content moderation in the abstract but favor the removal of speech they find personally offensive.96 However, Americans also want more control over what they see on social media. It is likely that the more American voters understand how jawboning is used to shape the rules governing what they can see and say online, the more they will oppose it. Education is an important first step. American voters retain a unique power to restrain jawboning, whether they employ it or not.

Conclusion

Jawboning is not a new threat to free speech, but social media jawboning is both more common and more difficult to combat than the pressure faced by intermediaries in the past. In response to a deluge of cheap speech and a paucity of legal censorship tools, politicians on both sides of the aisle increasingly jawbone in attempts to suppress disfavored speech. This bullying threatens both the editorial rights of platforms and the speech rights of their users, but the opacity and scale of platform content moderation efforts make it difficult to identify jawboned decisions. Legal precedent requires actionable jawboning to be specific and accompanied by a clear threat. This was easier to prove in analog jawboning cases, where demands targeted specific commercial relationships rather than an ongoing platform moderation process. The most visible social media jawboning takes place in Congress, but the Speech or Debate Clause shields congressional bullying from judicial censure. All this makes it difficult to address social media jawboning in court. Some specific removal requests or must-carry demands made outside of Congress or by unelected officials are actionable, but the vast majority of contemporary jawboning is not.

Instead of the courts, solutions to jawboning must come from platforms, voters, and Congress itself. Platforms should be more willing to disclose that they have received removal requests. While social media platforms have good reasons to fear the negative publicity and browbeating that might attend disclosure, the partisan anxieties that fuel tit-for-tat jawboning may rally unexpected support for jawboned platforms. These anxieties — fears that the other party may be more effective in its jawboning — might encourage congressional rulemaking to restrain jawboning. Although congressional jawboning is shielded from criminal or civil sanction, Congress may set and enforce its own rules for members’ conduct.

The final, and potentially most effective, check on jawboning is the American voter. Members of Congress are directly elected to their positions by their constituents. If the American people are truly displeased with their representatives’ emerging penchant for dictating content moderation standards from the House floor, they may replace them with less meddlesome or more speech-respecting representatives.

Citation

Duffield, Will. “Jawboning against Speech: How Government Bullying Shapes the Rules of Social Media,” Policy Analysis no. 934, Cato Institute, Washington, DC, September 12, 2022.