Game streaming platform Twitch has announced new rules governing off-platform conduct, establishing a universal prohibition on a limited set of malicious activities. Twitch has long maintained a “Hateful Conduct and Harassment” policy governing use of the platform; the new policy, however, will cover speech on other websites and behavior in the real world. This is a significant expansion of Twitch’s commitment to moderation, and the platform’s reach risks exceeding its grasp. Twitch COO Sara Clemens describes the policy as an industry first: “No other service that we are aware of has undertaken an off-service policy to date. But we felt like it was a necessary next step in addressing online harms.” The list of prohibited activities is as follows:

  • Deadly violence and violent extremism
  • Terrorist activities or recruiting
  • Explicit and/or credible threats of mass violence (i.e., threats against a group of people, event, or location where people would gather)
  • Leadership or membership in a known hate group
  • Carrying out or deliberately acting as an accomplice to non-consensual sexual activities and/​or sexual assault
  • Sexual exploitation of children, such as child grooming and solicitation/​distribution of underage sexual materials
  • Actions that would directly and explicitly compromise the physical safety of the Twitch community
  • Explicit and/​or credible threats against Twitch, including Twitch staff

Some of the named behaviors may be easier to prove or adjudicate than others, and several rely on contested definitions. Does lawful self-defense count as deadly violence? Who decides which groups are hate groups? Platforms have long restricted access on the basis of some off-platform behavior; no one wants terrorists on their platform. However, moderation in response to off-platform activity has traditionally been limited, often informal, and usually reserved for public wrongdoing.

Concertedly policing off-platform behavior is unnecessary and unmanageable. Most problematic off-platform behavior spills over into on-platform behavior, where it is covered by traditional content policies. To the extent that nonpublic problematic behavior unrelated to platform use presents some real threat, it is probably a concern best investigated by law enforcement. Platforms already struggle to supervise behavior within their services, where they enjoy a panoptic view of user activity; even perfect record keeping fails to provide reliable insight into intent and context. Off-platform, moderators’ ability to investigate complaints and accusations is far more limited and costly.

In seeming recognition of this problem, Twitch will “leverage third-party legal experts” to investigate off-platform behavior. Is an unnamed, Twitch-retained law firm really more legitimate than Twitch’s moderators? It will certainly be expensive, potentially raising the stakes of decisions about which cases to pursue in the first place. Twitch will also “consider law enforcement action” as input when making decisions. This may screen out spurious claims, but it outsources Twitch’s judgment to law enforcement bodies of varying quality around the world. There is no ideal set of tradeoffs here. While Twitch includes a number of exceptions, covering “cases where these behaviors have occurred in the distant past” and “users have gone through a trusted rehabilitation process, such as legally mandated time served in a correctional facility,” particular uses of these exemptions risk being seen as cover for favoritism, content moderation’s celebrity rehab.

Any increase in investigative ability will increase the number of off-platform activities Twitch, or its lawyers, will be expected to judge. Clemens anticipates expansion, saying that “the list of off-service behaviors that could result in a Twitch ban is likely to expand as the company learns more,” but this will only swell an already excessive commitment.

Expanding moderation off-platform increases the universe of incidents Twitch might be called upon to judge, and the number of difficult or politically contentious questions with it. The U.S. military has used Twitch for public relations and recruiting; the platform might be asked to decide whether a DoD tweet of a B-2 bomber photo captioned “The last thing #Millennials will see if they attempt the #area51raid today” constitutes an “explicit and/or credible threat of mass violence.” What should it make of another B-2 bomber photo, captioned “#TimesSquare tradition rings in the #NewYear by dropping the big ball…if ever needed, we are #ready to drop something much, much bigger”? Platforms develop rules in response to novel conflicts and controversies. By committing to govern its users’ behavior everywhere, Twitch faces a larger universe of controversies which might force it to develop new rules.

From a competitive standpoint, such policies can increase the costs of exit to platforms with fewer rules, rendering the use of more liberal outlets an all-or-nothing proposition. Twitch has long prohibited the incitement of violence, even in jest. When, distraught during a 2018 Chapo Trap House midterm election Twitch stream, host Matt Christman drunkenly exhorted viewers to “kill yourself and everyone around you,” he was quickly removed from the stream to avoid a channel ban. Although Chapo’s audience was likely to understand the comment as unserious and perhaps even amusing, it ran afoul of Twitch’s rules; the hosts knew to keep wild ranting to other forums. Under Twitch’s new policy, however, such speech anywhere, even to a self-selected, paying audience, could be grounds for a Twitch ban.

While these policies are justified by concerns about the safety of Twitch’s users, and Twitch cannot impose punishments beyond account suspensions, they nevertheless create a capacity, on the margin, to restrain, rather than merely retroactively punish, off-platform behavior. The threat of a Twitch ban won’t dissuade a terrorist, but it may dissuade someone from joining a hate group, or at least from publicly announcing their affiliation. This represents a departure from inward-facing conceptions of platform governance, and looks more like law. Twitch cannot impose costs beyond the suspension of Twitch accounts, but it has claimed universal jurisdiction.

The scale of contemporary social media platforms gives their judgments an inherently fickle quality. Even to the extent that their rules create a soft, non-legal check on aberrant behavior, the unreliability of their justice makes them poor intermediary institutions.

Moderation actions that span conglomerated platforms, such as Facebook’s linking of Facebook account actions to Oculus account actions, stoke antitrust complaints. Cross-platform compacts raise questions about the diversity of the ecosystem. Looking off-platform for more dragons to slay will create more problems than it solves.

While the old informal processes of policing particularly egregious off-platform behavior had transparency problems, these will not necessarily be solved by the creation of a formal policy, especially one administered from a black box. One-off decisions could simply have been better explained (almost a year after the fact, Twitch has yet to offer an explanation for the sudden suspension of popular streamer Dr. Disrespect, fueling rumors of everything from contract disputes to coded David Icke references). Indeed, while journalist Casey Newton likens the policy to Twitch hiring “a detectives’ bureau to run down the worst behaviors of its creators,” Twitch could have ordered investigations of allegations of abuse by prominent streamers without creating a formal policy. What seems to matter most here is that Twitch has formally committed to governing the off-platform behavior of its users, and has created a process, however opaque, for doing so. Time will tell if Twitch can meet the new expectations it has set for itself.