In the early days of the Internet, parents worried about their children’s activity on unfamiliar platforms, citing fears of pedophiles and hackers. Now those same parents have Facebook accounts and get their news from Twitter. Yet one look at a newspaper shows op-eds aplenty castigating the platforms that host an ever-growing share of our social lives. Even after more than a decade of social media use, prominent politicians and pundits who lack critical awareness of the realities and limitations of social media platforms choose to scapegoat platforms, rather than people, for a litany of social problems. Hate speech on Facebook? Well, it’s obviously Facebook’s fault. Fake news? Obviously created by Twitter.


But what if these concerns are misplaced? In a new Cato Policy Analysis, Georgia Tech’s Milton Mueller argues exactly that: the moral panic attending social media is misplaced. Mueller contends that social media “makes human interactions hypertransparent,” rendering hitherto unseen social and commercial interactions visible and public. This newfound transparency “displace[s] the responsibility for societal acts from the perpetrators to the platform that makes them visible.” Individuals do wrong; platforms are condemned. This makes neither political nor moral sense. Social media platforms are blamed for a diverse array of social ills, ranging from hate speech and addiction to mob violence and terrorism. In the wake of the 2016 U.S. presidential election, foreign electoral interference and the spread of misinformation were also laid at their feet. Yet these woes are not new, and the form and tenor of concerns about social media misuse increasingly resemble a classic moral panic. Instead of being credited with revealing misconduct previously ignored, tolerated, or swept under the rug, social media is too often understood as the root cause of these perennial problems.


People behaved immorally long before the advent of social media platforms and will continue to do so long after the current platforms are replaced by something else. Mueller argues that today’s misplaced moral panic is “based on a false premise and a false promise.” The false premise? Social media controls human behavior. The false promise? Imposing new rules on intermediaries will solve what are essentially human problems.


Mueller’s examination of Facebook’s role in hosting genocidal Burmese propaganda drives this point home. When Burmese authorities began using Facebook to encourage violence against the country’s Muslim Rohingya minority, Facebook was slow to act: it had few employees who could read Burmese, making offending messages difficult to identify. However, Facebook has since been blamed for massacres of Rohingya carried out by Myanmar’s military and Buddhist extremists. While Facebook provided a forum for this propaganda, it cannot be said to have caused violence that was prompted and supported by state authorities. We should be glad that Facebook eventually barred Myanmar’s generals from its platform, but we cannot expect Facebook to singlehandedly stop a sovereign state from pursuing a policy of mass murder. Myanmar’s government, not Facebook, is responsible for its messaging and the conduct of its armed forces.


Mueller shows that technologies enhancing transparency get blamed for the problems they reveal. The printing press, radio, television, and even inexpensive comic books were all followed by moral panics and calls for regulation. The ensuing regulation caused unforeseen harm. Mueller finds that “the federal takeover of the airwaves led to a systemic exclusion of diverse voices,” while recent German social media regulation “immediately resulted in suppression of various forms of politically controversial online speech.” Acting on the false premise that social media is responsible for grievances expressed through it, regulation intended to stamp out hate merely addresses its visible symptoms.


Contra these traditional, counterproductive responses, Mueller advocates greater personal responsibility: if we do not like what we see on social media, we should remember that it is the speech of our fellow users, not that of the platforms themselves. He also urges resistance to government attempts to regulate social media, whether by directly regulating speech or by altering intermediary liability regimes to encourage more restrictive private governance. Proceeding from the false premise that a “broken” social media is responsible for the ills it reveals, such regulation will simply suppress speech. Little will be gained, and much may be lost.