Swisher’s impatient demand for fast action seemed to assume that the solutions to social media’s ills were obvious. I tweeted in reply, asking what “fix” she wanted to implement so quickly. There was no answer.
Here is the diagnosis I would offer. What is “broken” about social media is exactly the same thing that makes it useful, attractive, and commercially successful: it is incredibly effective at facilitating discoveries and exchanges of information among interested parties at unprecedented scale. As a direct result, there are more informational interactions and more mutual exchanges between people than ever before. This human activity, in all its glory, gore, and squalor, generates storable, searchable records, and its users leave attributable tracks everywhere. As noted before, this emerging world of social media is marked by hypertransparency.
From the standpoint of free expression and free markets there is nothing inherently broken about this; on the contrary, most of the critics are unhappy precisely because the model is working: it is unleashing all kinds of expression and exchanges, and making tons of money at it to boot. But two distinct sociopolitical pathologies are generated by this. The first is that, by exposing all kinds of deplorable uses and users, it tends to funnel outrage at these manifestations of social deviance toward the platform providers. A man discovers pedophiles commenting on YouTube videos of children and is sputtering with rage at … YouTube.28 The second pathology is the idea that the objectionable behaviors can be engineered out of existence or that society as a whole can be engineered into a state of virtue by encouraging intermediaries to adopt stricter surveillance and regulation. Instead of trying to stop or control the objectionable behavior, we strive to control the communications intermediary that was used by the bad actor. Instead of eliminating the crime, we propose to deputize the intermediary to recognize symbols of the crime and erase them from view. It’s as though we assume that life is a screen, and if we remove unwanted things from our screens by controlling internet intermediaries, then we have solved life’s problems. (And even as we do this, we hypocritically complain about China and its alleged development of an all-embracing social credit system based on online interactions.)
The reaction against social media is thus based on a false premise and a false promise. The false premise is that the creators of tools that enable public interaction at scale are primarily responsible for the existence of the behaviors and messages so revealed. The false promise is that by pushing the platform providers to block content, eliminate accounts, or otherwise attack manifestations of social problems on their platforms, we are solving or reducing those problems. Combining these misapprehensions, we’ve tried to curb “new” problems by hiding them from public view.
The major platforms have contributed to this pathology by taking on ever-more-extensive content-moderation duties. Because of the intense political pressure they are under, the dominant platforms are rapidly accepting the idea that they have overarching social responsibilities to shape user morals and steer public discourse in politically acceptable ways. Inevitably, given the scale of social media interactions, this means increasingly automated or algorithmic forms of regulation, with all their rigidities, stupidities, and errors. But it also means massive investments in labor-intensive manual forms of moderation.29
The policy debate on this topic is complicated by the fact that internet intermediaries cannot really avoid taking on some discretionary content-regulation responsibilities beyond complying with various laws. Their status as multisided markets that match providers and seekers of information requires it.30 Recommendations based on machine learning guide users through the vast, otherwise intractable amount of material available. These filters vastly improve the value of a platform to its users, but they also indirectly shape what people see, read, and hear. Platforms can also, as part of their attempts to attract users and enhance their value to advertisers, discourage or suppress messages and forms of behavior that make them unpleasant or harmful places. This form of content moderation is beyond the reach of the First Amendment because it is carried out by a private actor exercising editorial discretion.
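To make concrete how recommendation and moderation become a single piece of machinery, consider the deliberately simplified Python sketch below. Every name, threshold, and weight in it is hypothetical and illustrative, not a description of any actual platform’s system; the point is only that the same ranking pass that matches users with content is also where demotion and removal happen.

    # A toy sketch (hypothetical, not any platform's actual pipeline) showing how
    # recommendation and content moderation intertwine: the pass that selects
    # "relevant" items is also the pass that drops or demotes "risky" ones.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Post:
        post_id: str
        relevance: float    # predicted interest for this user (0.0 to 1.0)
        policy_risk: float  # predicted likelihood of violating platform rules (0.0 to 1.0)

    def rank_feed(candidates: List[Post],
                  remove_above: float = 0.9,
                  demote_above: float = 0.5) -> List[Post]:
        """Drop high-risk posts, demote borderline ones, rank the rest by relevance."""
        scored = []
        for post in candidates:
            if post.policy_risk >= remove_above:
                continue            # hard moderation: the user never sees it
            score = post.relevance
            if post.policy_risk >= demote_above:
                score *= 0.25       # soft moderation: still available, rarely surfaced
            scored.append((score, post))
        return [post for score, post in sorted(scored, key=lambda pair: pair[0], reverse=True)]

    feed = rank_feed([
        Post("a", relevance=0.9, policy_risk=0.1),
        Post("b", relevance=0.8, policy_risk=0.95),  # removed outright
        Post("c", relevance=0.7, policy_risk=0.6),   # demoted
    ])
    print([p.post_id for p in feed])  # prints ['a', 'c']

Even in a toy like this there is no neutral setting: the thresholds and the demotion factor are editorial judgments, which is why some measure of discretionary moderation is inseparable from operating the platform at all.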
What’s the Fix?
Section 230 of the Communications Decency Act squared this circle by immunizing information service providers who did nothing to restrict or censor the communications of the parties using their platforms (the classical “neutral conduit” or common-carrier concept), while also immunizing information service providers who assumed some editorial responsibilities (e.g., to restrict pornography and other forms of undesirable content). Intermediaries who did nothing were (supposed to be) immunized in ways that promoted freedom of expression and diversity online; intermediaries who were more active in managing user-generated content were immunized to enhance their ability to delete or otherwise monitor “bad” content without being classified as publishers and thus assuming responsibility for the content they did not restrict.31
It is clear that this legal balancing act, which worked so well to make the modern social media platform successful, is breaking down. Section 230 is a victim of its own success. Platforms have become big and successful in part because of their Section 230 freedoms, but as a result they are subject to political and normative pressures that confer upon them de facto responsibility for what their users read, see, and do. The threat of government intervention is either lurking in the background or being realized in certain jurisdictions. Fueled by hypertransparency, political and normative pressures are making the pure, neutral, nondiscriminatory platform a thing of the past.
The most common proposals for fixing social media platforms all seem to ask the platforms to engage in more content moderation and to ferret out unacceptable forms of expression or behavior. The political demand for more-aggressive content moderation comes primarily from a wide variety of groups seeking to suppress specific kinds of content that is objectionable to them. Those who want less control or more toleration suffer from the diffuse costs/concentrated benefit problem familiar to us from the economic analysis of special interest groups: that is, toleration benefits everyone a little and its presence is barely noticeable until it is lost; suppression, on the other hand, offers powerful and immediate satisfaction to a few highly motivated actors.32
At best, reformers propose to rationalize content moderation in ways designed to make its standards clearer, make their application more consistent, and make an appeals process possible.33 Yet this is unlikely to work unless platforms find the backbone to strongly assert their rights to set the criteria, stick to them, and stop constantly adjusting them based on the vagaries of daily political pressures. At worst, advocates of more content moderation are motivated by a belief that greater content control will reflect their own personal values and priorities. But since calls for tougher or more extensive content moderation come from all ideological and cultural directions, this expectation is unrealistic. It will only lead to a distributed form of the heckler’s veto, and a complete absence of predictable, relatively objective standards. It is not uncommon for outrage at social media to lead in contradictory directions. A reporter for The Guardian, for example, is outraged that Facebook has an ad-targeting category for “vaccine controversies” and flogs the company for allowing anti-vaccination advocates to form closed groups that can reinforce those members’ resistance to mainstream medical care.34 However, there is no way for Facebook to intervene without profiling its users as part of a specific political movement deemed to be wrong, and then suppressing their communications and their ability to associate based on that data. So, at the same time Facebook is widely attacked for privacy violations, it is also being asked to leverage its private user data to flag political and social beliefs that are deemed aberrant and to suppress users’ ability to associate, connect with advertisers, or communicate among themselves. In this combination of surveillance and suppression, what could possibly go wrong?
What stance should advocates of both free expression and free markets take with respect to social media?
First, there needs to be a clearer articulation of the tremendous value of platforms based on their ability to match seekers and providers of information. There also needs to be explicit advocacy for greater tolerance of the jarring diversity revealed by these processes. True liberals need to make it clear that social media platforms cannot be expected to bear the main responsibility for sheltering us from ideas, people, messages, and cultures that we consider wrong or that offend us. Most of the responsibility for what we see and what we avoid should lie with us. If we are outraged by seeing things we don’t like in online communities composed of billions of people, we need to stop misdirecting that outrage against the platforms that happen to expose us to them. Likewise, if the exposed behavior is illegal, we need to focus on identifying the perpetrators and holding them accountable. As a corollary of this attitudinal change, we also need to show that the hypertransparency fostered by social media can have great social value. As a simple example of this, research has shown that the much-maligned rise of platforms matching female sex workers with clients is statistically correlated with a decrease in violence against women — precisely because it took sex work off the street and made transactions more visible and controllable.35
Second, free-expression supporters need to actively challenge those who want content moderation to go further. We need to expose the fact that they are using social media as a means of reforming and reshaping society, wielding it like a hammer against norms and values they want eradicated from the world. These viewpoints are leading us down an authoritarian blind alley. They may very well succeed in suppressing and crippling the freedom of digital media, but they will not, and cannot, succeed in improving society. Instead, they will make social media platforms battlegrounds for a perpetual and intensifying conflict over who gets to silence whom. This is already abundantly clear from the cries of discrimination and bias as the platforms ratchet up content moderation: the cries come from both the left and the right in response to moderation that is often experienced as arbitrary.
Finally, we need to mount a renewed and reinvigorated defense of Section 230. The case for Section 230 is simple: no alternative promises to be intrinsically better than what we have now, and most alternatives are likely to be worse. The exaggerations generated by the moral panic have obscured the simple fact that moderating content on a global platform with billions of users is an extraordinarily difficult and demanding task. Users, not platforms, are the source of the messages, videos, and images that people find objectionable, so calls for regulation ignore the fact that regulations don’t govern a single supplier, but must govern millions, and maybe billions, of users. The task of flagging user-generated content, considering it, and deciding what to do about it is difficult and expensive, and it is best left to the platforms.
However, regulation seems to be coming. Facebook CEO Mark Zuckerberg has published a blog post calling for regulating the internet, and the UK government has released a white paper, “Online Harms,” that proposes the imposition of systematic liability for user-generated content on all internet intermediaries (including hosting companies and internet service providers).36
At best, a system of content regulation influenced by government is going to look very much like what is happening now. Government-mandated standards for content moderation would inevitably put most of the responsibility for censorship on the platforms themselves. Even in China, with its army of censors, the operationalization of censorship relies heavily on the platform operators. In the tsunami of content unleashed by social media, prior restraint by the state is not really an option. Germany has taken this platform-centered approach with the 2017 Netzwerkdurchsetzungsgesetz, or Network Enforcement Act (popularly known as NetzDG or the Facebook Act), a law aimed at combating agitation, hate speech, and fake news on social networks.
The NetzDG law immediately resulted in suppression of various forms of politically controversial online speech. Joachim Steinhöfel, a German lawyer concerned by Facebook’s essentially jurisprudential role under NetzDG, created a “wall of shame” containing legal content suppressed by NetzDG.37 Ironically, German right-wing nationalists who suffered takedowns under the new law turned the law to their advantage by using it to suppress critical or demeaning comments about themselves. “Germany’s attempt to regulate speech online has seemingly amplified the voices it was trying to diminish,” claims an article in The Atlantic.38 As a result of one right-wing politician’s petition, Facebook must ensure that individuals in Germany cannot use a VPN to access illegal content. Yet still, a report by an anti-hate-speech group that supports the law argues that it has been ineffective. “There have been no fines imposed on companies and little change in overall takedown rates.”39
Abandoning intermediary immunities would make the platforms even more conservative and more prone to disable accounts or take down content than they are now. In terms of costs and legal risks, it would make sense for them to err on the safe side. When intermediaries are given legal responsibility, conflicts about arbitrariness and false positives don’t go away; they intensify. In authoritarian countries, platforms will merely be indirect implementers of national censorship standards and laws.
On the other hand, U.S. politicians face a unique and interesting dilemma. If they think they can capitalize on social media’s travails with calls for regulation, they must understand that governmental involvement in content regulation would have to conform to the First Amendment. This would mean that all kinds of content that many users don’t want to see, ranging from hate speech to various levels of nudity, could no longer be restricted because they are not strictly illegal. Any government interventions that took down postings or deleted accounts could be litigated based on a First Amendment standard. Ironically, then, a governmental takeover of content regulation responsibilities in the United States would have to be far more liberal than the status quo. Avoidance of this outcome was precisely why Section 230 was passed in the first place.
From a pure free-expression standpoint, a First Amendment approach would be a good thing. But from a free-association and free-market standpoint, it would not. Such a policy would force all social media users to be exposed to things they do not want to be exposed to. It would undermine the economic value of platforms by gutting their ability to manage their matching algorithms, shape their environments, and optimize the tradeoffs of a multisided market. Given the current hue and cry about all the bad things people are seeing and doing on social media, a legally driven, permissive First Amendment standard does not seem like it would make anyone happy.
Advocates of expressive freedom, therefore, need to reassert the importance of Section 230. Platforms, not the state, should be responsible for finding the optimal balance between content moderation, freedom of expression, and the economic value of platforms. The alternative of greater government regulation would absolve the platforms of market responsibility for their decisions. It would eliminate competition among platforms for appropriate moderation standards and practices and would probably lead them to exclude and suppress even more legal speech than they do now.
Conclusion
Content regulation is only the most prominent of the issues faced by social media platforms today; they are also implicated in privacy and competition-policy controversies. But social media content regulation has been the exclusive focus of this analysis. Hypertransparency, and the demand for content control that it creates, are the key drivers of the new media moral panic. The panic is feeding upon itself, creating conditions for policy reactions that overlook or openly challenge values regarding free expression and free enterprise. While there is a lot to dislike about Facebook and other social media platforms, it’s time we realized that a great deal of that negative reaction stems from an information society contemplating manifestations of itself. It is not an exaggeration to say that we are blaming the mirror for what we see in it. Section 230 is still surprisingly relevant to this dilemma. As a policy, Section 230 was not a form of infant-industry protection that we can dispense with now, nor was it a product of a utopian inebriation with the potential of the internet. It was a very clever way of distributing responsibility for content governance in social media. If we stick with this arrangement, learn more tolerance, and take more responsibility for what we see and do on social media, we can respond to the problems while retaining the benefits.
Notes
1 Milton L. Mueller, “Hyper-transparency and Social Control: Social Media as Magnets for Regulation,” Telecommunications Policy 39, no. 9 (2015): 804–10.
2 Erich Goode and Nachman Ben-Yehuda, “Grounding and Defending the Sociology of Moral Panic,” chap. 2 in Moral Panic and the Politics of Anxiety, ed. Sean Patrick Hier (Abingdon: Routledge, 2011).
3 Stanley Cohen, Folk Devils and Moral Panics (Abingdon: Routledge, 2011).
4 Ronald J. Deibert, “The Road to Digital Unfreedom: Three Painful Truths about Social Media,” Journal of Democracy 30, no. 1 (2019): 25–39.
5 Zeynep Tufekci, “YouTube, the Great Radicalizer,” New York Times, March 10, 2018.
6 Tufekci, “YouTube, the Great Radicalizer.”
7 Roger McNamee, “I Mentored Mark Zuckerberg. I Loved Facebook. But I Can’t Stay Silent about What’s Happening,” Time Magazine, January 17, 2019.
8 Jonathan Albright, “Untrue-Tube: Monetizing Misery and Disinformation,” Medium, February 25, 2018.
9 Courtney Seiter, “The Psychology of Social Media: Why We Like, Comment, and Share Online,” Buffer, August 20, 2017.
10 Paul Mozur, “A Genocide Incited on Facebook, With Posts from Myanmar’s Military,” New York Times, October 15, 2018.
11 Ingrid Burrington, “Could Facebook Be Tried for Human-Rights Abuses?,” The Atlantic, December 20, 2017.
12 Burrington, “Could Facebook Be Tried for Human-Rights Abuses?”
13 For a discussion of Michael Flynn’s lobbying campaign for the Turkish government and Paul Manafort’s business in Ukraine and Russia, see Rebecca Kheel, “Turkey and Michael Flynn: Five Things to Know,” The Hill, December 17, 2018; and Franklin Foer, “Paul Manafort, American Hustler,” The Atlantic, March 2018.
14 See, for example, “Minority Views to the Majority-produced ‘Report on Russian Active Measures, March 22, 2018’” of the Democratic representatives from the United States House Permanent Select Committee on Intelligence (USHPSCI), March 26, 2018.
15 Indictment at 11, U.S. v. Viktor Borisovich Netyksho et al., Case 1:18-cr-00032-DLF (D.D.C. filed Feb. 16, 2018).
16 Matt Taibbi, “Can We Be Saved from Facebook?,” Rolling Stone, April 3, 2018.
17 Peter W. Singer and Emerson T. Brooking, LikeWar: The Weaponization of Social Media (New York: Houghton Mifflin Harcourt, 2018).
18 Thomas Rid, “Why Twitter Is the Best Social Media Platform for Disinformation,” Motherboard, November 1, 2017.
19 McNamee, “I Mentored Mark Zuckerberg. I Loved Facebook. But I Can’t Stay Silent about What’s Happening.”
20 Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” Journal of Economic Perspectives 31, no. 2 (2017): 211–36.
21 Sarah McKune, “An Analysis of the International Code of Conduct for Information Security,” CitizenLab, September 28, 2015.
22 Kirsten Drotner, “Dangerous Media? Panic Discourses and Dilemmas of Modernity,” Paedagogica Historica 35, no. 3 (1999): 593–619.
23 Thomas W. Hazlett, “The Rationality of US Regulation of the Broadcast Spectrum,” Journal of Law and Economics 33, no. 1 (1990): 133–75.
24 Robert McChesney, Telecommunications, Mass Media and Democracy: The Battle for Control of U.S. Broadcasting, 1928–1935 (New York: Oxford University Press, 1995).
25 Fredric Wertham, Seduction of the Innocent (New York: Rinehart, 1954); and David Hajdu, The Ten-Cent Plague: The Great Comic-Book Scare and How It Changed America (New York: Picador, 2009), https://us.macmillan.com/books/9780312428235.
26 “Like drug dealers on the corner, [TV broadcasters] control the life of the neighborhood, the home and, increasingly, the lives of children in their custody,” claimed a former FCC commissioner. Newton N. Minow and Craig L. LaMay, Abandoned in the Wasteland (New York: Hill and Wang, 1995), http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/abandonedinthewasteland.htm.
27 Kara Swisher (@karaswisher), “Overall here is my mood and I think a lot of people when it comes to fixing what is broke about social media and tech: Why aren’t you moving faster? Why aren’t you moving faster? Why aren’t you moving faster?” Twitter post, February 12, 2019, 2:03 p.m., https://twitter.com/karaswisher/status/1095443416148787202.
28 Matt Watson, “YouTube Is Facilitating the Sexual Exploitation of Children, and It’s Being Monetized,” YouTube video, 20:47, posted by “MattsWhatItIs,” February 27, 2019, https://www.youtube.com/watch?v=O13G5A5w5P0.
29 Casey Newton, “The Trauma Floor: The Secret Lives of Facebook Moderators in America,” The Verge, February 25, 2019.
30 Geoffrey Parker, Marshall Van Alstyne, and Sangeet Paul Choudary, Platform Revolution (New York: W. W. Norton, 2016).
31 The Court in Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997), said Sec. 230 was passed to “remove the disincentives to self-regulation created by the Stratton Oakmont decision.” In Stratton Oakmont, Inc. v. Prodigy Services Co., (N.Y. Sup. Ct. 1995), a bulletin-board provider was held responsible for defamatory remarks by one of its customers because it made efforts to edit some of the posted content.
32 Robert D. Tollison, “Rent Seeking: A Survey,” Kyklos 35, no. 4 (1982): 575–602.
33 See, for example, the “Santa Clara Principles on Transparency and Accountability in Content Moderation,” May 8, 2018, https://santaclaraprinciples.org/.
34 Julia Carrie Wong, “Revealed: Facebook Enables Ads to Target Users Interested in ‘Vaccine Controversies’,” The Guardian (London), February 15, 2019.
35 See Scott Cunningham, Gregory DeAngelo, and John Tripp, “Craigslist’s Effect on Violence against Women,” 2017, http://scunning.com/craigslist110.pdf. See also Emily Witt, “After the Closure of Backpage, Increasingly Vulnerable Sex Workers Are Demanding Their Rights,” New Yorker, June 8, 2018.
36 Mark Zuckerberg, “Four Ideas to Regulate the Internet,” March 30, 2019; and UK Home Office and Department for Digital, Culture, Media & Sport, Online Harms White Paper, presented by the Rt Hon. Sajid Javid MP and the Rt Hon. Jeremy Wright MP, April 8, 2019.
37 Joachim Nikolaus Steinhöfel, “Blocks & Hate Speech–Insane Censorship & Arbitrariness from FB,” Facebook Block — Wall of Shame, https://facebook-sperre.steinhoefel.de/.
38 Linda Kinstler, “Germany’s Attempt to Fix Facebook Is Backfiring,” The Atlantic, May 18, 2018.
39 William Echikson and Olivia Knodt, “Germany’s NetzDG: A Key Test for Combatting Online Hate,” CEPS Research Report no. 2018/09, November 2018.