As Gillespie notes early in Custodians, “moderation is, in many ways, the commodity that platforms offer.” Social media firms provide speech platforms bundled with rules, or community standards, meant to ensure a pleasant user experience. It can be difficult to get these rules right: “too little curation, and users may leave to avoid the toxic environment that has taken hold; too much moderation, and users may still go, rejecting the platform as either too intrusive or too antiseptic.” Decisions not to moderate are, necessarily, moderation decisions. Platforms are not, and have never been, neutral with respect to content, though this does not imply that they have partisan political biases.
Taking a problem-centric approach, Gillespie works his way through social media governance crises of the past decade, illustrating how controversy attending potentially objectionable content, from pro-anorexia “thinspiration” posts to breastfeeding photos, has driven the promulgation of new rules. He employs these stories to sketch out the web of private regulations governing our online interactions.
After introducing content moderation and its necessity, Custodians comes into its own in chapter four, as Gillespie lays out the sheer scale of the task facing platforms on a daily basis. He quotes Del Harvey of Twitter: “Given the scale that Twitter is at, a one-in-a-million chance happens 500 times a day … say 99.999 percent of tweets pose no risk to anyone … that tiny percentage of tweets remaining works out to 150,000 per month.” With millions of users (2.23 billion in Facebook’s case) posting dozens of times a day, the volume of speech on social media defies any traditional editorial comparison. With volume comes diversity. Few platforms draw their users from any single community, and most are international, allowing users with different norms and standards of offense to interact with one another. Prescreening this deluge of expression is impossible, so apart from some algorithmic filtering of very specific sorts of content, like child pornography, content moderation is a post hoc game of whack-a-mole.
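Harvey’s figures follow from simple arithmetic. A minimal back-of-the-envelope sketch (in Python, assuming the roughly 500 million tweets per day implied by her “one-in-a-million chance happens 500 times a day” line, and a 30-day month) reproduces the numbers in the quote:

```python
# Back-of-the-envelope check of the scale Harvey describes.
# Assumption: ~500 million tweets/day, implied by "a one-in-a-million
# chance happens 500 times a day"; the rest comes from the quote itself.
tweets_per_day = 500_000_000
harmless_share = 0.99999            # "99.999 percent of tweets pose no risk to anyone"

risky_per_day = tweets_per_day * (1 - harmless_share)   # the remaining 0.001 percent
risky_per_month = risky_per_day * 30                     # assuming a 30-day month

print(f"{risky_per_day:,.0f} per day, {risky_per_month:,.0f} per month")
# -> 5,000 per day, 150,000 per month
```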
While the differing architectures and intended uses of particular social media platforms lend themselves to different rules and styles of moderation, platforms’ rulesets have steadily converged because these firms “structurally inhabit the same position — between many of the same competing aims, the same competing users, and between their user bases and lawmakers concerned with their behavior.” Nevertheless, from top-down platforms like Facebook and YouTube to the federated structures of Reddit and Twitch, there is still a great deal of diversity in platform governance. More niche competitors like Full30 or Vimeo distinguish themselves by diverging from YouTube’s policies concerning, respectively, firearms and nudity.
For those concerned with the private rather than public nature of platform governance, convergence can be problematic. While the emergence of some shared best practices may be harmless, American platforms’ near-uniform adoption of European hate-speech prohibitions, especially in the wake of regulatory saber-rattling by the European Commission, implies that something other than consumer-centric, market-driven decisionmaking is at work. Although much ink has been spilled about the private biases of social media firms, the ability of governments to launder censorship demands through ostensibly private content-moderation processes, evading constitutional limits on state authority, is far more concerning.
Custodians narrowly sidesteps technological determinism, instead highlighting often-unappreciated differences between the infrastructures undergirding our physical and digital worlds to explain the use, or overuse, of certain technologies in content moderation. Noting that storefront retailers “can verify a buyer’s age only because the buyer can offer a driver’s license … an institutional mechanism that, for its own reasons, is deeply invested in reliable age verification,” Gillespie explains that without the means to use this infrastructure, and in the face of increasing pressure to serve age-appropriate content, internet platforms have turned instead to algorithmic inference of age. Platforms may not be able to tell legitimate licenses from fakes, but they can deploy detailed behavioral data to glean the age of their users.
Sometimes this “moderation by design” feels like an improvement over its physical antecedents. Finding that your Twitter account has been suspended is unpleasant, but it is probably less unpleasant than being physically hauled off your soapbox. In other cases, the move from physical to digital access controls seems disconcerting. It is one thing for adult magazines to be placed out of reach of children; it is another for them to be literally invisible to children (or to anyone identified as a child by an algorithmic assessment of their browsing habits).
Geo-blocking, preventing users in certain locations from seeing certain pieces of content, raises similar concerns. However, in the face of demands from competing, intractable groups, it is often the easiest, if not the most principled, solution. When Pakistan demanded that Facebook remove an “Everybody Draw Muhammed Day” page, the platform’s options seemed binary: Facebook could either cave to the demand, effectively allowing Pakistan to alter its hate speech standards, or hold firm and risk losing access to Pakistani customers. Instead, Facebook chose to make the page inaccessible only to Pakistani users. This was not a politically costless decision, but Facebook received far less backlash than it might have had it taken the page down globally, or not at all. While technological half-measures have offered platforms some respite from demands made of their moderation efforts, they cannot resolve tensions inherent to intercommunal, value-laden questions. In many ways, the features that have made the platform internet so valuable to its users — the ability to transcend the fetters of identity or place, and speak, cheaply and instantaneously, to a mass audience — make universally acceptable moderation difficult, if not impossible.
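For readers unfamiliar with the mechanism, the logic of geo-blocking is simple even if the politics are not. The sketch below is purely illustrative and not any platform’s actual implementation; the page identifier, country codes, and toy IP-to-country table are hypothetical stand-ins.

```python
# Illustrative-only sketch of geo-blocking: the content stays up globally but is
# withheld from viewers whose requests resolve to a restricted country.
# The page identifier, country codes, and IP-to-country table are hypothetical.

RESTRICTED_PAGES = {
    "everybody-draw-muhammed-day": {"PK"},   # visible everywhere except Pakistan
}

# Stand-in for a real GeoIP lookup; a platform would use a geolocation database.
IP_TO_COUNTRY = {"203.0.113.7": "PK", "198.51.100.2": "US"}

def is_visible(page_id: str, viewer_ip: str) -> bool:
    """Hide the page only when the viewer's country is on its block list."""
    blocked = RESTRICTED_PAGES.get(page_id, set())
    return IP_TO_COUNTRY.get(viewer_ip, "unknown") not in blocked

print(is_visible("everybody-draw-muhammed-day", "203.0.113.7"))   # False (Pakistan)
print(is_visible("everybody-draw-muhammed-day", "198.51.100.2"))  # True (elsewhere)
```

The design choice is the point: the content itself is untouched, and only the mapping between a viewer’s inferred location and a per-item block list determines what that viewer sees.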
As moderation has grown more sophisticated, efforts to influence the moderation process have grown ever more complex, both on and off platform. This is a trend Custodians could have explored in greater detail. While Gillespie touches on manipulation when describing the human labor required to moderate content, the subject is discussed mostly in the context of platforms’ attempts to involve users in the moderation process. However, attempts to game the system have increased in both frequency and sophistication independently of platforms’ efforts to expand user agency.
Instead of simply demanding takedowns in exchange for market access, states have begun to demand global content removals. Meanwhile, 4chan users have attempted to use FOSTA, ostensibly an anti-sex-trafficking bill, to get makers of autonomous sensory meridian response (ASMR) recordings banned from PayPal, alleging that the artists are engaged in sex work. (Listeners of ASMR recordings, which are often just people whispering, get a tingling, sensual sensation.) Advocacy groups and think tanks campaign for community standards changes intended to drive disfavored groups from the platform internet, and states have even placed agents at social media firms to spy on dissidents. Members of Congress regularly invite their favorite content creators to air their complaints about the moderation process on Capitol Hill. It is becoming ever more difficult for social media firms to moderate their walled gardens with the independence that legitimacy requires.
While Gillespie explains in exacting detail the process of moderation and the problems moderators face, he concludes the book without offering much in the way of solutions. He calls for greater transparency from platforms and increased appreciation of the gravity and second-order effects of moderation. These are limited but valuable suggestions. Though Gillespie has published a more solution-centric addendum online that suggests legislative remedies, I cannot help but prefer the original ending. Given his unerring appreciation of moderation’s complexity throughout the book, the endorsement of blunt legislative “fixes” rings hollow. Yet, at this adolescent point in the internet’s history, there is great value in a book that makes the process of content moderation more legible, approachable, and understandable. Here, Custodians of the Internet is an unqualified success. Whether you like the current crop of social media platforms or hate them, no book will better equip you to appraise their actions.
Will Duffield
Cato Institute