In 1996 Congress passed Section 230, which protects online services from being held liable for the vast majority of content posted by users. The law emerged in the wake of court decisions in the early and mid-1990s that sent the growing internet industry conflicting signals about whether internet providers and websites could be held liable for third-party content. Two members of Congress, Rep. Ron Wyden (D‑OR) and Rep. Chris Cox (R‑CA), wrote Section 230 to ensure that websites were free to moderate content as they saw fit without running the risk of being considered the publishers of third-party content.
This bipartisan agreement that Section 230 is due for revision is a small part of a growing insistence that Big Tech is broken and should be fixed. Yet the proposed fixes risk undermining security, privacy, and the freedom of association while growing the size and scope of government.
Many Republicans and Democrats are unhappy about the state of Section 230, but their complaints have different roots. Republican lawmakers allege that Big Tech companies such as Facebook, Google, and Twitter use their content moderation policies as weapons in an ongoing anticonservative crusade. Among the allegations are claims that Google employees tweak the algorithms governing the company's famous search function to hide conservative content. Another allegation, made famous by the conservative commentator Dennis Prager, is that Google-owned YouTube disproportionately limits access to conservative-oriented content. Conservative activists have also raised concerns about so-called shadow banning, in which a social media site quietly hides a user's posts from other users without telling the poster.
Activists on the left have their own complaints about social media content moderation, but more recently their concerns have focused on, among other things, white supremacist radicalization. According to some Democratic lawmakers and activists, Big Tech firms have not done enough to tackle this problem. Mass shootings at home and abroad have heightened worries about young men falling into disturbing online rabbit holes that glorify violence and political extremism.
Although elements of the political left and right have been criticizing Section 230, a bipartisan group of lawmakers still sees its value; after all, it was a Democrat and a Republican who drafted the law together in 1996. Yet a bipartisan coalition of Section 230 critics could prevail over those who view the law as an essential feature of an internet conducive to the free flow of ideas.
In December 2020, President Trump vetoed the annual National Defense Authorization Act because, among other things, it did not include a repeal of Section 230. In the months before the veto, tech CEOs, including Facebook's Mark Zuckerberg and Twitter's Jack Dorsey, were hauled before Congress for contentious hearings shortly before and shortly after the 2020 election, where they were excoriated by both Republicans and Democrats for their content moderation policies. It appears that Section 230 will continue to be a target of the White House.
ARE BIG TECH COMPANIES MONOPOLIES?
At first glance, it might seem as if the arguments of Big Tech critics rest on a solid foundation. After all, you would have to be naïve to deny the cultural dominance of Big Tech firms. Google is not just a noun; it is also a verb. If Facebook were a country, it would be the most populous on Earth. While not as popular as Facebook, Twitter is practically a required feature of any modern, successful career in journalism or politics. Amazon's success has made its founder, Jeff Bezos, the wealthiest person in the world. The growth of these firms has allowed them to diversify. Amazon, which began as an online bookseller, now makes original TV shows and home devices. Google restructured a few years ago to become a subsidiary of the conglomerate Alphabet, which oversees a vast array of research and product development in everything from drones, driverless cars, and artificial intelligence to health care technology.
Many lawmakers and regulators looking at this landscape have raised concerns about free speech. To them, it seems as if powerful market incumbents operate a modern-day public square. In addition, activists, academics, policy professionals, and lawmakers have criticized Big Tech for turning a blind eye to extremism and child abuse. The image of a handful of monopolies determining how and when we can speak, all while facilitating abuse and fostering violent extremism, is a frightening one. Fortunately, it rests on crucial misunderstandings.
Among the most important errors often heard in Big Tech debates is the assumption that Google, Facebook, YouTube, and others are monopolies. But this assumption reveals a confusion about how these companies work. Far from being monopolies, the companies are competitors, both with each other and with smaller upstarts.
Users of the most famous Big Tech services pay nothing to the firms. Your Facebook account does not cost you anything, nor do your Google searches. You can visit YouTube and watch videos for free without even setting up an account. This is possible because these firms generate the majority of their revenue from advertising. Any one individual reveals very little of interest to advertisers, but in the aggregate, the users of a social networking platform such as Facebook or a search tool such as Google reveal information that is very valuable to advertisers. Although Google is the world's most famous search engine and Facebook dominates social media, neither sells search services or social networking. Rather, they sell advertising targeted with the information their free services collect.
Despite being competitors, Big Tech companies are regularly described as “monopolies,” and the Department of Justice announced last year that it would be pursuing an antitrust investigation into Google.
Google has been a particularly common target of bipartisan attacks in recent years; it is one of the few companies in the world that is a household name in discussions of both monopoly and content moderation. Google owns YouTube, the world's most popular video sharing website, and YouTube's content has provided plenty of ammunition to both the right and the left in their respective fights against Big Tech.
PragerU, an online conservative education resource founded by the conservative commentator Dennis Prager, filed an unsuccessful lawsuit against YouTube claiming violation of its First Amendment rights. The U.S. Court of Appeals for the Ninth Circuit made short work of PragerU’s argument, noting that “the Free Speech Clause of the First Amendment prohibits the government — not a private party — from abridging speech.” But not only was PragerU’s case legally weak, it also revealed a misplaced sense of persecution.
YouTube flags some videos as restricted, which hides them from viewers who have turned on Restricted Mode, an optional setting that only 1.5 percent of users select. Restricted Mode is meant to screen out content that YouTube considers mature, which can include depictions or discussions of violence, drug use, sexual situations, and profanity. A filing from a YouTube trust and safety manager, submitted in the first stage of the PragerU lawsuit, revealed that almost 12 percent of PragerU's YouTube videos are restricted. According to the filing, the same is true of 54.5 percent of Daily Show YouTube videos and 28.27 percent of Vox.com's videos. PragerU may not be pleased to have 12 percent of its videos restricted, but that is hardly evidence of anticonservative bias. At the time of writing, almost 2.8 million accounts subscribe to the PragerU YouTube channel. In 2019, the channel received more than 1 billion views.
The political right is not alone in accusing Big Tech of political bias. For example, the World Socialist Web Site wrote a letter to Alphabet and Google executives in 2017, claiming that Google “is manipulating its Internet searches to restrict public awareness of and access to socialist, anti-war and leftwing websites.” Black Lives Matter activists also have accused the video sharing app TikTok and the photo and video social network Instagram of blocking access to content sympathetic to their movement. MoveOn.org, Demand Progress, ClimateTruth.org, Voices for Racial Justice, Daily Kos, and many other left-leaning groups signed a letter to Facebook CEO Mark Zuckerberg expressing their concern that Facebook too often removed content associated with incidents of police brutality, at the behest of police departments.
That elements of both sides of the political aisle perceive political bias in Big Tech is an interesting cultural phenomenon that suggests something about modern political rhetoric, but it doesn’t prove such bias exists.
EXTREMISM AND VIOLENCE
Concerns about extremism are on sturdier empirical ground. In the wake of the shootings at mosques in Christchurch, New Zealand, and a synagogue in Pittsburgh, Pennsylvania, Democratic lawmakers expressed their dismay at the state of social media. Shortly after the Pittsburgh shooting, which resulted in the deaths of 11 people, Sen. Mark Warner (D‑VA) said, “I have serious concerns that the proliferation of extremist content — which has radicalized violent extremists ranging from Islamists to neo-Nazis — occurs in no small part because the largest social media platforms enjoy complete immunity for the content that their sites feature and that their algorithms promote.” Sen. Richard Blumenthal (D‑CT) took to Twitter after a shooter murdered 51 people at two mosques in Christchurch, writing, “Facebook & other platforms should be held accountable for not stopping horror, terror, & hatred — at an immediate Congressional hearing.”
The shootings in Pittsburgh and Christchurch were particularly notable in Big Tech debates because of how the shooters used social media. The Pittsburgh shooter posted on Gab, a social media site linked to right-wing, conspiratorial content: “HIAS [Hebrew Immigrant Aid Society] likes to bring invaders in that kill our people. I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in.” The Christchurch shooter livestreamed the murders to his Facebook page.
While it is undoubtedly the case that the internet provides avenues for people to discover abhorrent views and to build echo chambers, it is not clear that this problem warrants a policy response. Attempts to tackle extremism online risk stifling legitimate speech and entrenching market incumbents. The most difficult hurdle for such attempts to overcome, however, is the First Amendment.
In December 2019, a group of Democratic representatives introduced a bill that would establish a commission tasked with researching how online services tackle extremism and would empower the commission to subpoena private companies’ communications. Such powers raise significant First Amendment concerns.
As abhorrent as Gab’s content and the Christchurch shooting video are, they are legal in the United States. Possessing and sharing images and videos of criminal acts is not, with very few exceptions such as child pornography, a criminal act in and of itself. Freedom of speech is better protected in the United States, where even hate speech is legal, than almost anywhere else. In New Zealand, by contrast, sharing the Christchurch video is a criminal offense.
Attempts to compel a private company to remove such content would not pass constitutional scrutiny. Nor would efforts to compel a private company to disclose communications related to how it deals with legal content. Fortunately, Big Tech firms do take steps to remove extremist content. As the Christchurch shooting video spread across the internet, YouTube jettisoned human moderators and threw AI tools at the problem, willing to embrace false positive takedowns in attempts to purge the video from its platform.
Jigsaw, a Google project, developed technology designed to identify YouTube users at risk of being radicalized by Islamic extremists; the technology has since been used to counter white supremacist radicalization. The fact that private companies' efforts to address extremist content are imperfect should not prompt lawmakers to pursue legislation that could well run afoul of the First Amendment.
Democrats are not alone in suggesting proposals that would have worrying implications for free speech online. Sen. Josh Hawley (R‑MO) is among the most prominent Republican Big Tech critics and has proposed a number of changes to Section 230. One would remove Section 230 protections from Big Tech companies that display “manipulative, behavioral” advertisements or that collect data to make such advertisements. Another would make Big Tech’s Section 230 protections contingent on a certification from the Federal Trade Commission. This certification would be dependent on Big Tech’s not using content moderation policies that are “biased against a political party, political candidate, or political viewpoint.”
Hawley’s proposals are among the best pieces of evidence of an ideological shift in the Republican Party. Republicans once embraced the rhetoric of limited government and free markets. Today, one of the Republican Party’s rising stars openly calls for a federal agency to influence which content a private company chooses to associate with. The freedom of association is an important feature of free speech. Your freedom to write and submit an article to the New York Times is as important a freedom as the New York Times’s freedom to reject it.
Attempts to reform Section 230 not only put free speech at risk; they also endanger privacy and security. Sen. Lindsey Graham (R‑SC) led a bipartisan effort in 2020 to tackle child sexual abuse material, dangling Section 230 protections as a carrot for websites. Under Graham’s proposal, called the EARN IT Act, websites would have to earn Section 230 protections by adhering to a set of best practices developed by a commission that includes the attorney general, the chair of the Federal Trade Commission, the secretary of Homeland Security, and members appointed by the majority and minority of both houses of Congress.
Civil liberties experts were quick to point out that if the best practices included a ban on end-to-end encryption, the privacy and security of millions of people would be at risk. Fortunately, the EARN IT Act is not necessary to encourage firms to tackle child sexual abuse imagery. Federal law already makes websites responsible for child sexual abuse imagery they fail to report to federal authorities, and the largest Big Tech firms already cooperate to identify and remove such content.
LIBERTIES AT RISK
There is much at stake in ongoing debates about free speech online. In its impact on the exchange of ideas and its influence on culture, the development of the internet is perhaps rivaled only by the invention of the movable-type printing press. The internet not only allows us to discover new ideas; it allows us to find likeminded people and to form communities, whether political parties, book clubs, news outlets, or something else entirely. These communities flourish in large part because websites cannot be held liable for the vast majority of content their users post and are free to moderate content as they please.
Accordingly, websites are free to allow visitors to form groups and post content without fear that a user's defamatory post will result in costly lawsuits. In addition, they do not have to fear legal reprisals for removing awful but legal content such as beheading videos and images of animals being crushed to death. In such an environment, it is no surprise that social networking sites have grown into the most popular venues for speech in human history.
Although we have become accustomed to today’s internet, it would be a mistake to take it for granted. Legislative proposals from both the left and the right threaten it. As Congress debates such proposals, we should remember that attempts to reform Section 230 risk infringing on our freedom of association, our freedom of speech, our security, and our privacy.