Policymakers worldwide are increasingly advancing policies related to content moderation. From the left, there are efforts to stop hate speech and misinformation, as seen in New York’s Online Hate Speech Law and the European Union’s Digital Services Act. From the right, there are efforts to force social media companies to host content from certain political speakers or viewpoints, as seen in legislation in Texas and Florida. Despite the intensity of these concerns, some of which may be valid, efforts to regulate content moderation often reflect a misunderstanding of how it actually works.
Policymakers should understand that content policies are rules, protected by the First Amendment, that organizations use to create their preferred spaces. These policies must work when applied to billions of different pieces of content. No matter the principles a platform holds, and no matter the wishes or intentions of policymakers, these companies need policies they can implement effectively and consistently, something government regulation generally undermines. Content moderation also comes in all shapes and sizes, and platforms are showing increasing interest in giving users greater control over their experiences online. Government restrictions and requirements will likely foreclose future innovations that better serve and empower users. Those who value a culture of free expression should instead engage with current and emerging social media platforms to push back against prevailing norms that are critical of expression and to affirm the importance of giving people a voice.