Despite our hyperpartisan politics, President Biden and his predecessor agree on at least one issue: Section 230 of the Communications Decency Act should be overhauled or repealed. Section 230 immunizes internet sites from liability for two activities (with exceptions, mostly related to the sex trade and federal criminal law). First, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” So, Twitter is not liable for Donald Trump’s tweets. Nor is the New York Times liable for online comments from readers. Second, no website is liable for “restrict[ing] access” to content that it “considers to be obscene … excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” So, Twitter is not liable if it refuses to post Trump’s tweets. Nor would an online blog hosted by the Times be liable if it refused to post a reader’s comments.

Of course, private websites also have property rights — obviously a sports blog doesn’t have to post gardening ideas — as well as First Amendment defenses. Section 230, however, goes beyond the First Amendment. For example, were it not for Section 230, a court might find Facebook liable for defamation, which is not constitutionally protected. Ditto if Facebook blocked information from, say, the NAACP, in possible violation of public accommodations laws that forbid racial discrimination. Most important, Section 230 shields websites from the complications and costs of assorted lawsuits.

Does Section 230 go too far? From the left, we hear that websites are guilty of underfiltering — for example, allowing the posting of material that’s sexist, racist, dangerous, misleading, abusive, or worse. Liberals seem to believe that gullible Americans are so hoodwinked by social media that they can’t be trusted to ignore unreliable or offensive information. Our recent election suggests otherwise. Nonetheless, some in Congress threaten more regulation unless websites censor noxious speech. Never mind that the First Amendment bars government from coercing private parties to do what government itself may not do.

From the right, we’re advised that Big Tech is guilty of overfiltering — that is, censoring conservatives. Accordingly, say some critics, we should reinstitute a version of the fairness doctrine, which required broadcasters to present balanced coverage of controversial issues. Most of the doctrine was formally repealed in 1987 and all of it by 2011, but the Supreme Court upheld it in 1969, principally because government had licensed favored broadcasters to use scarce radio frequencies. That scarcity rationale does not apply to the internet.

Today, government doesn’t allocate social media frequencies. Moreover, the giants — Amazon, Apple, Facebook, Google, and Twitter — are intensely competitive. Multiple other social media companies (e.g., Reddit, Discord, LinkedIn, and Snapchat) boast hundreds of millions of users. The availability of alternative channels of communication blunts the market power argument. There’s simply no need to foist neutrality on social media. Ironically, more government regulation would ultimately concentrate market power in giant companies that can best afford the heavy burden of compliance and litigation. That may explain why Facebook seems receptive to federal intervention.

Nor should we saddle websites with the unmanageable task of vetting billions of daily posts. The effect would be fewer sites, less speech, and insipid, politically correct content. Perhaps deep-pocketed Big Tech defendants could dodge the tort lawyers by employing algorithms that offer gradations of moderation to their clientele. Hypersensitive users might prefer coddling; other users might opt for unrestrained discourse. That choice would be up to the website and its customers without government entanglement.

The optimal solution is to leave Section 230 as is. But if Congress insists on fixing what isn’t broken, here’s a possible compromise: condition Section 230 immunity on a good-faith effort not to muzzle constitutionally sheltered speech. Twitter would still decide what tweets to curate or ban. But if Twitter substantively edited or excluded a protected communication, that action might be challenged under the same liability rules applicable to publishers. Fortunately, those rules have been informed by First Amendment jurisprudence that has repudiated compelled speech. Over time, expanded legal precedents would likely fortify interactive computer services against oppressive lawsuits. The result would be imperfect but markedly better than treating all content transmitters as content creators.