The bipartisan, process-oriented “Platform Accountability and Consumer Transparency Act” joins a recent parade of Section 230 reform proposals. Sponsored by Sens. Brian Schatz (D-HI) and John Thune (R-SD), the PACT Act proposes a collection of new requirements intended to optimize social media platforms’ governance of user speech. These government-mandated practices for handling both illegal speech and speech that merely violates platform community standards would upset delicate, platform-specific balances between free expression and safety. While more carefully constructed than competing proposals, with provisions actually tailored to the ends of accountability and transparency, the bill threatens to encumber platforms’ moderation efforts while encouraging them to remove more lawful speech.

The PACT Act would establish a process for removing illegal speech, giving platforms 24 hours to take down content that has been deemed illegal by a court. A company that fails to act would lose Section 230’s protections against liability, protections generally thought essential to these companies. Leaving determinations of legality to the courts is important: it preserves democratic accountability and prevents platforms from laundering takedown demands that wouldn’t otherwise pass legal muster. By contrast, under Germany’s NetzDG law, platforms must remove “manifestly unlawful” content within 24 hours or risk steep fines, a set of incentives that has encouraged the removal of legal speech on the margins.

The bill’s proposed removal process would be improved by the addition of a counter-notice system, more specific requirements for identifying the illegal content, and a longer takedown window to allow for either user or platform appeal. Still, it is a broadly reasonable approach to handling speech unprotected by the First Amendment.

The breadth of covered illegal content is somewhat limited, including only speech “determined by a Federal or State court to violate Federal criminal or civil law or State defamation law.” Because defamation is the only category of state law included, the mechanism would exclude, for instance, content that violates New Jersey’s constitutionally dubious prohibition on the publication of printable firearms schematics.

While the legal takedown mechanism requires a court order, the bill’s requirement that platforms investigate all reports of community standards violations is ripe for abuse. Upon receiving notice of “potentially policy-violating content,” platforms would be required to review the reported speech within 14 days. Like law enforcement, content moderators have limited resources to police the endless flow of user speech and must prioritize particularly egregious or time-sensitive policy violations. Platform-provided user reporting mechanisms are already abused in attempts to vindictively direct moderators’ focus. Requiring review, on a deadline, upon receipt of a complaint would make abusive flagging more effective by limiting moderators’ ability to ignore bad-faith reports.

Compulsory review would be weaponized by political adversaries to direct platforms’ limited enforcement capacity toward investigating their rivals. Community standards can often be interpreted broadly; under sustained, hostile scrutiny, even broadly compliant speakers may be found in breach of platform rules. Moderators, not the loudest or most frequent complainants, should determine platform enforcement priorities. While the bill also mandates an appeals process, that process amounts to a simple re-review rather than an escalation, and will at best invite an ongoing tug-of-war over contentious content.

Some of the bill’s components are constructive. Its transparency reporting requirements would bring standardization and specificity to platform enforcement reports, particularly around the use of moderation tools like demonetization and algorithmic deprioritization. This measure would formalize platforms’ hitherto voluntary enforcement reporting, allowing for better cross-platform comparisons and evaluations of disparate-impact claims. Beneficent intentions and effects aside, however, imposed as requirements these reporting standards would likely raise compelled-speech concerns.

However, other aspects of the bill are sheer fantasy in the face of platform scale. The PACT Act would require platforms to maintain “a live company representative to take user complaints through a toll-free telephone number” during regular business hours. If on a given day even a hundredth of a percent of Facebook’s 2.3 billion users decided to make use of such an option, they would generate roughly 230,000 calls. In the early days of Xbox Live, Microsoft maintained a forum to answer users’ moderation complaints. The forum was so inundated with unreasonable and inane questions that the project was eventually abandoned. While Microsoft may have incidentally provided some valuable civic education, other platforms should not be required to replicate its Sisyphean efforts.


Provisions drafted without regard for the demands of scale will fall hardest on small firms trying to grow, hindering competition. Parler, a conservative alternative to Twitter, had 30 employees and 1.5 million users as of June. It is large enough to lose the bill’s small-business exemption (which applies to services with fewer than one million monthly users or visitors), but not large enough to dedicate employees to call-center duty or to review every report of a policy violation within 14 days.

There is a real danger that the bill will be treated as a solution to perceived problems with Section 230 simply because it is less radical or more thoughtfully drafted than competing proposals for reform. The PACT Act may be better than other proposed modifications, but that doesn’t make it, on net, an improvement on the status quo. While the bill would increase the transparency of moderation, its impositions on policy enforcement and appeals processes would create more problems than they solve. In heaping new demands on complex moderation systems without regard for platform constraints, the PACT Act places a thumb on the scale in favor of removal while creating new avenues for abuse of the moderation process.