Author’s Note: This post originally concerned a draft executive order. What follows is a discussion of the final order; the original analysis can be found below it.
Yesterday, I wrote about a draft of the President’s executive order, which he went on to sign in the afternoon. The White House released a final version of the order last night. It differs significantly from the draft in wording, though not in effect.
In some instances, the language has been watered down. Crucially, however, the final order retains the same unsupported contention: that the protections offered by Section 230(c)(1) are contingent upon platforms moderating in accordance with some stricter understanding of (c)(2)(A).
It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.
It is the policy of the United States that the scope of that immunity should be clarified: the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.
In some ways, these claims are more limited than those in the draft. However, the “distortion” and “extension” of Section 230 decried in the final order is, in fact, the longstanding, textually supported reading of the law. As I outlined yesterday, the (c)(1) and (c)(2)(A) protections are separate. It is not an extension of the law to apply them separately, and any “clarification” to the contrary would amount to an amendment.
Confusingly, the final order contains a paragraph that seems to assert the connection between the first and second subsections even more strongly; however, the second time it refers to (c)(2)(A), it does so in a context in which only (c)(1) would make sense:
When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.
The liability faced by traditional publishers, which prescreen material rather than moderating ex post, is foreclosed by (c)(1), not (c)(2)(A). If, as seems likely, this line was meant to reference (c)(1), the order misinterprets Section 230 even more stridently. The protections offered by the first and second subsections are entirely separate, which makes the President’s directive to the National Telecommunications and Information Administration (NTIA), instructing it to petition the FCC to examine connections between (c)(1) and (c)(2)(A), facially absurd.
… requesting that the FCC expeditiously propose regulations to clarify: (i) the interaction between subparagraphs (c)(1) and (c)(2) of section 230, in particular to clarify and determine the circumstances under which a provider of an interactive computer service that restricts access to content in a manner not specifically protected by subparagraph (c)(2)(A) may also not be able to claim protection under subparagraph (c)(1), which merely states that a provider shall not be treated as a publisher
There are no circumstances under which a provider that restricts access in a manner unprotected by (c)(2)(A) loses (c)(1) protections. (c)(1) protections are lost when a platform authors content, making it the platform’s content rather than that of a third party. (c)(1) is not in any way contingent on (c)(2)(A). The order invites the FCC to make a miraculous discovery completely at odds with settled law or return a pointless null result.
Finally, the order directs the FTC to investigate platforms for moderating in a manner inconsistent with their stated terms of service:
The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce … Such unfair or deceptive acts or practice may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.
Platform terms of service and community standards are not binding contracts. They lay out how a platform intends to govern user speech, but they often change in response to new controversies, and automated moderation at scale is frequently imprecise. Treating every divergence between stated policy and actual practice as a deceptive trade practice would sweep in routine moderation errors. In light of Trump’s recent personal spats with social media firms, any subsequent FTC action may appear politically motivated.
In sum, the order makes a number of sweeping, unfounded claims about the breadth and intent of Section 230. The declarations of government policy are concerning: “all executive departments and agencies should ensure that their application of section 230(c) properly reflects the narrow purpose of the section.” However, the administration’s proposed interpretation is so at odds with a plain reading of the statute and controlling precedent that courts are unlikely to uphold decisions based on this official misinterpretation.
The order’s substantive elements require action on the part of the FCC and FTC, and their response will largely determine the order’s scope and effect. The FCC could nonsensically determine that (c)(1) had been contingent on (c)(2)(A) all along, and the FTC could aggressively pursue tech firms for moderation inconsistent with their terms of service, but, given the likelihood of judicial resistance, such a hard-charging response is improbable. Like so much else from the Trump administration, this may turn out to be another order full of sound and fury that ultimately delivers no substantive change. Nevertheless, even if the order proves ineffective, it represents a worrying belief that the President can twist and reinterpret long-settled law to fit his political agenda.
President Trump has escalated his war of words with America’s leading technology firms. After threatening to “close down” social media platforms, he announced that he would issue an executive order concerning Section 230 of the Communications Decency Act, a bedrock intermediary liability protection for internet platforms. However, a draft of the forthcoming executive order misreads Section 230, treating its protections as contingent upon one another. Let’s take a look at the statute and the relevant sections of the proposed executive order to see how its interpretation errs.
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
The quoted section contains two parts, (c)(1) and (c)(2). Subsection (c)(1) prevents providers of an “interactive computer service,” be they Twitter or a blog with a comments section, from being treated as the publishers of their users’ speech. Section 230(c)(2) separately addresses providers’ civil liability for actions taken to moderate or remove content.
The executive order obfuscates this distinction, presenting (c)(1) as contingent on (c)(2). The EO contends that “subsection (c)(2) qualifies that principle when the provider edits content provided by others.” This is simply incorrect. Subsection (c)(2) protects platforms from a different source of liability entirely. While the first subsection stops platforms from being treated as the publishers of user speech, (c)(2) prevents platforms from being sued for filtering or removing content. Its protections are entirely separate from those of (c)(1); dozens of lawsuits have attempted to treat platforms as the publishers of user speech, and none have first asked whether the platforms’ moderation was unbiased or conducted in good faith. Even if a provider’s moderation were found to breach the statute’s “good faith” element, it would merely render the provider liable for its moderation of the content in question; it wouldn’t make the provider a publisher writ large.
The executive order makes its misunderstanding even more explicit as it orders the various organs of the federal government to similarly misinterpret Section 230.
When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct. By making itself an editor of content outside the protections of subparagraph (c)(2)(A), such a provider forfeits any protection from being deemed a “publisher or speaker” under subsection 230(c)(1), which properly applies only to a provider that merely provides a platform for content supplied by others. It is the policy of the United States that all departments and agencies should apply section 230(c) according to the interpretation set out in this section.
The order goes on to direct the National Telecommunications and Information Administration to petition the FCC, technically an independent agency, to promulgate regulations determining what sort of moderation breaches the good faith aspect of (c)(2), and, according to the administration’s erroneous reading of the statute, triggers the forfeiture of (c)(1) protections against being treated as a publisher.
Clearly, none of this is actually in Section 230. Far from expecting websites to “merely provide a platform,” (c)(2)(A) explicitly empowers them to remove anything they find “otherwise objectionable.” Our president seems to have decided that Section 230(c)(1) only “properly applies” to social media platforms that refrain from responding to his outlandish claims. Republicans might want to amend Section 230 so that it applies only to conduit-like services; however, any attempt to do so would face stiff opposition from Democrats, who want platforms to moderate more strictly. Like Obama before him, President Trump may have a pen, but he cannot rewrite statutes at will. As drafted, his order’s reasoning is at odds with congressional intent, a quarter century of judicial interpretation, and any reasonable reading of the statute’s plain language.