This response seeks to provide the Oversight Board with feedback on:

  • The challenges of expanding Meta’s content policies to cover videos edited in simple ways that may be misleading in certain contexts.
  • Problems with expanding fact-checking, especially into contentious social and political issues.
  • Alternative paths forward for ascertaining truth and dealing with misinformation and manipulated media.

This case entails a piece of content in which a video of President Biden is simply edited with a loop. In this sense, the video is not false or fake, but clipped at a specific point and then allowed to play back again. This is a common and basic editing technique that is part of most tools on phones and social media platforms (including Instagram’s Boomerang feature). This basic editing technique is not much different from clipping a small portion of a longer video, nor is it substantively different from taking a snapshot to portray one moment from a video. It is also similar to gifs that can be created and shared by anyone through readily available websites and technology. [From here on out we will refer to loops, clips, gifs, and screenshots as “simple editing.”] Such simply edited clips are ubiquitous and used in countless political communications:

  • Gifs and clips of President Trump making strange and exaggerated faces or hand motions.

  • Jeb Bush’s “please clap” clip.

  • Clips of Mitt Romney talking about “binders full of women”.

  • Other simply edited imagery that shows President Biden awkwardly touching women.

This piece of content makes an additional claim that President Biden is a pedophile, as his hands come close to his granddaughter’s breast. But, as noted above, a mere screenshot or clipping of this video could have accomplished a similar result by showing the President apparently touching near her breast. Thus, the content presents a broad challenge: should Meta remove, or otherwise suppress, any simply edited imagery that could be misleading? For multiple reasons, the answer should be no. Starting with scale, the sheer amount of content that could be considered misleading because of simple editing is nearly endless. Creating a broad policy to cover this sort of content would stifle countless expressions of political and non-political speech. Content produced with the basic photo features of our iPhones or the Boomerang feature built into Instagram itself could be violating if posted to Meta’s platforms. Even if enforcement were not proactive but only upon report, the amount of affected content could still be massive. This is especially true for Instagram, where every post is a picture or video and many, if not most, have been simply edited.

A new policy in this space will also likely silence true criticisms and create opportunities to game the system. In this case, President Biden has faced accusations of being too touchy with women and children. Many of these criticisms are made via short video clips or screenshots of videos. Regardless of what one thinks about President Biden’s mannerisms, there will be other individuals who engage in genuinely malicious behavior on camera; footage of that behavior, when simply edited, could run afoul of a policy in this area, silencing victims and protecting aggressors. Satirical and humorous efforts that rely on simply edited imagery to criticize public figures could similarly be silenced as manipulated misinformation. The whole point of such content is often exaggeration or absurdity, which could be considered misleading or misinformation. Such a policy, especially if enforced upon report by users, would also create opportunities for bad actors to game the reporting system, as in the Board’s “Russian bot” case. Political campaigns could report any simply edited content that portrays their candidate in a compromising or awkward position. This would empower rich and powerful users as well as trolls to silence various types of speech on the platform.

It is critical that criticism of public figures and leaders be allowed and not hampered in the name of countering misinformation (indeed, this is why Meta has different standards under its Bullying and Harassment policy for attacks on public figures versus private individuals). The harm done to expression from such a policy far outweighs any vague and undefined harm of misinformation. Certainly, in this case, the harm approaches zero: no violence or offline action has resulted, and the mistake is easily remedied with footage of the event by anyone who wants to look deeper.

This leads to the fundamental challenge of determining what is “true” and what is “false.” We do not hesitate to venture that it is not possible for Meta’s reviewers or automated systems to enforce a policy in this area at scale. This is why, currently, Meta only removes specific, enumerated harmful claims of misinformation or deepfakes where the editing is so severe and advanced that it is difficult for a user to know the video is false. For all other content, Meta relies on external fact-checkers in a more ad hoc manner rather than on determinations by its own systems or reviewers. Even assuming no biases, asking reviewers or automated systems to accurately adjudicate an endless number of claims based on simply edited content is an endeavor bound to fail. How will reviewers or automated systems search out the truth and know when a simply edited piece of content is “too misleading”? And what about evolving situations where the full facts are simply not known? And then we must add the biases that are extremely likely to emerge when dealing with political and social issues. As a result, beyond the negative impact of the sheer size of this censorial effort, it would also further undermine faith in the fairness of Meta’s adjudications.

This brings us to the question of fact-checking itself and the efforts to label content accordingly. The same problems described above also exist for efforts to label or inform. While labeling is a less severe action, and certainly preferable to removal of content, figuring out when simply edited content is misinformation versus sufficiently truthful will be a herculean task with massive negative impacts on user expression. Using external fact-checkers does not avoid the challenges of internal accuracy and bias; it merely shifts them to external groups. The public failures of misinformation and fact-checking efforts described below have already significantly undermined faith in such efforts, and expanding their remit will only worsen the trust deficit.

When discussing social media fact-checking, it is important to underscore from the start an ideological cleavage that has not been overcome and has only deepened over time. A major challenge is that certain political and philosophical viewpoints, especially those focused on freedom of expression, have little interest in serving as formal fact-checkers that suppress and remove speech. There is thus an inherent selection bias built into any fact-checking system that enforces truth: only those who want to suppress content and want to work with social media companies join such programs. Given this lack of ideological diversity, “expert”-driven fact-checking is likely only to further entrench the view of Meta and media fact-checkers as overly censorial. Whether it be political bias, bias toward certain government or social narratives, or simply a different view of what should be considered misinformation, many examples illustrate the failure of the existing system. Many claims about COVID made by governments and experts, ranging from vaccines to lockdown and masking policies, turned out to be wrong, or at least not as ironclad as asserted. They could not be questioned, even by other experts legitimately inquiring about conclusions or public policies that did not appear to be well supported by the data. Moreover, social media companies relied on governments and fact-checkers to determine these narratives. Add to this the many fact-checking incidents on social or political issues in which the poster’s tone or perspective, concerns about how others might use factual information, a “prediction we can’t fact-check,” the fact-checker’s or a social media company’s assumptions, or simple inaccuracy have been used to suppress opinions or otherwise truthful arguments.

Worse still, appeals of fact-checks can only be made to the organization that made the fact-check, and fact-checkers can act with near impunity. Meta rarely acts to remove a fact-checker and absolves itself of responsibility by saying that any fact-check must be appealed to the original independent fact-checker, an appeals system that 85% of American users think needs to change. Meanwhile, fact-checkers can do whatever they want with an appeal and claim to be a neutral cog in the system Meta set up. With a lack of accountability for those in the fact-checking program, on top of a program that inherently draws organizations supportive of suppression, the ancient problem of quis custodiet ipsos custodes (“Who watches the watchers?”) is just as relevant today as it was 2,000 years ago.

Ultimately, the epistemological challenge of determining truth must involve a system that is open to challenge. As Jonathan Rauch has noted, the two fundamental rules of liberal inquiry are that 1) no one gets the final say and 2) no one has personal authority. Methods based on iterative consensus that allow divergent views to challenge one another are the best way to reach truth and meaningfully correct those in error. Determinations by authoritative fact-checkers or governments that cannot be meaningfully challenged fail both of those principles. As a private company, Meta is not under an obligation to subject itself to such a standard, but it has indicated that it values both free expression and consumer trust. The Oversight Board is effectively tasked with helping the company handle difficult content moderation questions where there may be conflicting views or values at hand, and with suggesting best practices, including frameworks for handling difficult or disputed decisions about content.

Thus, a better approach in line with such a framework could be something along the lines of X’s (formerly Twitter’s) Community Notes, which requires some level of consensus among diverse types of users. By merely adding context instead of suppressing content more aggressively, such a system provides users with “good speech” to counter bad information. This kind of notification would likely have been effective with this Biden content, as the full video of the event would have given people the information they needed to decide for themselves what President Biden was doing. Building consensus among different users to find key facts is also more likely to move people toward being better consumers of information who hold institutions to account, rather than current appeals to authority that are undermining trust in various institutions.

Another example of how community consensus over the factual nature of information may be reached is provided by the Wikipedia editorial process. While open to all and certainly subject to vandalism and misleading claims at times, the wisdom of the crowd has yielded general factual consensus, even on disputed facts. Wikipedia also uses a labeling system that indicates when claims are not backed up by citations or sufficient support, but does not itself decide on the replacement or appropriate citation. Such continual refinement may provide an alternative approach to reaching consensus while still highlighting uncertainty.

One seemingly simple solution may be to tag or flag all “manipulated” media or media that contains AI. However, this may result in a meaningless label if applied broadly, or risk the appearance of selective application if applied narrowly. A broad interpretation of what qualifies could result in any photo with a filter being labeled “manipulated” media. Similarly, even in the political advertising realm, an overly broad interpretation could result in ads being labeled as manipulated if they use accessibility features like autogenerated captioning, AI voices translating an ad’s voiceover into Spanish, or AI to remove background noise. Labeling so much media as potentially manipulated could produce consumer fatigue, such that the label no longer triggers appropriate scrutiny of what users are consuming. If a manipulated, distorted, or AI-generated media label were to be developed, the rules for what will and will not receive this label should be clear to consumers, content creators, and advertisers, and the label should be applied in a way that does not target only particular viewpoints.

In sum, efforts to add a new policy to counter simply edited videos that some may consider misinformation could significantly harm both political and non-political expression, be open to abuse by well-resourced actors and internet trolls, present a problem that will be impossible to handle at scale with any semblance of fairness, and further undermine faith in the fairness of Meta and of its fact-checking enterprise. A better approach would be to utilize methods of providing greater context that rely on iterative consensus and are open to being meaningfully challenged by different facts and viewpoints. For AI-generated content, labels could be applied to notify users of such content, but such labels risk fatiguing users if broadly applied or being viewed as selectively biased if narrowly applied.

The experts make these comments in their individual capacities.