High-risk, manipulated content can remain on Facebook, Meta Oversight Board rules
In a Nov. 25 decision, Meta’s Oversight Board ruled that manipulated videos, particularly those featuring “high-risk” political figures, do not necessarily need to be removed from Facebook, but they should be labeled more effectively. The case arose when a user appealed for the removal of a viral video that misrepresented public support for Philippine President Rodrigo Duterte by using misleading footage. Despite the video’s inaccuracies, it slipped through Meta’s automated flagging system and human review, highlighting the challenges the platform faces in managing misinformation.
While the Oversight Board acknowledged that the video should have undergone a more rigorous fact-checking process and warranted a “high-risk” content label, it ultimately supported Meta’s decision to keep the video online. The board noted that the content did not explicitly breach Meta’s political information guidelines, which primarily focus on preventing misleading posts about voting procedures and candidate eligibility. However, the board emphasized the necessity for Meta to take misinformation campaigns more seriously, advocating for improved review processes for viral misleading content and the implementation of a specific “High-Risk” label for such videos. This recommendation comes amid a broader trend where Meta has been moving away from stringent content moderation practices, opting instead for a community-driven approach to fact-checking, which has raised concerns about the effectiveness of combating misinformation on the platform.
The Oversight Board’s stance aligns with its previous calls for social media platforms to enhance their labeling of manipulated or AI-generated content. It highlights the need for a balanced approach that combines automated moderation with adequate human oversight. As misinformation continues to proliferate online, the board’s recommendations urge Meta to prioritize transparency and accountability in its content moderation strategies. The ongoing debate around the responsibilities of social media platforms in curbing misinformation remains critical, especially as manipulated content can significantly influence public perception and political discourse.
https://www.youtube.com/watch?v=Sjf5m2fVtPc
Manipulated videos, including ones related to politicians deemed “high-risk,” don’t have to be culled from Facebook feeds, according to Meta‘s Oversight Board, but they should at least be better labeled.
The Nov. 25 decision was spurred by a user who appealed to remove a viral video that appeared to show global demonstrations in favor of Philippine President Rodrigo Duterte. Although the post used mislabeled, erroneous footage to suggest there was a widespread pro-Duterte movement, Meta did not remove it through its automated flagging process or subsequent human review.
While the Oversight Board agreed that the video should have been escalated higher in the fact-checking process and labeled as particularly “high-risk” content, it still sided with Meta’s choice to keep the video online because it did not specifically violate the company’s political information guidelines, which prohibit misleading posts about voting locations, processes, and candidate eligibility.
The board encouraged the company to take concerted, viral misinformation campaigns seriously, writing that it is “imperative that Meta has robust processes to address viral misleading posts, including prioritizing identical or near-identical content for review, and applying all its relevant policies and related tools.” As in other recent decisions, the board recommended Meta add a specific “High-Risk” label to similar videos, saying this post warranted one “because it contained a digitally altered, photorealistic video with a high risk of deceiving the public during a significant public event.”
The decision aligns with the tech giant’s shift away from stronger content moderation guidelines. The board has previously written in favor of social media companies using AI-powered, automated moderation to better address an onslaught of misinformation, within reason, and promoted more robust labeling of manipulated or AI-generated content as an additional guardrail. “Platforms should apply labels indicating to users when content is significantly altered and could mislead, while also dedicating sufficient resources to human review that supports this work,” the board wrote in a previous blog post.
Meanwhile, Meta has slimmed down its human fact-checking team in favor of a global community notes program.