Meta has been urged to take stronger action against fake videos created with artificial intelligence (AI) on its platforms. The company’s own Oversight Board warned that the spread of AI-generated content is making it harder for users to tell fact from fiction.
The 21-member board criticized Meta for leaving unlabeled an AI-generated video that purported to show damage inflicted on Haifa by Iranian forces, and it called on the company to overhaul its rules for AI-generated content.
“Fake AI videos, especially about global military conflicts, risk a general distrust of all information,” the board said. Meta promised to label the video within seven days.
Oversight Board Background
Meta created the Oversight Board in 2020 as a semi-independent body to review content-moderation decisions on its platforms, including Facebook and Instagram.
The board often disagrees with Meta’s decisions. Despite its recommendations, Meta has continued to relax its content moderation, raising questions about the board’s influence.
Problems with Meta’s Current AI Policies
The board noted that Meta relies mainly on uploaders to self-disclose AI-generated content and on user reports to catch the rest; if neither happens, a video may never be labeled. The board said this approach is "neither robust nor comprehensive enough," especially during conflicts or crises, when engagement is high.
The review was triggered by a video posted last June by a Facebook account in the Philippines that presented itself as a news source. It was one of many fake AI videos about the conflict in Israel that together drew millions of views.
Even after several users reported the video, Meta did not label or remove it. Only after a direct appeal to the Oversight Board did the company take action.
Meta argued that the video did not pose a risk of "imminent physical harm" and therefore did not require labeling. The board disagreed, ruling that it should have received a "high risk AI label."
Board Recommendations
The Oversight Board emphasized that Meta must act faster to label deceptive AI content. Users should be able to distinguish real content from fake, especially during armed conflicts.
Meta responded that it will follow the board's guidance for future content in similar contexts.
