March 12, 2026

Oversight board criticises Meta on AI fakes


Meta’s independent overseers have accused the social media powerhouse of failing to tackle a rising tide of AI-generated fakes, particularly those stirring tensions in global hotspots like the Israel-Iran conflict.

The Oversight Board, a panel of 21 experts established by Meta in 2020 to review the company’s content moderation decisions across Facebook, Instagram, and Threads, spotlighted a glaring lapse involving a video uploaded last June by a Philippines-based page posing as a news outlet.

The AI-crafted footage falsely depicted Iranian missiles inflicting massive damage on Haifa, Israel, and amassed nearly 1 million views without any label, despite user complaints. Only after a direct appeal to the board did Meta respond, initially arguing that the clip posed no “risk of imminent physical harm” and therefore needed neither a label nor removal.

In a ruling issued Tuesday, the board rejected that stance, insisting the video warranted a “high risk AI label” and demanding sweeping changes to Meta’s policies. Relying on self-reporting by posters or reactive flags is inadequate, the experts said, as current tools are “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform.” 


They tied this to longstanding “inefficiencies in Meta’s current approach during armed conflicts,” while urging proactive labelling to help users separate truth from fiction. “Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, so that users can distinguish between what is real and fake,” the board emphasised.

This case fits a pattern: BBC analysis identified waves of similar pro-Israel and pro-Iran AI videos amassing over 100 million views since hostilities flared, with Reuters noting their rapid spread across Meta’s platforms.

Meta pledged to label the Haifa clip within seven days and to apply the guidance to comparable future posts, but ongoing disputes, tracked on the board’s site, highlight the company’s loosening grip on moderation amid EU probes under the Digital Services Act.

As the Atlantic Council warns, such lapses risk shattering public trust in digital news, especially in theatres like Ukraine or the Middle East, despite Meta’s Q4 2025 pledges on AI detection.
