
Users Criticise Meta for Mistakenly Labelling Real Photos As Made With AI


In an effort to help users identify photos generated by artificial intelligence, Meta announced earlier this year that, starting in May 2024, it would begin labelling content on Facebook, Instagram, and Threads as “Made with AI.” Now it appears the company is struggling to apply those labels accurately, with users complaining that the “Made with AI” tag is being added to real-life photos and to images created without any generative AI tools.

Pete Souza, who served as an official White House photographer during the Reagan and Obama administrations, has encountered the issue. Souza posted to his Instagram account some images he had taken at a 1984 NBA Finals game. Meta applied the AI label to the post, which Souza found “annoying” because Instagram “forced me to include the ‘Made with AI’ tag even though I unchecked it.” He noted that Adobe had required him to “flatten the image” before saving it as a JPEG file, and speculated that this editing step could have triggered the incorrect label.
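Souza’s speculation points at image metadata: editing software such as Adobe’s tools can embed provenance tags (for example, IPTC “digital source type” values or a C2PA manifest) when a file is edited and re-saved, and platforms can read those tags when deciding whether to label a post. As a rough illustration only, the short Python sketch below scans a JPEG for a few such marker strings; the specific values listed are assumptions for demonstration, not a documented list of what Meta actually checks.

```python
# Rough sketch: naively scan a JPEG's embedded XMP/IPTC metadata for
# AI-provenance markers of the kind platforms reportedly look for.
# The marker strings below are illustrative assumptions, not Meta's
# actual detection criteria.
import sys

AI_MARKERS = [
    b"trainedAlgorithmicMedia",                # IPTC: wholly AI-generated
    b"compositeWithTrainedAlgorithmicMedia",   # IPTC: composited/edited with AI tools
    b"c2pa",                                   # C2PA provenance manifest present
]

def find_ai_markers(path: str) -> list[str]:
    """Return any illustrative AI-provenance markers found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in AI_MARKERS if marker in data]

if __name__ == "__main__":
    hits = find_ai_markers(sys.argv[1])
    if hits:
        print("Possible AI-provenance metadata found:", ", ".join(hits))
    else:
        print("No obvious AI-provenance markers in this file.")
```

If an innocuous edit (such as flattening layers) causes software to write one of these tags, a purely metadata-driven labelling system could mark an unedited real-world photo as AI-made, which is consistent with the behaviour Souza describes.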

A cosplayer also posted on social media about an Instagram photo that had been mistakenly tagged as being “Made with AI.” 

“My cosplay made with the physical labour of my own hands is being labelled as AI content on Instagram, with no way to correct this tag,” the cosplayer wrote on X. 

In response to the criticism, a Meta spokesperson emphasised that the company’s “intent has always been to help people know when they see content made with AI,” adding that Meta is “taking into account recent feedback and continue[s] to evaluate our approach so that our labels reflect the amount of AI used in an image.”

Accurately tagging AI-generated images supports digital literacy and builds trust among social media users who encounter visual content online. Clear labels tell viewers where content comes from, reducing the likelihood that they will be misled by false information. When applied correctly, such labels also serve ethical goals: they hold creators accountable for AI-generated material, respect intellectual property rights, and promote the responsible use of the technology.