Meta is enhancing its use of facial recognition technology to tackle the growing issue of scams that exploit celebrity images in advertisements. Announced on Monday, the initiative aims to strengthen existing measures against fraudulent ads on Facebook and Instagram.
Objectives and Mechanism of the New System
Monika Bickert, Meta’s vice president of Content Policy, detailed in a blog post that these new tests are designed to complement current anti-scam strategies, which include automated scans utilising machine learning classifiers as part of the ad review process. The goal is to make it increasingly difficult for scammers to evade detection and deceive users into clicking on fraudulent ads.
“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products,” she stated.
Bickert also acknowledged that while celebrities appear in many legitimate advertisements, distinguishing between real and fake can be challenging due to the sophisticated design of celeb-bait ads.
The facial recognition technology will serve as an additional layer of verification for ads flagged by Meta’s existing systems. It will compare faces in suspicious ads with the profile pictures of public figures on Facebook and Instagram. If a match is confirmed and the ad is identified as a scam, it will be blocked.
Meta assures that any facial data collected during this process will be promptly deleted after the comparison, regardless of whether a match is found.
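Conceptually, the verification step described above is a face-matching comparison followed by deletion of the facial data. The sketch below is purely illustrative and not Meta's implementation: plain vectors stand in for facial embeddings, the similarity threshold is an assumption, and the function name `review_flagged_ad` is hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def review_flagged_ad(ad_embedding, profile_embeddings, threshold=0.9):
    """Compare a face embedding from a flagged ad against public figures'
    profile-picture embeddings; block the ad if any comparison exceeds
    the (assumed) threshold."""
    try:
        for name, profile in profile_embeddings.items():
            if cosine_similarity(ad_embedding, profile) >= threshold:
                return ("block", name)
        return ("allow", None)
    finally:
        # Mirrors the stated policy: the ad's facial data is discarded
        # after the comparison, whether or not a match was found.
        ad_embedding.clear()
```

The `finally` clause models the deletion guarantee: the facial data is cleared on both the match and no-match paths.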
Deepfake Scams on Social Media
Martin Lewis, a prominent U.K. consumer finance advocate, was targeted by a deepfake scam ad circulating on Facebook last July. The video featured an AI-generated likeness of Lewis promoting a fraudulent investment opportunity supposedly backed by Elon Musk. Lewis expressed anger and concern over the use of his image in such scams and highlighted the need for stronger regulation of deceptive advertisements.
He previously sued Facebook over similar issues and settled in 2019, prompting some changes in how the platform handles scam ads. Despite these efforts, Lewis has criticised both Meta and the U.K. government for inadequate responses to the ongoing problem, arguing that social media remains a "wild west" where scammers operate with little consequence.
Meta stated it is investigating the matter and has removed the original deepfake video along with other similar ads, but did not explain how such content was allowed to be posted initially.
Initial Testing and Future Plans
Early trials involving a limited number of celebrities have yielded promising results, indicating improved speed and accuracy in detecting these scams. Meta plans to notify a broader group of public figures who have been affected by celeb-bait scams, allowing them to opt out of the system if they choose.
In addition to combating scams, Meta is also testing facial recognition technology for account recovery. Users locked out of their accounts due to scams may soon be able to upload video selfies for identity verification. This method aims to provide a quicker alternative to the traditional requirement of submitting government-issued IDs.
“Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity,” Bickert stated. The uploaded video selfies will be processed using facial recognition technology, comparing them against users’ profile pictures.
Meta emphasises that all video selfies will be securely encrypted and stored temporarily, and that any generated facial data will be deleted immediately after verification.
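The recovery flow described above follows a simple lifecycle: the selfie is held temporarily, compared against the user's profile picture, then deleted regardless of the outcome. The sketch below is an illustration of that lifecycle only, under assumed names (`SelfieVerifier`, a 0.9 threshold); it uses plain vectors in place of real facial data and omits the encryption Meta describes.

```python
import math

def _similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class SelfieVerifier:
    """Illustrative account-recovery flow: temporary storage, one
    comparison against the profile picture, then immediate deletion."""

    def __init__(self):
        self._pending = {}  # user_id -> temporarily held selfie embedding

    def upload_selfie(self, user_id, selfie_embedding):
        # In the described system the upload would be encrypted and
        # stored securely; here it is only held in memory.
        self._pending[user_id] = list(selfie_embedding)

    def verify(self, user_id, profile_embedding, threshold=0.9):
        selfie = self._pending[user_id]
        try:
            return _similarity(selfie, profile_embedding) >= threshold
        finally:
            # Facial data is deleted immediately after verification,
            # whether it succeeded or not.
            del self._pending[user_id]
```

As in the ad-review case, the deletion step runs on both outcomes, which is the property Meta emphasises.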
Regulatory Considerations
While these tests are being conducted globally, Meta has pointed out that they are not currently taking place in the U.K. or European Union due to stringent data protection regulations requiring explicit consent for biometric data usage.
“We are engaging with the U.K. regulator, policymakers, and other experts while testing moves forward,” said Andrew Devoy, a spokesperson for Meta. The company aims to gather feedback from these stakeholders as it develops its features further.
Even if some users accept facial recognition for security purposes, concerns remain about its potential use in training commercial AI models. Meta discontinued its previous facial recognition system in 2021 over privacy concerns, but is now reintroducing the technology under stricter conditions to protect users against online fraud.