Meta is expanding its tests of facial recognition as an anti-scam measure to combat fraudulent celebrity ads and other scams, Facebook's owner announced Monday.
Monika Bickert, Meta's vice president of content policy, wrote in a blog post that some of the tests are intended to bolster the company's existing anti-scam measures, such as the automated scans (using machine-learning classifiers) run as part of its ad review, making it harder for scammers to evade detection and trick Facebook and Instagram users into clicking on fake ads.
“Scammers often try to use images of public figures, such as content creators or celebrities, to entice people to interact with ads that lead to fraudulent websites where they are asked to share personal information or send money. This scheme, commonly called ‘celebrity baiting,’ violates our policies and is bad for the people who use our products,” she wrote.
“Of course, celebrities appear in many legitimate advertisements. But because celebrity bait ads are often designed to look real, it's not always easy to spot them.”
The tests appear to use facial recognition as a backup check, verifying ads that Meta's existing systems have flagged as suspicious when they contain the image of a public figure at risk of being used as so-called “celebrity bait.”
“We will attempt to use facial recognition technology to compare the faces in the ad to the public figure's Facebook and Instagram profile pictures,” Bickert wrote. “If we confirm a match and that the ad is a scam, we will block it.”
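Meta hasn't published implementation details, but face matching of this kind is typically done by mapping each face to an embedding vector and comparing vectors with cosine similarity against a threshold. Here's a minimal sketch of that general approach; the function names, the 4-dimensional toy vectors, and the 0.8 threshold are all illustrative assumptions, not Meta's actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_public_figure(ad_embedding, profile_embeddings, threshold=0.8):
    """Return True if the face from an ad matches any of a public figure's
    profile-picture embeddings above the similarity threshold."""
    return any(cosine_similarity(ad_embedding, p) >= threshold
               for p in profile_embeddings)

# Toy 4-dim "embeddings" (real face-recognition models emit much
# higher-dimensional vectors, e.g. 512 dimensions).
profile = [np.array([0.9, 0.1, 0.0, 0.4])]
same_face = np.array([0.88, 0.12, 0.02, 0.41])   # near-identical vector
other_face = np.array([0.0, 1.0, 0.5, 0.0])      # unrelated vector

print(matches_public_figure(same_face, profile))   # True
print(matches_public_figure(other_face, profile))  # False
```

A real pipeline would first detect and crop faces from the ad creative and run them through a trained embedding model; per Bickert's description, the generated facial data would then be deleted immediately after this one-time comparison.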
Meta claims that the feature is not used for any purpose other than combating scam ads. “We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and do not use it for any other purpose,” it said.
The company said that early tests of the approach, with “a small group of celebrities and public figures” (it did not specify who), have shown “promising” results in improving the speed and effectiveness of detecting and combating this type of scam.
Meta also told TechCrunch that it believes facial recognition would be effective at detecting deepfake scam ads, where generative AI has been used to produce images of famous people.
The social media giant has been accused for many years of failing to stop scammers from misappropriating the faces of famous people in an attempt to use its advertising platform to push unsuspecting users toward scams such as dubious cryptocurrency investments. So it's an interesting moment for Meta to promote facial recognition-based anti-fraud measures, at a time when the company is simultaneously trying to obtain as much user data as possible to train its commercial AI models (as part of the broader industry-wide race to develop generative AI tools).
In the coming weeks, Meta said it will begin showing in-app notifications to a larger group of public figures who have been targeted by celebrity-bait ads, letting them know they are being enrolled in the system.
“Public figures enrolled in this protection can opt out in their Accounts Center at any time,” Bickert said.
Meta is also testing the use of facial recognition to detect celebrity imposter accounts (for example, where scammers seek to impersonate public figures on the platform to expand their fraud opportunities), again using AI to compare the profile pictures on a suspicious account with a public figure's Facebook and Instagram profile pictures.
“We hope to try this and other new approaches soon,” Bickert added.
Video selfies plus AI to unlock accounts
Additionally, Meta has announced that it is testing facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook or Instagram accounts after scammers took those accounts over (for example, if a person was tricked into handing over their passwords).
This seems intended to appeal to users by promoting the apparent usefulness of facial recognition for identity verification; Meta implies it will be a faster and easier way to regain account access than uploading an image of a government-issued ID (which is the routine route to unlocking access now).
“Selfie video verification expands options for people to regain account access, only takes a minute to complete, and is the easiest way for people to verify their identity,” Bickert said. “While we know that hackers will continue to try to exploit account recovery tools, it will ultimately be more difficult for hackers to abuse this verification method than traditional document-based identity verification.”
The facial recognition-based video selfie verification method that Meta is testing will require the user to upload a video selfie, which will then be processed with facial recognition technology to compare it against the profile pictures of the account they are trying to access.
Meta claims the method is comparable to the identity verification used to unlock a phone or access other apps, such as Apple's Face ID on the iPhone. “As soon as someone uploads a selfie video, it will be encrypted and stored securely,” Bickert added. “It will never be visible on their profile, to friends, or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison, regardless of whether there is a match or not.”
Conditioning users to upload and store a selfie video to verify their identity could be a way for Meta to expand its offerings in the digital identity space, if enough users choose to upload their biometric data.
No testing in the UK or EU for now
All of these facial recognition tests are being run globally, according to Meta. However, the company noted, rather conspicuously, that no testing is currently taking place in the UK or the European Union, where comprehensive data protection regulations apply. (In the specific case of biometrics used for identity verification, the bloc's data protection framework requires the explicit consent of the data subjects for such a use case.)
With this in mind, Meta's tests appear to fit into a broader public relations strategy it has mounted in Europe in recent months to try to pressure local lawmakers into watering down citizens' privacy protections. This time, the cause it invokes to push for unfettered data processing for AI is not a (self-serving) notion of data diversity or claims of lost economic growth, but the simpler goal of combating fraudsters.
“We are engaging with the UK regulator, policymakers and other experts as testing progresses,” Meta spokesperson Andrew Devoy told TechCrunch. “We will continue to seek expert feedback and make adjustments as features evolve.”
However, while using facial recognition for limited security purposes may be acceptable to some (and indeed Meta could do so under existing data protection rules), using people's data to train commercial AI models is something else entirely.