YouTube deepfake detection tool could see Google using creators’ faces to train AI bots: report

Experts are sounding the alarm over YouTube’s deepfake detection tool — a new safety feature that could allow Google to train its own AI bots with creators’ faces, according to a report.
The tool gives YouTube users the option to submit a video of their face so the platform can flag uploads that include unauthorized deepfakes of their likeness.
Creators can then request that the AI-generated doppelgangers be taken down.
But the safety policy would also allow Google, which owns YouTube, to train its own AI models using biometric data from creators, CNBC reported Tuesday.
A YouTube spokesperson told CNBC the company has never used creators’ biometric data to train AI models, and that users’ likenesses are only used for identity verification purposes and deepfake detection.
The spokesperson said YouTube is reviewing the language in its policy sign-up to potentially clear up any confusion, though they added that the policy itself will not change.
Google did not immediately respond to The Post’s request for comment.
Tech giants have been racing to roll out the latest AI models without losing online users’ trust.
In an effort to help creators tackle the unauthorized use of their likenesses, YouTube introduced a deepfake detection tool in October.
It is aiming to expand the feature’s rollout to the more than 3 million creators in the YouTube Partner Program by the end of January, Amjad Hanif, YouTube’s head of creator product, told CNBC.
To sign up for the tool, users must upload a government ID and a video of their face, which is used to scan through the hundreds of hours of new footage posted to YouTube every minute.
This biometric upload is subject to Google’s privacy policy, which states public content can be used “to help train Google’s AI models and build products and features like Google Translate, Gemini Apps, and Cloud AI capabilities,” CNBC noted.
Any videos flagged as potential deepfakes are sent to the creator, who can request that the footage be taken down.
Hanif said actual takedowns remain low because many creators are “happy to know that it’s there, but not really feel like it merits taking down.”
“By and far the most common action is to say, ‘I’ve looked at it, but I’m OK with it,’” he told CNBC.
But online safety experts said low takedown numbers are more likely due to a lack of clarity around the new safety feature – not because creators are comfortable with deepfakes.
Third-party companies like Vermillio and Loti said their work helping celebrities protect their likeness rights has ramped up as AI use becomes more widespread.
“As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves,” Vermillio CEO Dan Neely told CNBC.
“Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back.”
Loti CEO Luke Arrigoni said the risks of YouTube’s current policy concerning biometric data “are enormous.”
Both executives said they would not advise any of their clients to sign up for YouTube’s deepfake detection tool.
YouTube creators like Mikhail Varshavski, a board-certified physician who goes by “Doctor Mike,” have seen more and more deepfake videos spreading online with the release of apps like OpenAI’s Sora and Google’s Veo 3.
Varshavski – who has racked up more than 14 million subscribers on YouTube over nearly a decade – regularly debunks health myths and reviews TV medical dramas for inaccuracies in his videos.
He said he first saw a deepfake of himself on TikTok, where he appeared to be promoting a “miracle” supplement.
“It obviously freaked me out, because I’ve spent over a decade investing in garnering the audience’s trust and telling them the truth and helping them make good health-care decisions,” he told CNBC.
“To see someone use my likeness in order to trick someone into buying something they don’t need or that can potentially hurt them, scared everything about me in that situation.”
Creators currently have no way to make money off the unauthorized use of their likeness in deepfake videos, including promotional content.
YouTube earlier this year gave creators the option to allow third-party firms to use their videos to train AI models, though they are not compensated in such instances, either.