YouTube gives celebrities new tools to spot and challenge AI deepfakes
YouTube is expanding its AI likeness detection program to include celebrities, building on prior tests with creators and earlier rollouts for politicians and journalists. The system scans the platform for AI-generated videos that resemble enrolled public figures and flags matches directly to those individuals, giving them a straightforward way to discover possible deepfakes and take action.
Flagged videos can be reviewed by the public figure, who may then request a takedown. YouTube evaluates removal requests against its existing privacy policy, so not every request will result in immediate removal — the review balances privacy and free expression. Still, the notification and takedown pathway adds an important layer of oversight for celebrities' likenesses on the platform.
This expansion delivers several practical benefits:
- Enrolled celebrities gain proactive monitoring instead of relying on manual discovery.
- The system helps deter misuse by increasing the likelihood that abusive or misleading impersonations will be spotted and addressed.
- It scales a deployed protection tool rather than remaining an experimental pilot, signaling that platforms can use AI to help police AI misuse.
The feature isn't a cure-all — automated detection can miss borderline cases, and each takedown still requires a policy review — but it represents a tangible step toward giving public figures more control over how AI is used to portray them online. As the tool matures and enrollment expands, it could meaningfully reduce the reach of harmful or deceptive AI-generated impersonations on YouTube.