How a new era of tools is helping people fight back
When someone finds their face or body used without consent in AI-generated pornography, the psychological and practical fallout can be devastating — from reputational damage to ongoing harassment. Stories like Jennifer’s have helped spotlight the human cost of manipulated intimate imagery and pushed technologists, platforms, and advocates to act.
Real progress is emerging in several complementary areas. AI-powered detection systems can flag likely deepfakes at scale, reverse-image and facial-matching services help victims find where images are reposted, and fingerprinting or perceptual-hash techniques make it harder for known abusive content to resurface after it has been removed. At the same time, many platforms are streamlining takedowns and repurposing existing copyright and abuse mechanisms to give victims faster recourse.
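To see why perceptual hashing helps content stay down once removed, here is a minimal sketch of an "average hash" in Python. Everything in it is illustrative: the tiny 4×4 grayscale grid stands in for a decoded, downscaled image, and the "re-upload" is simulated by brightening every pixel. Production matching systems used by platforms are far more sophisticated, but the core idea is the same: small edits barely change the fingerprint, so a re-uploaded copy still matches.

```python
def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string:
    each bit records whether that pixel is brighter than the image average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy 4x4 "image" and a slightly brightened re-upload of it.
original = [[ 10,  20, 200, 210],
            [ 15,  25, 205, 215],
            [190, 195,  30,  40],
            [200, 210,  35,  45]]
reupload = [[value + 5 for value in row] for row in original]

# Brightening shifts every pixel and the average equally, so the
# brighter-than-average pattern (and thus the hash) is unchanged.
print(hamming_distance(average_hash(original), average_hash(reupload)))  # → 0
```

Because matching compares fingerprints rather than exact files, a platform can recognize a removed image even after recompression or minor edits, without storing or redistributing the image itself.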
Beyond tech, advocacy groups and specialized legal services are expanding support for survivors, helping them navigate takedown requests and pursue remedies. That combination — faster detection, stronger platform responses, and legal/advocacy support — is already reducing the visibility and spread of nonconsensual deepfakes for many people, even as attackers adapt.
There’s more work to do, but the trend is hopeful: coordinated efforts across research, industry, and civil society are turning what once felt like an intractable problem into one where victims can increasingly reclaim control. Practical steps anyone can take include using image-matching tools, promptly reporting content to platforms, and connecting with organizations that specialize in supporting survivors.
- Use image- and video-matching services to locate and document misuse.
- Report content immediately to platforms and use copyright or abuse-report channels when available.
- Seek support from advocacy groups that help with takedowns and legal options.
- Support continued investment in detection research, watermarking, and platform safeguards.