EU lawmakers give industry breathing room while outlawing exploitative 'nudify' apps
The European Parliament has taken a twin-track approach to responsible AI: delaying key parts of the landmark AI Act to avoid hurried compliance, while moving decisively to ban apps that generate non-consensual intimate images. The measures, approved by a large majority, push back compliance deadlines for high-risk systems to December 2027 and extend sector-specific timelines (for products such as toys and medical devices) to around August 2028.
That delay is a pragmatic win. Regulators and developers now have more time to build, test, and certify safer systems rather than racing to meet compressed deadlines. Lawmakers framed the postponements as a way to improve implementation, reduce mistakes, and ensure enforcement mechanisms are robust—outcomes that should lead to better protections for people and clearer obligations for companies.
Alongside the timeline changes, the Parliament backed proposals to ban so-called "nudify" apps—tools that synthesize intimate images of people without their consent. This move prioritizes privacy, dignity, and fundamental rights by targeting an invasive and harmful use of generative AI. Technical requirements such as watermarking were also adjusted to fit the new schedule, giving policymakers room to refine standards that genuinely curb misuse.
Overall, the combined steps signal constructive progress: the EU is balancing strong protections for citizens with realistic timetables for industry. The result should be safer AI deployments, clearer rules of the road for developers, and a firmer legal stance against technologies that enable exploitation.