AI forces a security rethink — and opens the door to stronger defenses
MIT Technology Review's EmTech AI session made a clear, constructive point: the arrival of large-scale AI systems changes the rules of cybersecurity, but that challenge also creates huge opportunities. Rather than tacking AI onto existing defenses, speakers argued for embedding security into AI development and deployment—shifting from reactive patching to proactive, AI-aware design.
The discussion highlighted concrete areas where AI-native approaches can make a difference: robust model monitoring that detects anomalous behavior, secure pipelines that protect training and validation data, and tools that harden models against adversarial inputs. Done well, these capabilities turn AI from a new liability into a force multiplier for defensive operations.
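To make the model-monitoring idea concrete, here is a minimal sketch (not from the session itself) of one common approach: compare a model's production confidence scores against a validation-time baseline and flag outliers by z-score. The function name, threshold, and data are illustrative assumptions.

```python
import statistics

def flag_anomalies(baseline, live, threshold=3.0):
    """Flag live confidence scores that deviate sharply from a baseline.

    baseline: confidences observed during validation (assumed well-behaved)
    live: confidences observed in production
    threshold: z-score beyond which a score is flagged (illustrative default)
    """
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [score for score in live if abs(score - mean) / stdev > threshold]

# Baseline clusters near 0.9; a sudden run of near-zero confidences is the
# kind of anomalous model behavior a monitor should surface for review.
baseline = [0.91, 0.89, 0.92, 0.88, 0.90, 0.93, 0.87, 0.90]
print(flag_anomalies(baseline, [0.90, 0.05, 0.89, 0.02]))  # → [0.05, 0.02]
```

Real deployments would track richer signals (input distributions, output entropy, per-class rates) and alert through existing security operations tooling, but the shape of the check is the same: establish a baseline, then watch for drift.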
Collaboration and standards were underscored as essential. Panelists called for shared benchmarks, threat intelligence exchange, and common governance frameworks so organizations can adopt best practices faster. Investment in tooling and workforce training will accelerate adoption and help protect more users and services as AI systems scale.
Framing the problem as an opportunity, the session left attendees with a positive takeaway: by designing security around AI rather than after it, industry and researchers can build more resilient ecosystems that unlock AI's benefits while minimizing harms.