OpenAI opens GPT-5.5 Bio Bug Bounty to harden model safety
OpenAI's GPT-5.5 Bio Bug Bounty invites security researchers, red-teamers, and domain experts to probe the model for universal jailbreaks that could bypass its safeguards against biological misuse. By focusing on systemic vulnerabilities rather than one-off prompts, the program aims to uncover the most consequential failure modes and close gaps before they can be exploited.
The bounty program offers rewards of up to $25,000 for validated findings, giving experienced contributors a clear incentive to dedicate time and expertise. This transparent, incentive-driven approach accelerates the discovery of real-world weaknesses and channels those discoveries into fixes and mitigations.
OpenAI plans to use the submitted reports to strengthen protections, update alignment measures, and reduce the likelihood that GPT-5.5 could be misused in ways that pose biological risks. By collaborating with external experts, the company is prioritizing proactive, community-driven safety work that benefits everyone.
Researchers and teams interested in participating can follow the program's rules and submission guidelines to contribute responsibly. The bug bounty represents a positive step toward resilient AI deployment: it turns potential vulnerabilities into actionable improvements and signals a commitment to rigorous, community-engaged AI safety.