Research · Tuesday, May 5, 2026 · 2 min read

Google, Microsoft and xAI Agree to US Government Pre-Deployment AI Reviews

Source: The Verge AI

TL;DR

Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to let the Commerce Department's CAISI review their frontier AI models before public release. The move expands an existing safety review program (which has reviewed OpenAI and Anthropic) and strengthens pre-deployment safety checks, transparency, and targeted research into model capabilities.

Key Takeaways

  • Major AI firms will allow the Commerce Department's Center for AI Standards and Innovation (CAISI) to perform pre-deployment reviews of new frontier models.
  • CAISI has already completed about 40 reviews and previously worked with OpenAI and Anthropic, showing a growing safety pipeline.
  • Reviews include pre-deployment evaluations and targeted research to better assess frontier AI capabilities and potential national-security risks.
  • Industry-government collaboration boosts transparency, improves safety checks before public release, and helps build public trust.

Major AI firms join government-led safety reviews

Google DeepMind, Microsoft, and Elon Musk's xAI have agreed to let the Commerce Department's Center for AI Standards and Innovation (CAISI) review their new frontier AI models prior to public release. CAISI will perform pre-deployment evaluations and targeted research to better assess capabilities and potential risks, extending a safety pathway the center has been building with other industry players.

CAISI began evaluating models from OpenAI and Anthropic in 2024 and has completed roughly 40 reviews so far. Adding these three large developers broadens the program's reach and creates a more consistent, shared approach to safety testing across the companies building the most advanced models.

Benefits of the collaboration include:

  • Stronger pre-release safety checks to reduce harms at deployment;
  • Targeted research that improves understanding of frontier capabilities and weaknesses;
  • Increased transparency and coordination between industry and government, which can boost public trust and inform sensible policy.

By scaling CAISI's review work to include more major developers, the industry takes a practical step toward safer rollouts of powerful models. The agreement signals a constructive model of cooperation where government testing and company innovation work together to deliver safer AI to the public.
