United States of America: Center for AI Standards and Innovation signed frontier AI national security testing agreements with Google DeepMind, Microsoft and xAI

Description

On 5 May 2026, the Center for AI Standards and Innovation (CAISI) at the National Institute of Standards and Technology (NIST) signed agreements with Google DeepMind, Microsoft, and xAI to conduct pre-deployment evaluations and targeted research on frontier AI capabilities and AI security. The agreements enable government evaluation of AI models before public release, as well as post-deployment assessment. Developers may provide models with reduced or removed safeguards for national security-related capability and risk evaluation. Evaluators from across the US government may participate and provide feedback through the TRAINS Taskforce, an interagency group focused on AI national security concerns. The agreements support testing in classified environments and information-sharing to drive voluntary product improvements. They build on previously announced partnerships, renegotiated to reflect CAISI's directives from the Secretary of Commerce under America's AI Action Plan. To date, CAISI has completed more than 40 evaluations, including on unreleased models.

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
central government

Complete timeline of this policy change

2026-05-05
adopted