China: National Cybersecurity Standardisation Technical Committee adopted the Artificial Intelligence Safety Governance Framework (Version 2.0)

Description

On 15 September 2025, the National Cybersecurity Standardisation Technical Committee adopted the Artificial Intelligence (AI) Safety Governance Framework 2.0, establishing safety governance for AI applications across all sectors. The framework applies to AI developers, deployers, and operators throughout the Chinese economy, including technology companies and research institutions. It introduces a five-tier security risk classification system, ranging from "minimal" to "extremely serious", based on potential societal impact. The framework requires entities to classify their AI applications according to these risk levels and directs regulatory authorities to develop industry-specific standards. It also specifies obligations, including the implementation of eight trustworthy AI principles such as ensuring human control, respecting national sovereignty, and enhancing system transparency. Companies must establish safety guardrails, conduct risk assessments, maintain audit records, and implement human intervention mechanisms throughout the AI system lifecycle.

Original source

Scope

Policy Area
Data governance
Policy Instrument
Cybersecurity regulation
Regulated Economic Activity
ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2025-09-15
adopted
