On 15 September 2025, the National Cybersecurity Standardisation Technical Committee adopted the Artificial Intelligence (AI) Safety Governance Framework 2.0, establishing safety governance for AI applications across all sectors. The framework applies to AI developers, deployers, and operators throughout the Chinese economy, including technology companies and research institutions. It introduces a five-tier security risk classification system, ranging from "minimal" to "extremely serious", based on potential societal impact. Entities must classify their AI applications according to these risk levels, and regulatory authorities are directed to develop industry-specific standards. The framework specifies obligations, including the implementation of eight trustworthy AI principles, such as ensuring human control, respecting national sovereignty, and enhancing system transparency. Companies must establish safety guardrails, conduct risk assessments, maintain audit records, and implement human intervention mechanisms throughout AI system lifecycles.