Description

Cyberspace Administration of China issued the Methods for Identifying Synthetic Content Generated by Artificial Intelligence

On 14 March 2025, the Cyberspace Administration of China issued the Methods for Identifying Synthetic Content Generated by Artificial Intelligence (AI), introducing standardised practices for marking AI-generated content, including text, images, audio, video, and virtual scenes. AI service providers and businesses using synthetic media are required to implement both explicit and implicit content identifiers. Explicit identifiers include visible text, audio cues, or graphical labels, while implicit identifiers involve metadata embedded through technical measures. To support implementation, the regulation is accompanied by the National Standard GB/T (No. 3, 2025), which sets out technical specifications for content marking, and the Practice Guide, which provides coding rules for metadata implementation. The regulation applies to AI content generation platforms, content distribution services, and enterprises deploying synthetic media. Service providers must integrate these requirements into user agreements to ensure compliance and transparency. The Methods enter into force on 1 September 2025.
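The description above distinguishes two marking mechanisms but does not reproduce the technical schema, which is set out in the accompanying National Standard and Practice Guide. Purely as an illustrative sketch, and not the standard's actual coding rules, the Python snippet below shows what the two kinds of identifier could look like for an image: a visible text label (explicit identifier) and an embedded machine-readable metadata field (implicit identifier). The field name "AIGC", the label text, and the producer value are hypothetical placeholders introduced only for this example.

```python
# Illustrative sketch only: adds an explicit visible label and an implicit
# metadata marker to a PNG. Field names and values are hypothetical
# placeholders; real implementations must follow the coding rules in the
# Practice Guide and the accompanying National Standard.
import json

from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
img = Image.new("RGB", (512, 512), color="grey")

# Explicit identifier: a visible text label rendered onto the content itself.
ImageDraw.Draw(img).text((10, 490), "AI-generated content", fill="white")

# Implicit identifier: machine-readable metadata embedded in the file.
marker = PngInfo()
marker.add_text(
    "AIGC",  # hypothetical key, not the standard's actual field name
    json.dumps(
        {
            "label": "synthetic",
            "producer": "example-ai-service",  # assumed service identifier
            "produced_at": "2025-09-01T00:00:00Z",
        }
    ),
)

img.save("labeled_output.png", pnginfo=marker)
```

Under this reading of the regulation, a content distribution platform would read such an embedded marker back out of the file to verify and relay the identification, while the visible label informs end users directly.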

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Design requirement
Regulated Economic Activity
ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
central government

Complete timeline of this policy change

2025-03-07
in grace period

On 7 March 2025, the Cyberspace Administration adopted the Methods for Identifying Synthetic Conten…

2025-03-14
in grace period

On 14 March 2025, the Cyberspace Administration issued the Methods for Identifying Synthetic Conten…

2025-09-01
in force

On 1 September 2025, the Methods for Identifying Synthetic Content Generated by Artificial Intellig…