China: Cyberspace Administration published a batch of typical cases regarding investigations into online platforms over alleged failure to label AI-generated content

Description

Cyberspace Administration published a batch of typical cases regarding investigations into online platforms over alleged failure to label AI-generated content

On 12 February 2026, the Cyberspace Administration (CAC) concluded investigations into several online platforms regarding the dissemination of artificial intelligence (AI)-generated content that lacked mandatory identification labels. The CAC instructed websites and platforms to conduct investigations and rectifications, resulting in the handling of 13'421 accounts and the removal of over 543'000 pieces of information found to be in violation of regulations and service agreements. The investigations targeted content that used AI-generated and synthesised material to mislead the public through false narratives, which the CAC stated damaged the online ecosystem. Specific violations included the publication of fabricated stories, such as animal rescues and bomb disposals, without AI-generated content labels in order to gain traffic. Furthermore, the CAC identified instances of AI face-swapping and voice cloning used to impersonate public figures, such as athletes and entrepreneurs, for unauthorised profit and to spread false information. The enforcement action also addressed the creation of fake fire scenes and the modification of animated characters to display vulgar, horrific, bloody, and violent content accessible to minors. Additionally, the CAC investigated accounts and e-commerce stores on platforms such as Xiaohongshu, Taobao, and Pinduoduo that provided software or tutorials for removing AI labels and watermarks. Following these actions, the CAC announced that it would maintain a strict stance against misleading information lacking AI labels and called on content creators to proactively apply labels in accordance with the relevant regulations to maintain a clean and healthy online environment.

Original source

Scope

Policy Area
Content moderation
Policy Instrument
Content moderation regulation
Regulated Economic Activity
platform intermediary: user-generated content, platform intermediary: e-commerce, ML and AI development, search service provider, software provider: other software, messaging service provider
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2026-02-12
in force
