On 17 April 2026, China's National Cybersecurity Standardisation Committee (TC260) opened a public consultation on its draft Ethical Security Guidelines for Artificial Intelligence Applications 1.0, with comments accepted until 26 April 2026. The draft establishes ethical and safety principles for the development, deployment, and use of artificial intelligence systems.

TC260 identifies five primary risk categories arising from AI applications: weakening of human control, disruption of social order, social disengagement, discrimination and bias, and infringement of individual rights. To address these risks, the guidelines set out six core principles: people-centred design, safety and controllability, fairness and justice, transparency, collaborative governance, and inclusive benefit-sharing.

For developers, the guidelines specify requirements including the maintenance of audit logs for design decisions, the establishment of incident recall mechanisms, and the implementation of privacy and fairness as default settings; AI applications must not be developed with the primary objective of replacing human employment. Service providers are required to conduct ethics impact assessments, establish emergency intervention mechanisms, and provide users with clear means to refuse or cease AI use. Users are advised to maintain independent judgment, avoid over-reliance on AI systems, and refrain from using AI for deception, forgery, or harassment.