On 10 December 2025, Vietnam's National Assembly adopted the Law on Artificial Intelligence (AI). The Law applies to Vietnamese and foreign entities operating in the country, excluding AI used solely for defence, security, or cipher purposes. It introduces a risk-based classification of AI systems into high-, medium-, and low-risk categories, with the classification criteria to be set by the Government.

Under Article 14, providers of high-risk systems must implement and maintain risk-management measures; ensure the quality of training, testing, and operational data; and maintain technical dossiers and activity logs to support conformity assessment and post-use inspection. Systems must be designed to allow human supervision and intervention. Article 14 also requires deployers of high-risk systems to operate and supervise systems in line with their classification; ensure safety, data security, and the capability for human intervention; comply with relevant standards; and fulfil transparency and accountability obligations toward authorities and users. Users of high-risk systems must follow operating procedures, technical instructions, and safety measures, refrain from unlawful interference, and report incidents promptly.

Under Article 15, providers and deployers of medium-risk systems must ensure transparency and remain accountable to authorities, on request, for the intended use, operation, input data, and risk management of their systems, while users must comply with notification and labelling requirements. Low-risk systems are managed primarily through accountability to authorities in cases of legal violations or impacts on rights, with users responsible for lawful use.