On 9 February 2026, the Ministry of Science and Technology opened a consultation on classification criteria for high and medium-risk AI in Decree No. 2026/ND-CP implementing the Law on Artificial Intelligence. The decree establishes the classification criteria and management procedures for artificial intelligence (AI) systems, focusing on identifying high and medium-risk levels to ensure compliance obligations are proportionate to potential hazards. The policy aims to prevent and limit negative impacts on human rights, safety, and security across various sectors and scales of system deployment.

The classification of high-risk AI systems is determined by five groups of criteria. First, impacts on fundamental human rights such as privacy and equality. Second, risks to safety, security, and public interest. Third, the field of use, particularly in essential sectors like healthcare, education, finance, and energy. Fourth, the degree of automation and human control. Fifth, the scale of impact, including the number of affected individuals.

Conversely, medium-risk systems are identified by their interaction with humans without clear disclosure, the creation of content that could cause confusion regarding authenticity, or the use of deep synthesis techniques to simulate real voices or appearances.

The decree introduces an exclusion mechanism under which certain high-risk systems can be downgraded if they perform purely technical or administrative tasks, such as data entry or software testing, provided they do not involve individual profiling or automated prioritisation of plans for specific individuals.

To facilitate compliance, the Ministry of Science and Technology (MoST) will operate an automated risk classification tool on a one-stop web portal. Suppliers are legally responsible for the accuracy of the information they input into the tool.
Additionally, the decree mandates reviews and reclassification when the system's function or purpose changes, or when new risks arise.