On 22 January 2026, the Ministry of Science and ICT released artificial intelligence (AI) safety assurance guidelines under Article 32 of the Artificial Intelligence Basic Act and its Enforcement Decree. The Guidelines provide structured technical and procedural guidance on implementing the statutory safety assurance obligations for applicable AI systems. They address how to determine applicability and the responsible entities; how to systematically identify, evaluate, and mitigate reasonably foreseeable risks across the AI lifecycle; and how to establish continuous monitoring mechanisms. The Guidelines also set out requirements for safety incident response, including internal escalation and external reporting, as well as procedures for submitting safety assurance results to the competent authority. Finally, they clarify that the Guidelines serve as a reference instrument supporting compliance with the Act and its subordinate legislation, and that they may be revised to reflect legal, technological, or risk-related developments.