Singapore: Ministry of Health and Health Sciences Authority adopted Artificial Intelligence in Healthcare Guidelines (AIHGle 2.0)

Description


On 10 March 2026, the Ministry of Health (MOH) and the Health Sciences Authority (HSA) launched the revised Artificial Intelligence in Healthcare Guidelines (AIHGle 2.0). The guidelines aim to support patient safety and promote trust in the use of artificial intelligence (AI) in the healthcare sector. AIHGle 2.0 focuses on complex AI systems, including machine learning (ML) and deep learning (DL) models, as well as generative AI (GenAI) applications, which may present risks due to their complexity and the potential for model drift. The guidelines set out recommendations for developers, deployers, and users.

Developers are expected to manage AI solutions through a Total Product Lifecycle (TPLC) approach, covering risk assessment, software validation, and post-market surveillance. They are also expected to provide clear and accurate information to healthcare partners on aspects such as system limitations, datasets, algorithms, and intended operating contexts.

Healthcare organisations acting as deployers are expected to establish internal governance arrangements to oversee the use of AI systems, assess whether solutions are fit for purpose, and maintain a registry of deployed tools. They are also expected to provide guidance on the safe deployment of AI systems and ensure that cybersecurity and data protection requirements are met. The guidelines recommend that deployers adopt a risk-based approach to determine deployment models and governance measures proportionate to potential patient harm. Deployers are further encouraged to test and validate AI systems prior to deployment, provide staff training, monitor performance periodically, and prepare adverse event response plans where appropriate. In addition, they are encouraged to establish communication mechanisms that support patient understanding and decision-making regarding the use of AI in medical management.

Healthcare professionals acting as users remain responsible for maintaining professional standards of care when using AI-supported tools. They are expected to assess the accuracy and suitability of AI inputs and outputs, participate in relevant training and monitoring processes, respond to adverse events where necessary, and communicate transparently with patients about the use of AI in care delivery.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Artificial Intelligence authority governance
Regulated Economic Activity
ML and AI development, other service provider, technological consumer goods
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2026-03-10
adopted