On 3 April 2026, the Saudi Data and Artificial Intelligence Authority (SDAIA) opened a public consultation on its draft Responsible Artificial Intelligence Policy, running until 3 May 2026. The draft Policy would apply to all government bodies, private-sector and non-profit organisations, and individuals that develop, use, or publish applications or solutions based on AI technologies in Saudi Arabia.

The draft Policy would designate SDAIA as the authority responsible for overseeing AI systems in Saudi Arabia. SDAIA would establish a general AI risk classification framework comprising four tiers: critical, high, limited, and minimal or negligible. It would publish a register classifying AI systems by risk level, accredit impact assessments, respond to incident reports, and coordinate with relevant national and international bodies.

Under the draft, the release of critical-risk AI systems would be prohibited. High-risk AI systems could not be released until safety reports and technical safety test results had been submitted to and accredited by SDAIA. SDAIA would also operate an AI Regulatory Sandbox and would be authorised to issue suspension orders for critical- or high-risk systems and to take deterrent measures against non-compliant providers.

Mandatory compliance would begin in a first phase covering large companies and entities operating in high-risk sectors.