On 3 April 2026, the Saudi Data and Artificial Intelligence Authority (SDAIA) opened a public consultation on its draft Responsible Artificial Intelligence Policy, running until 3 May 2026. The draft Policy applies to all government bodies, private-sector and non-profit organizations, and individuals who develop, use, or publish applications or solutions based on AI technologies in Saudi Arabia. It would require all entities to conduct comprehensive safety testing before deploying a high-risk AI system or any version thereof. Required tests would include red-team exercises, continuous stress testing, bias evaluation, and security evaluation. Safety reports would have to document the types of safety guardrails and tools used, as well as the results of safety evaluations across all input and output types. High-risk AI systems would also be subject to annual audits by third parties accredited by the Authority.