On 7 July 2025, the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation (AIVF) formally launched the Global Artificial Intelligence (AI) Assurance Sandbox as a continuation of the pilot initiative introduced in February 2025. The Sandbox serves as a global platform for builders or deployers of generative AI (GenAI) applications, not the underlying foundation models, to conduct technical testing in collaboration with specialist testers. The pilot phase paired 17 AI deployers with 16 specialist technical testing vendors worldwide, generating practical insights into both testing methodologies and risk dimensions. The Sandbox now offers participants practical guidance on what and how to test, access to experienced testing partners, and opportunities to contribute to the development of technical testing standards, supporting the growth of a global AI assurance market. Eligible applications must involve large language or multi-modal models, be live or intended for live deployment, and make a novel contribution to the IMDA/AIVF body of knowledge. Testing focuses on issues including hallucination, undesirable content, data disclosure, adversarial prompt vulnerability, and broader use-case risks such as safety, financial impact, trust, regulatory compliance, and human oversight. Each testing cycle lasts up to three months and results in a public case study or report. While the Sandbox does not provide a software testing environment or regulatory approval, it integrates IMDA’s Safety Testing Starter Kit for Large Language Models (LLMs) and aligns with Singapore’s practical, risk-based approach to responsible AI governance.
Original source