Description

Singapore and Japan release joint testing report on large language model safety

On 11 February 2025, Singapore and Japan announced a Joint Testing Report, conducted under the purview of the AI Safety Institute (AISI) Network. The report evaluates the safety of large language models (LLMs) across a variety of linguistic environments, with the aim of assessing how effective AI safeguards are in non-English languages. Testing covered models in 10 languages across five harm categories, including privacy, crime, and intellectual property. The initiative addresses safety limitations that stem from the predominantly English-centric training of LLMs, thereby contributing to the development of global evaluation standards for multilingual AI systems.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
bi- or plurilateral agreement
Government Branch
executive
Government Body
data protection authority

Complete timeline of this policy change

2025-02-11
concluded