Description

Adopted AI safety evaluation platform

On 10 May 2024, the United Kingdom Artificial Intelligence (AI) Safety Institute launched an AI safety testing platform aimed at enhancing AI safety evaluations. The platform was released under an open-source license and lets testers assess AI models and score them on their capabilities. It aims to accelerate the development of safer AI models by supporting collaboration among researchers and developers worldwide.
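The workflow the announcement describes, running a model against test cases and producing a capability score, can be sketched roughly as follows. This is a minimal illustration of the general evaluation pattern, not the platform's actual API; every name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One evaluation case: a prompt and its expected answer (hypothetical structure)."""
    prompt: str
    target: str

def evaluate(model, samples):
    """Run a model over evaluation samples and return an accuracy score in [0, 1]."""
    correct = sum(1 for s in samples if model(s.prompt).strip() == s.target)
    return correct / len(samples)

# Hypothetical stand-in for a real model under test.
toy_model = lambda prompt: "4" if "2 + 2" in prompt else "unknown"

samples = [Sample("What is 2 + 2?", "4"), Sample("Capital of France?", "Paris")]
print(evaluate(toy_model, samples))  # 0.5
```

A real evaluation platform layers dataset loading, model adapters, and richer scorers (e.g. graded rubrics rather than exact match) on top of this basic loop.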

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2024-05-10
adopted