Description

Adopted AISI guide on evaluation perspectives in Artificial Intelligence safety

On 25 September 2024, the AI Safety Institute of Japan (AISI) adopted the guide on evaluation perspectives in Artificial Intelligence safety. The guide applies to organisations involved in the development and provision of AI systems and emphasises key elements for AI safety evaluations, including human centricity, safety, fairness, privacy protection, security, and transparency. The evaluations focus on determining the suitability of AI systems, particularly those incorporating large language models, from an AI safety perspective. The guide also notes that these evaluations are primarily carried out by managers involved in AI development and provision, and that they should be conducted repeatedly, within a reasonable timeframe and at appropriate intervals.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
National
Government Branch
Executive
Government Body
Other regulatory body

Complete timeline of this policy change

2024-09-25
adopted
