Description

Responsible AI Safety and Education (RAISE) Act enters into force

On 1 January 2027, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act enters into force, including the obligations it imposes on frontier developers and large frontier developers. From this date, large frontier developers must have a published frontier AI framework in place, must have filed their disclosure statements with the Office of Financial Services, and must comply with the transparency reporting requirements, including publishing a transparency report before or alongside each new frontier model deployment. Incident reporting obligations also take effect on this date: frontier developers must report critical safety incidents to the office within 72 hours, or within 24 hours where an imminent risk of death or serious injury exists, and large frontier developers must begin submitting quarterly catastrophic risk assessment summaries to the office.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
ML and AI development
Implementation Level
subnational
Government Branch
executive
Government Body
central government

Complete timeline of this policy change

2026-01-08
under deliberation

On 8 January 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act wa…

2026-01-28
under deliberation

On 28 January 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act w…

2026-03-19
in grace period

On 19 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act ent…

2026-03-20
adopted

On 20 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was…

2026-03-27
adopted

On 27 March 2026, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act was…

2027-01-01
in force

On 1 January 2027, the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act en…