On 28 January 2026, the New York Senate passed the Responsible Artificial Intelligence (AI) Safety and Education (RAISE) Act, which establishes transparency and safety requirements for developers of frontier AI models. The Act applies to frontier AI developers that develop, deploy, or operate high-compute foundation models in New York, with particular focus on large developers whose annual revenues exceed USD 500 million.

Covered developers must publish frontier AI frameworks detailing their risk assessment and mitigation processes, issue transparency reports before deployment, regularly update their safety frameworks, and refrain from misleading statements regarding risks. They must also report critical safety incidents within 72 hours, or within 24 hours where imminent harm is identified, periodically submit internal risk assessments, and comply with disclosure and registration requirements overseen by the Department of Financial Services. Enforcement is through civil penalties of up to USD 1 million for an initial violation and up to USD 3 million for subsequent violations.

The Act further establishes reporting mechanisms, requires annual public safety summaries from 2028, and grants rulemaking authority for its implementation.