On 12 June 2025, the New York State Legislature adopted the Responsible Artificial Intelligence (AI) Safety and Education Act (RAISE Act). The Act imposes obligations on large developers of high-risk AI systems, referred to as frontier models.

Before deployment, developers must create a written safety and security protocol, retain an unredacted version for at least five years post-deployment, and publish a redacted version while sharing it with the Attorney General. They must also keep detailed testing records, prevent unreasonable risks of critical harm, and review their protocols annually to reflect changes in model capabilities and best practices. Independent third-party audits of compliance must be conducted yearly, with unredacted audit reports kept on record and redacted versions submitted to the Attorney General. Developers must report compute costs annually and disclose any safety incident within 72 hours.

Employees, including contractors and advisors, are protected from retaliation when reporting risks and must be informed of their rights. Civil penalties for non-compliance may reach 15% of compute costs or USD 10,000 per affected employee. The Act prohibits contractual waivers of liability and allows courts to pierce corporate structures that deliberately evade responsibility.