On 29 September 2024, the Governor of California vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence (AI) Models Act (SB 1047). The Act would regulate the development and deployment of advanced AI models to ensure public safety and security. Specifically, the Act would mandate strict transparency and accountability measures in AI development, including periodic reevaluation of safety procedures and annual compliance certifications. Among other provisions, the Act would introduce requirements for computing cluster operators and consultations with advisory committees to ensure that AI innovations are safe, secure, and equitable. Further, the Act would require developers to conduct performance benchmarking and to provide the Frontier Model Division with a safety and security protocol, as well as a certification specifying the basis for any limited duty exemption. The Act would also require developers to conduct regular audits of the safety and security protocol.

In his veto message, the Governor argued that the Act focuses too narrowly on large AI models defined by computational cost, creating a false sense of security. He emphasised that smaller models may also pose significant risks and that the bill fails to address actual risks in high-stakes environments. The Governor's veto can be overridden by a two-thirds vote in both chambers of the legislature.