On 29 September 2024, the Safe and Secure Innovation for Frontier Artificial Intelligence (AI) Models Act (SB 1047) was vetoed by the Governor of California. Under the Act, developers of covered models would have been required to determine, before starting to train a model, whether it qualifies for the limited duty exemption. This exemption is defined as "a determination (...) with respect to a covered model, that is not a derivative model, that a developer can reasonably exclude the possibility that the covered model has a hazardous capability (...) or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modification". Developers would need to comply with various requirements, including the ability to promptly initiate a complete shutdown of the covered model; models qualifying for the limited duty exemption would be excluded from these requirements. Further, the Act would mandate that developers conduct performance benchmarking and provide the Frontier Model Division with a safety and security protocol, as well as a certification specifying the basis for the limited duty exemption. The Act would also require developers to conduct regular audits of the safety and security protocol.

According to the Governor, the Act focuses too narrowly on large AI models defined by their computational cost, which could create a false sense of security. He emphasised that smaller models may also pose significant risks and that the bill fails to address actual risks in high-stakes environments. The Governor's veto can be overridden by a two-thirds vote in both chambers of the legislature.