On 28 August 2024, the Safe and Secure Innovation for Frontier Artificial Intelligence (AI) Models Act (SB 1047) was adopted by the California Assembly. Under the Act, developers of covered models may determine, before beginning training, whether their model qualifies for the limited duty exemption. This exemption is defined as "a determination (...) with respect to a covered model, that is not a derivative model, that a developer can reasonably exclude the possibility that the covered model has a hazardous capability (...) or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modification". Developers must comply with various requirements, including the ability to promptly initiate a complete shutdown of the covered model; models qualifying for the limited duty exemption are excluded from these requirements. Further, the Act mandates that developers conduct performance benchmarking and provide the Frontier Model Division with a safety and security protocol, as well as a certification specifying the basis for the limited duty exemption. The Act also requires developers to conduct regular audits of the safety and security protocol.