On 24 March 2025, the Virginia Governor vetoed the Act on the development, deployment, and use of high-risk artificial intelligence (AI). The Act would have outlined the responsibilities of developers and deployers of AI. In particular, it would have required developers to conduct pre-deployment testing, including bias detection, performance evaluation, and red-teaming. It would also have required developers to update their disclosures within 90 days of any substantial modification to the AI system. Deployers would have had to conduct ongoing impact assessments and post-deployment monitoring to track system performance and detect algorithmic discrimination. In addition, AI systems would have been required to comply with recognised risk management frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001. The Act would have exempted certain technologies and activities, including anti-fraud systems and research, and would have placed enforcement authority with the Attorney General, with penalties for violations ranging from USD 1,000 to USD 10,000. The Act would have entered into force on 1 July 2026.