On 1 February 2026, Colorado's Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (SB 24-205), including its testing requirements, enters into force. The Act imposes obligations on developers and deployers of high-risk artificial intelligence (AI) systems. A high-risk AI system is defined as a system developed or substantially modified to make consequential decisions that affect a consumer's access to, or the availability, cost, or terms of, key aspects of their life, including criminal justice remedies, education, employment, essential goods or services, financial or lending services, government services, healthcare, housing, insurance, or legal services.

Developers are required to provide deployers with information about the high-risk system, including the information necessary to conduct an impact assessment, and to issue public statements listing the types of high-risk systems they develop or modify, along with details on known or potential risks of algorithmic discrimination and the measures taken to address them. Developers must also notify the attorney general and known deployers of any discovered or anticipated risks of algorithmic discrimination.

Deployers, in turn, must adopt a risk management policy, conduct impact assessments, inform consumers about consequential decisions, and publicly disclose system details, while reporting discoveries of algorithmic discrimination to the authorities. Finally, before an AI system or model is marketed, deployed, or put into service, the developer must carry out extensive research, testing, and development. This testing should not be conducted under real-world conditions, and it should ensure the AI system's safety and compliance with relevant standards.