On 23 December 2024, the Texas Responsible Artificial Intelligence Governance Act was introduced, establishing a framework for the testing of AI systems. Developers and deployers must conduct comprehensive testing of AI systems before deployment, with a focus on potential risks such as algorithmic discrimination and system bias. These tests must evaluate a system's accuracy, explainability, transparency, and reliability under its intended conditions of use, and the evaluation process must incorporate specific metrics, such as those set out in the National Institute of Standards and Technology's (NIST) AI Risk Management Framework. Testing must also include an assessment of training datasets to identify and mitigate unlawful biases or discriminatory outcomes.

Developers are required to provide deployers with detailed documentation of testing results and methodologies, so that systems are used as intended and within their known limitations. Post-deployment monitoring is mandatory to identify and address issues that arise after implementation, and any significant modification to an AI system triggers retesting to ensure continued compliance with the Act's standards.
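The Act describes testing obligations rather than prescribing test code, but a concrete example may help illustrate the kind of pre-deployment bias check these obligations contemplate. The minimal Python sketch below computes a disparate impact ratio, one common screening metric for algorithmic discrimination; the function, the sample data, and the 0.8 review threshold (the "four-fifths" rule of thumb from U.S. employment-discrimination practice) are illustrative assumptions, not requirements drawn from the Act or the NIST framework.

```python
from collections import defaultdict

def disparate_impact_ratio(predictions, groups, positive_label=1):
    """Ratio of the lowest to the highest group selection rate.

    A screening metric for algorithmic discrimination: values well
    below 1.0 indicate one group receives favorable outcomes far less
    often than another, suggesting the system warrants closer review.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][1] += 1
        if pred == positive_label:
            counts[group][0] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    highest = max(rates.values())
    ratio = min(rates.values()) / highest if highest > 0 else 0.0
    return ratio, rates

# Hypothetical pre-deployment check on held-out model decisions.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio, rates = disparate_impact_ratio(preds, groups)
print(f"Selection rates: {rates}")             # {'A': 0.8, 'B': 0.2}
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.25
if ratio < 0.8:  # four-fifths rule of thumb; threshold is illustrative
    print("Flag for review and document in the testing record.")
```

In practice, a check of this kind would be one item in a broader evaluation suite covering accuracy, explainability, and reliability, with the chosen metrics, methodology, and results recorded in the documentation furnished to deployers and rerun after any significant modification.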