On 23 December 2024, the Texas Responsible Artificial Intelligence Governance Act was introduced in the Texas House of Representatives. The Act requires continuous performance monitoring for high-risk AI systems, obliging developers and deployers to establish mechanisms for regularly evaluating system accuracy, reliability, and alignment with intended use cases. Monitoring must identify emergent issues such as algorithmic discrimination, data misuse, or deviations from expected outcomes, and deployers must implement safeguards to promptly address any errors, anomalies, or risks so identified. Performance metrics and monitoring processes must adhere to recognised frameworks, such as the National Institute of Standards and Technology's (NIST) AI Risk Management Framework.

Records of performance evaluations must be retained for a minimum of three years after system deployment to support accountability and regulatory compliance. If significant modifications are made to a system, deployers must reassess its performance to ensure continued compliance with safety and transparency standards. Furthermore, consumers must be informed of monitoring practices when interacting with systems that affect their rights or decision-making processes.
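In practice, the monitoring and record-retention obligations described above could be operationalised along these lines. This is a hypothetical sketch only: the class names, the accuracy threshold, and the record structure are illustrative assumptions, not terms drawn from the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

# The Act requires records to be kept for a minimum of three years
# after deployment; the day count here is an illustrative approximation.
RETENTION = timedelta(days=3 * 365)


@dataclass
class MonitoringLog:
    """Illustrative store for periodic performance evaluations of a
    deployed high-risk AI system."""
    deployed_at: datetime
    min_accuracy: float = 0.90          # illustrative threshold, not from the Act
    records: List[dict] = field(default_factory=list)

    def record_evaluation(self, when: datetime, accuracy: float,
                          anomalies: Optional[List[str]] = None) -> bool:
        """Store one evaluation; return True if it requires prompt
        remediation (sub-threshold accuracy or observed anomalies)."""
        entry = {"when": when, "accuracy": accuracy,
                 "anomalies": anomalies or []}
        self.records.append(entry)
        return accuracy < self.min_accuracy or bool(entry["anomalies"])

    def retention_deadline(self) -> datetime:
        """Earliest date at which stored evaluations could be discarded
        under the three-year retention requirement."""
        return self.deployed_at + RETENTION
```

For example, a deployer might log a routine evaluation (no action needed) and a degraded one (flagged for remediation), while `retention_deadline` indicates how long the log must be preserved:

```python
log = MonitoringLog(deployed_at=datetime(2025, 1, 1))
log.record_evaluation(datetime(2025, 2, 1), accuracy=0.95)   # returns False
log.record_evaluation(datetime(2025, 3, 1), accuracy=0.80)   # returns True
```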