On 28 February 2024, the Federal Law Regulating Artificial Intelligence, which includes testing requirements, was introduced in the Senate of Mexico. The proposed law aims to establish a legal framework for the development, deployment and use of artificial intelligence (AI) systems in the country, and classifies AI systems by level of risk as "unacceptable", "high", or "low or minimal".

High-risk AI systems are defined as systems capable of harming individuals' health or safety or of infringing human rights, as well as systems used for purposes such as remote biometric identification in private spaces; management of utilities; educational access and evaluation; worker selection and monitoring; assessment of eligibility for benefits and social programs; evaluation of economic solvency; prioritisation of emergency responses; crime risk assessment; support for criminal investigations; migration and border control management; and influencing political-electoral preferences without clear disclosure.

Before being placed on the market or deployed, systems classified as high-risk would be required to undergo assessment and human oversight testing as specified by the relevant authority.