On 9 December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the proposal for harmonised rules on artificial intelligence (the AI Act), including requirements for "high-risk" AI systems. The AI Act follows a risk-based approach and would establish obligations for providers and users depending on the level of risk generated by the AI system: "unacceptable" risk, "high" risk, and "low or minimal" risk. The compromise agreement clarifies the definition of an AI system by aligning it with the approach proposed by the OECD, seeking to distinguish AI from simpler software systems. The AI Act specifically exempts systems used exclusively for military or defence purposes, as well as those employed solely for research and innovation or for non-professional personal use.

Under the provisional agreement, providers would be required to conduct a fundamental rights impact assessment before deploying a high-risk AI system and placing it on the market. In terms of enforcement, the AI Act introduces fines for violations, calculated as a percentage of the offending company's global annual turnover. More generally, providers and deployers of certain AI systems, such as those intended to interact directly with natural persons, biometric categorisation or emotion recognition systems, and systems that generate or manipulate content, are subject to transparency requirements.

Following the provisional agreement, the next step is to finalise the text and submit it to the European Parliament and the Council for formal adoption.