On 9 December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the proposal for harmonised rules on artificial intelligence (the AI Act). The compromise text clarifies the definition of an AI system by aligning it with the approach proposed by the OECD, seeking to distinguish AI from simpler software systems. The AI Act exempts systems used exclusively for military or defence purposes, as well as those used solely for research and innovation or for non-professional personal purposes.

The Act imposes testing requirements on so-called "high-risk AI systems" as part of the obligation to implement risk management measures for such systems (Art. 9). Specifically, providers must test high-risk AI systems to identify the most effective risk management measures, to ensure that such systems consistently perform as intended, and to verify that they comply with the Act's requirements. Testing must be carried out before such systems are placed on the market, must not go beyond what is necessary to achieve its purpose, and must be conducted against predefined metrics. Further, the Regulation provides for the possibility of testing in real-world conditions, i.e. outside of regulatory sandboxes, subject to certain conditions.

Following the provisional agreement, the next steps are to finalise the text and submit it to the European Parliament and the Council for formal adoption.