On 6 December 2022, the Council of the European Union adopted its general approach on the Artificial Intelligence Act (AI Act), which includes testing requirements for so-called "high-risk AI systems" as part of the risk management measures required for such systems (Art. 9). Specifically, developers must test high-risk AI systems to identify the most effective risk management measures, to ensure that such systems perform consistently for their intended purpose, and to verify that they comply with the Act's requirements. Testing must take place before such systems are made available on the market, must not go beyond what is necessary to achieve its purpose, and must be performed against defined testing metrics. In addition, the general approach adds provisions allowing testing in real-world conditions, i.e. outside of regulatory sandboxes, subject to certain conditions.