On 7 November 2025, the European Commission closes the consultation on its draft guidance and reporting template on serious AI incidents under Article 73 of Regulation (EU) 2024/1689 (Artificial Intelligence Act). The consultation aims to gather feedback on the proposed guidance, which specifies reporting obligations for providers of high-risk AI systems in cases of serious incidents or widespread infringements. These obligations are designed to create an early warning system, ensure accountability, enable timely corrective measures, and support transparency in the operation of high-risk AI systems.

The guidance defines "serious incidents" as events or malfunctions leading directly or indirectly to consequences such as death or serious harm to health, serious and irreversible disruption of critical infrastructure, infringement of fundamental rights, or serious harm to property or the environment. It also addresses "widespread infringements", defined as acts or omissions contrary to Union law protecting collective interests that harm, or are likely to harm, individuals across multiple Member States.

The guidance specifies reporting timelines, investigation obligations for providers, and cooperation requirements with market surveillance authorities. It also clarifies the interplay with other Union incident reporting obligations, such as the Critical Entities Resilience Directive (CER), the NIS2 Directive, and the Digital Operational Resilience Act (DORA), noting that for certain high-risk AI systems already covered by equivalent reporting regimes, the AI Act's reporting obligation is limited to infringements of fundamental rights.