On 13 October 2025, the Chamber of Deputies passed the Bill regulating Artificial Intelligence (AI) systems, including their design requirements. The Bill requires AI systems to be designed to allow for human control and monitoring, and to be technically robust and resilient so as to minimise damage from failures or attacks. AI systems must also be designed to be transparent, ensuring their outputs are understandable and explainable to the people they affect. Systems that interact with humans must be clearly identified as artificial agents, and AI that generates synthetic content, including audio or video, must produce outputs that are identifiable as artificially created.

The Bill also provides that the use of an AI system will be considered high risk when it presents a significant risk of affecting fundamental rights. High-risk AI systems require a continuous, iterative risk management process throughout their entire lifecycle, and their design must incorporate strong data governance, security standards, and detailed technical documentation. High-risk AI must also include built-in logging functions that record operational and security events for auditing.

Finally, the Bill explicitly prohibits designing any AI system to enable specific harmful uses, including subliminal manipulation or real-time remote biometric identification in public spaces.
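As an illustration only (the Bill specifies requirements, not implementations), the built-in logging obligation for high-risk systems might be satisfied by an append-only record of timestamped operational and security events. The `AuditLog` class and its method names below are hypothetical, not drawn from the Bill:

```python
import json
from datetime import datetime, timezone

class AuditLog:
    """Hypothetical append-only audit log: records operational and
    security events with timestamps for later auditing."""

    def __init__(self):
        self._events = []

    def record(self, category, detail):
        # category: e.g. "operational" or "security";
        # detail: free-form description of the event.
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "category": category,
            "detail": detail,
        }
        self._events.append(event)
        return event

    def export(self):
        # Serialise all events as JSON lines for an external auditor.
        return "\n".join(json.dumps(e) for e in self._events)

log = AuditLog()
log.record("operational", "model inference served")
log.record("security", "failed authentication attempt")
print(log.export())
```

A production system would additionally need tamper-evidence and retention policies, which a sketch like this does not address.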