On 12 March 2026, the Personal Data Protection Authority published a report on agentic Artificial Intelligence (AI), outlining the characteristics, potential uses, and risks of agentic AI systems and their implications for personal data protection. The report explains that agentic AI systems consist of AI agents capable of autonomously pursuing goals, coordinating multi-step tasks, and adapting to changing conditions, which distinguishes them from conventional AI systems that mainly respond to inputs within predefined rules.

The report notes that these systems may be applied in areas including research and development, customer support, finance, healthcare, and incident management, but that they may also create risks related to transparency, accountability, bias, security vulnerabilities, and the accuracy of outputs. It emphasises that the multi-step and autonomous nature of agentic AI can complicate personal data processing by expanding the scope of data use, enabling inference-based profiling, and making oversight and legal responsibility more difficult.

To mitigate these risks, the report recommends a risk-based and human-centred approach, including meaningful human oversight, transparency and explainability mechanisms, privacy by design and by default, clear allocation of roles and responsibilities, and the use of risk assessment tools, including data protection impact assessments.