Description

Six international cybersecurity agencies published guidance on careful adoption of agentic AI services

On 1 May 2026, six international cybersecurity agencies, including the Australian Signals Directorate, the Cybersecurity and Infrastructure Security Agency, the Canadian Centre for Cyber Security, and the National Cyber Security Centre, published guidance on securing agentic artificial intelligence (AI) services. The guidance addresses the deployment of large language model-based agentic AI systems in government, critical infrastructure, and industry organisations, and outlines security measures across the system lifecycle. It includes recommendations on oversight mechanisms, identity and access management, adversarial testing, red teaming, third-party component verification, threat modelling, governance policies, progressive deployment, system isolation, continuous monitoring, output validation, and human approval checkpoints. The guidance further states that agentic AI systems should be deployed only for defined, low-risk, and non-sensitive tasks, with least-privilege access controls, governance mechanisms, and human oversight.

Original source

Scope

Policy Area
Data governance
Policy Instrument
Cybersecurity regulation
Regulated Economic Activity
ML and AI development
Implementation Level
bi- or plurilateral agreement
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2026-05-01
adopted
