On 1 May 2026, six international cybersecurity agencies, including the Australian Signals Directorate, the Cybersecurity and Infrastructure Security Agency, the Canadian Centre for Cyber Security, and the National Cyber Security Centre, published guidance on securing agentic artificial intelligence (AI) services. The guidance addresses the deployment of large language model-based agentic AI systems in government, critical infrastructure, and industry organisations, and outlines security measures across the system lifecycle. It includes recommendations on oversight mechanisms, identity and access management, adversarial testing, red teaming, third-party component verification, threat modelling, governance policies, progressive deployment, system isolation, continuous monitoring, output validation, and human approval checkpoints. The guidance further states that agentic AI systems should be deployed only for defined, low-risk, and non-sensitive tasks, with least-privilege access controls, governance mechanisms, and human oversight.
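Three of the recommended controls lend themselves to a brief illustration: least-privilege access, output validation, and a human approval checkpoint. The sketch below is not from the guidance itself; all names (`ALLOWED_TOOLS`, `run_tool`, and so on) are hypothetical, and it only shows how such controls might be layered around an agent's tool calls.

```python
# Hypothetical sketch of three controls the guidance recommends:
# a least-privilege tool allowlist, output validation, and a human
# approval checkpoint before a sensitive agent action executes.

ALLOWED_TOOLS = {"search_docs", "summarise"}   # low-risk tools the agent may call freely
NEEDS_APPROVAL = {"send_email"}                # actions gated behind a human checkpoint


def validate_output(text: str) -> bool:
    """Reject tool output containing obviously unsafe markers (illustrative only)."""
    banned = ("DROP TABLE", "rm -rf")
    return not any(marker in text for marker in banned)


def run_tool(name: str, payload: str, approver=input) -> str:
    # Least privilege: refuse anything outside the declared tool set.
    if name not in ALLOWED_TOOLS | NEEDS_APPROVAL:
        raise PermissionError(f"tool {name!r} is outside the agent's allowlist")
    # Human approval checkpoint for sensitive actions.
    if name in NEEDS_APPROVAL:
        if approver(f"Approve {name}({payload!r})? [y/N] ").strip().lower() != "y":
            return "action denied by human reviewer"
    result = f"{name} handled {payload!r}"      # stand-in for the real tool call
    # Output validation before the result is returned to the agent loop.
    if not validate_output(result):
        raise ValueError("tool output failed validation")
    return result
```

In a real deployment the allowlist, approval policy, and validation rules would come from governance policy and be enforced outside the agent's own process, so the model cannot modify them.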