Description

Department of Industry, Science, and Resources issued report on risks surrounding multi-agent AI systems

On 29 July 2025, the Department of Industry, Science, and Resources issued a report on the risks surrounding multi-agent AI systems. The report provides guidance for organisations assessing the risks of multi-agent AI systems, with a focus on risk identification and analysis. The report also identified six failure patterns:

1. Cascading reliability failures manifest when agents' erratic competence and brittle generalisation failures propagate and are reinforced across the network.
2. Inter-agent communication failures involve misinterpretation, information loss, or conversational loops that derail task completion.
3. Monoculture collapse emerges when agents built on similar models exhibit correlated vulnerabilities to the same inputs or scenarios.
4. Conformity bias drives agents to reinforce each other's errors rather than provide independent evaluation, creating dangerous false consensus.
5. Deficient theory of mind occurs when agents fail to form correct assumptions about other agents' knowledge, goals, or behaviours, leading to coordination breakdowns.
6. Mixed-motive dynamics arise when agents pursuing individually rational objectives produce collectively suboptimal outcomes, even under unified governance.

Original source

Scope

Policy Area
Design and testing standards
Policy Instrument
Testing requirement
Regulated Economic Activity
cross-cutting
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2025-07-29
concluded
