Australia: eSafety Commissioner registered Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material)

Description

On 9 September 2025, the eSafety Commissioner registered the Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material). The Code, set out in Schedule 4, establishes compliance requirements for social media platforms serving Australian users, excluding messaging features.

The framework establishes a three-tier risk assessment system based on the likelihood that Australian children will encounter harmful content. Platforms must evaluate their risk profile for online pornography, self-harm material, high-impact violence material, and simulated gambling material, classifying the service as Tier 1 (high risk), Tier 2 (moderate risk), or Tier 3 (low risk) for each content type. Certain enterprise-focused platforms with limited social networking capabilities are automatically classified as Tier 3 and need not conduct an assessment. Risk assessments must take into account the platform’s terms of use, user demographics, functionality, scale of Australian users, and safety design features.

Compliance measures vary according to whether harmful content is permitted and the assigned risk tier. Platforms that allow harmful content must implement age assurance measures, provide safety tools such as content filtering and blocking, and give users clear guidance on the protections available. Platforms that prohibit harmful content but are classified as Tier 1 or Tier 2 must deploy detection and removal systems, including machine learning or AI-based solutions. All applicable services are required to maintain clear terms and conditions, operate trust and safety functions, provide reporting and complaint mechanisms, and conduct annual system reviews. Platforms must also engage with safety organisations, publish information on eSafety’s role, and maintain dedicated online safety resources.

The Code includes specific provisions for AI companion chatbot features, requiring additional risk assessments for generative AI restricted categories. These features face strict requirements, including mandatory age assurance for Tier 1 profiles and either age controls or content prevention systems for Tier 2 profiles. Enforcement includes mandatory reporting to eSafety: annual reports for services that allow harmful content, and on-demand reporting for others.

The Code provides a comprehensive framework aimed at protecting Australian children from harmful online material while balancing platform functionality with safety obligations.


Scope

Policy Area
Content moderation
Policy Instrument
Content moderation regulation
Regulated Economic Activity
platform intermediary: user-generated content
Implementation Level
national
Government Branch
executive
Government Body
other regulatory body

Complete timeline of this policy change

2025-09-09
adopted

On 9 September 2025, the eSafety Commissioner registered the Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material).

2026-03-09
in force

On 9 March 2026, the Social Media Services (Core Features) Online Safety Code (Class 1C and Class 2 Material) comes into force.