On 6 March 2025, the eSafety Commissioner published a transparency report summarising Meta's responses to questions about the tools, policies, and measures it uses to detect and prevent terrorist and violent extremist (TVE) content and to mitigate online radicalisation risks. The report evaluates Meta's approaches, including proactive detection, user reporting, moderator resourcing, and risks associated with artificial intelligence (AI) and recommender systems, and covers Meta's handling of TVE content across Facebook, Messenger, Instagram and Threads.

According to the report, TVE content falls under Meta's 'Dangerous Organisations and Individuals' policy, with enforcement ranging from content removal to permanent bans. Detection relies on hash-matching tools and classifiers, although proactive detection does not extend to end-to-end encrypted chats. Appeals mostly concern content removals, and few result in reversals. The report also notes that volunteer moderators are not informed of TVE-related bans and that platform-wide enforcement remains inconsistent. Finally, it highlights that Meta prioritises removal over limiting amplification and has not conducted a dedicated TVE safety assessment for its AI features or its encryption rollout.
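To illustrate the hash-matching approach the report refers to, below is a minimal, hypothetical sketch in Python. It is not Meta's actual system: the hash set, function names, and use of SHA-256 are assumptions for illustration. Production systems typically rely on perceptual hashes (Meta has open-sourced PDQ for images and TMK+PDQF for video) and industry hash-sharing databases such as GIFCT's, so that re-encoded or slightly edited copies of known material still match.

```python
import hashlib

# Hypothetical set of digests of known TVE media. In practice such
# digests are sourced from shared industry databases (e.g. GIFCT's
# hash-sharing programme) rather than hard-coded.
KNOWN_TVE_HASHES: set[str] = {
    "9f2c1e8a0b7d4c6e...",  # placeholder digest, not a real entry
}

def media_hash(data: bytes) -> str:
    """Return a digest of the media bytes.

    SHA-256 stands in here for simplicity; deployed systems use
    perceptual hashes so that near-duplicates also match.
    """
    return hashlib.sha256(data).hexdigest()

def matches_known_tve(data: bytes) -> bool:
    """Flag media whose digest appears in the known-TVE hash set."""
    return media_hash(data) in KNOWN_TVE_HASHES
```

As the report notes, this kind of matching can only run where the platform can see the content, which is why proactive detection does not extend to end-to-end encrypted chats.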
Original source