On 6 March 2025, the eSafety Commissioner published a transparency report summarising WhatsApp's responses on the tools, policies, and measures it uses to detect and prevent terrorist and violent extremist (TVE) content and to mitigate online radicalisation risks. The report evaluates WhatsApp's approaches, including proactive detection, user reporting, moderator resourcing, and risks associated with artificial intelligence (AI) and recommender systems.

The report examines WhatsApp's enforcement and detection of TVE content across private messaging and channels. WhatsApp defines TVE material as content supporting designated organisations or individuals, and it enforces bans, account strikes, and group suspensions, with stricter measures applied to channels. Users, regulators, and law enforcement can all report TVE content, while proactive detection relies on hash-matching for images and videos and text classifiers for written content. Third-party vendors assist in monitoring off-platform activity, and all AI-detected cases undergo human review, though over 50% of bans were overturned on appeal.

The report also raises concerns about enforcement inconsistencies and the lack of safety measures implemented in channels ahead of their rollout, suggesting potential regulatory scrutiny.
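For readers unfamiliar with the hash-matching approach mentioned above, the following is a minimal, hypothetical sketch rather than WhatsApp's actual pipeline: it computes a digest of an uploaded media file and checks it against a set of hashes of known TVE material, flagging matches for human review. It uses exact SHA-256 hashes for simplicity; production systems generally rely on perceptual hashes (such as Meta's open-source PDQ) so that re-encoded or slightly altered copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical set of hashes of previously identified TVE images/videos.
# In practice these would come from an industry hash-sharing database and
# would typically be perceptual hashes rather than exact digests.
KNOWN_TVE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a media file in streaming chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_tve_media(path: Path) -> bool:
    """Return True if the file's hash matches the known-hash set,
    in which case it would be queued for human review."""
    return sha256_of_file(path) in KNOWN_TVE_HASHES
```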