On 6 March 2025, the eSafety Commissioner published a transparency report summarising Google's responses about the tools, policies, and measures it uses to detect and prevent terrorist and violent extremist (TVE) content and to mitigate online radicalisation risks. The report evaluates Google's approaches, including proactive detection, user reporting, moderator resourcing, and the risks associated with artificial intelligence (AI) and recommender systems, and summarises Google's TVE policies across YouTube, Drive, and Gemini.

The report outlines Google's enforcement measures, including account bans, content removal, and restricted sharing. Google employs hash-matching, classifiers, and human review to detect and remove TVE content. YouTube enforces graduated penalties, while Google Drive limits detection to publicly shared content. Gemini blocks harmful prompts using response classifiers and filters its training data to remove high-risk content while still ensuring the model can recognise harmful material. Users can report violations through platform-specific mechanisms, and Google escalates credible threats to law enforcement. Google has also committed to ongoing improvements to its moderation and safety tools.
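For context on how hash-matching of known material works in general, the sketch below shows a minimal exact-match check of an uploaded file's digest against a set of known hashes. It is purely illustrative: the hash set, function names, and use of SHA-256 are assumptions, and production systems of the kind the report describes typically rely on perceptual hashing and shared industry hash databases rather than a local exact-match set.

```python
import hashlib

# Hypothetical set of hex digests for known TVE files (illustrative only;
# real deployments query shared industry hash databases, not a local set).
KNOWN_TVE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_known_tve(path: str) -> bool:
    """Exact hash match against the known-hash set; a match would be flagged for human review."""
    return sha256_of_file(path) in KNOWN_TVE_HASHES


if __name__ == "__main__":
    import os
    import tempfile

    # Demo with a throwaway file; in practice this would run on uploaded content.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"benign example content")
        path = tmp.name
    print("flag for review" if matches_known_tve(path) else "no hash match")
    os.remove(path)
```

Exact hashing only catches byte-identical copies; that is why the report also notes classifiers and human review, which handle re-encoded or altered material that a digest match would miss.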