On 6 March 2025, the eSafety Commissioner published a transparency report summarising Telegram's responses to questions about its tools, policies, and measures for detecting and preventing terrorist and violent extremist (TVE) content and for mitigating online radicalisation risks. The report evaluates Telegram's approaches, including proactive detection, user reporting, moderator resourcing, and the risks associated with artificial intelligence (AI) and recommender systems.

Telegram also responded to questions about its detection of known and new child sexual exploitation and abuse (CSEA) material. The report highlighted that, for known CSEA images and videos, Telegram primarily used hash-matching tools, but only in public groups, channels, and stories; private chats and secret chats remained unmonitored. The company relied on internal databases of previously detected material rather than external sources, raising concerns about potential gaps in detection. For new CSEA material, Telegram employed AI/machine-learning classifiers but again excluded private chats and channels. It also did not block links to CSEA material, citing a preference for AI-based classification over static blacklists.
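For readers unfamiliar with the technique, hash-matching works by computing a digital fingerprint of each piece of media and comparing it against a database of fingerprints of previously identified material. The Python sketch below is purely illustrative: Telegram has not disclosed its implementation, the KNOWN_HASHES set and function names here are hypothetical, and production systems generally rely on perceptual hashes (such as PhotoDNA), which tolerate re-encoding and resizing, rather than the exact cryptographic match shown here.

```python
import hashlib
from pathlib import Path

# Hypothetical internal database of fingerprints of previously detected
# material. In practice this would be a large, continuously updated store,
# and the fingerprints would be perceptual hashes, not SHA-256 digests.
KNOWN_HASHES: set[str] = set()


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_material(path: Path) -> bool:
    """Flag a file whose digest appears in the internal hash database."""
    return file_digest(path) in KNOWN_HASHES
```

The limitation flagged in the report follows directly from this design: the lookup only succeeds for fingerprints already in the platform's own database, so material the service has never previously detected, or material circulating in unmonitored private and secret chats, is never checked at all.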