This roundup provides policymakers working on online safety with a structured overview of international developments. We summarise insights from the Digital Policy Alert’s (DPA) daily tracking of developments worldwide, structured along our common language for online safety. Every finding links to the DPA entry, providing an extensive summary and the official government source. Users can filter and download the full DPA dataset and subscribe to our tailored notification service for free.
We focus on developments between 15 June and 15 August 2025, in four areas:
Access restrictions, including the United States’ postponed TikTok ban, the implementation of Australia’s social media ban, and the German designation of DeepSeek as illegal content.
Children’s data protection, including Vietnam’s adopted Law on Personal Data Protection, codes on children’s privacy in Australia and Canada, and the United Kingdom’s Data (Use and Access) Bill and fine against TikTok.
Online content rules, including the implementation of the United Kingdom’s Online Safety Act, the European Union’s frameworks on political advertising, disinformation, and child protection, and designations under Japan’s Information Distribution Platform Regulation Act.
Consumer protection, including China’s rules on “Minor Mode” and live e-commerce, the European Union’s consultation on the Digital Fairness Act, and the United States Federal Trade Commission's vacated Negative Option Rule.
Access restrictions are a strict, but increasingly common, regulatory instrument. We distinguish between bans, restricting access for all users, and age gates, restricting access for users below a certain age. Governments applied bans to both specific products and ranges of services:
The President of the United States extended the enforcement pause of the Protecting Americans from Foreign Adversary Controlled Applications Act, which requires ByteDance to divest TikTok or face a ban in the United States.
Several German data protection commissioners issued a notice to Apple and Google, stating that DeepSeek constitutes illegal content under the Digital Services Act due to unlawful data transfers to China and is to be removed from app stores.
Brazil’s data protection authority upheld a ban on Tools for Humanity, specifically regarding its financial incentives for users to provide biometric data for its “World ID”.
Sweden expanded the criminalisation of the purchase of sexual services to include remote and digital means.
Two proposals prohibiting online gambling were introduced to the Philippines’ Senate.
Illinois enacted a ban on providing therapy services without a licence, including services delivered via AI.
Beyond bans, several governments demanded age verification to restrict access for users below a given age threshold, focusing on social media and adult content websites:
Australia’s Minister for Communications issued the Online Safety (Age-Restricted Social Media Platforms) Rules 2025. By December 2025, “age-restricted” social media platforms such as Facebook, Instagram, Snapchat, TikTok, X, and YouTube must prevent access for users under 16, as required by the Social Media Minimum Age Act.
A Bill banning users under 18 from social media platforms was introduced in the Philippines.
The United States Supreme Court upheld the constitutionality of a Texas law requiring online publishers of adult content to restrict access for minors. Florida’s Attorney General filed lawsuits against several adult content providers, namely WebGroup, NKL, Sonesta, GGW, and Traffic F, for failing to implement age verification.
The French Council of State rejected a request to halt an order requiring access restrictions for minors on adult content websites, without ruling on its constitutionality. The Regulatory Authority for Audiovisual and Digital Communication subsequently issued compliance notices to several adult content websites.
A Bill mandating access restrictions for minors on adult content websites was introduced to the Brazilian Chamber of Deputies.
In addition to access restrictions, several governments demanded age verification to establish tailored safeguards for users below age thresholds. Here, we outline one framework whose safeguards span online content, children’s data, and consumer protection. The sections below focus on each of these policy areas and open with developments related to children.
The United Kingdom is currently implementing the 2023 Online Safety Act. The Office of Communications (Ofcom) issued a range of codes of practice and guidance to support compliance with obligations regarding the protection of children and illegal content.
Regarding the protection of children, Ofcom’s Protection of Children Code of Practice for user-to-user services and search services entered into force in July 2025. Providers must assess whether children access their services and, if so, conduct a children’s risk assessment. Safety measures must then address the identified risks, including “highly effective age assurance” measures to restrict access to content such as pornography and self-harm. To support implementation, Ofcom issued guidance on content harmful to children as well as children’s risk profiles and a register of risks, and updated its guidance on age assurance and children’s access assessments. Ofcom is currently consulting on amendments to the code for user-to-user services, on amendments to its age assurance guidance, and on age verification for adult content services.
Regarding illegal content, Ofcom’s Illegal Content Code of Practice for user-to-user services and search services took effect in March 2025. Providers must remove illegal content once aware of it and implement measures to reduce the risk of showing “priority criminal content”, such as child sexual exploitation, human trafficking, and terrorism. Ofcom issued guidance on illegal content risk assessments, record-keeping, and content moderation standards, and established a register of risks. Ofcom is currently considering amendments to the codes of practice for both user-to-user services and search services, as well as an extension of the codes to smaller platforms to expand user controls. Ofcom also recently amended its guidance on illegal content judgements and proactive technology measures. Finally, Ofcom opened an investigation into an online suicide forum for failing to assess illegal content risks, requested that social media providers document illegal content risks, and opened a consultation on illegal content enforcement notices.
Finally, Ofcom is developing additional obligations for categorised services, with consultations on codes and guidance expected in early 2026. The regulations establishing category thresholds, which entered into force in February 2025, were unsuccessfully challenged in court by the Wikimedia Foundation.
To protect children’s data, governments impose both restrictions on data processing and safeguards during data processing. Governments rarely restricted the processing of minors’ data:
In Brazil, a Bill to protect children online was introduced to the Chamber of Deputies. It prohibits profiling for targeted advertising to children and adolescents.
The Connecticut Act concerning Social Media and Online Services, Products, and Features entered into force. It prohibits the processing of minors' personal data for targeted advertising, sales, or profiling, among other practices.
Most governments focused on parental approvals and consent before processing minors’ data:
Vietnam adopted the Law on Personal Data Protection, requiring the legal representative’s consent for children’s data processing, which can be withdrawn at any time.
The United States Federal Trade Commission amended the Children’s Online Privacy Protection Rule. The rule requires operators to obtain parental consent before collecting, using, or sharing children’s personal information, including for targeted advertising.
A court in the United Kingdom upheld the Information Commissioner's GBP 12.7 million fine against TikTok for processing data of children under 13 without parental consent.
The Australian Information Commissioner closed the consultation on the Children's Online Privacy Code, clarifying how the Australian Privacy Principles apply to children’s data. The Code addresses how providers can obtain “genuine consent” from children and parents.
Draft amendments to the Colorado Privacy Act Rules were filed. The amendments would require controllers to obtain consent from the child or parent before targeted advertising, data sales, and profiling activities. The New Jersey Attorney General closed the consultation on the implementing rules for the Data Privacy Act, requiring parental consent for processing the personal data of children under 13.
Other governments focused on safeguards while children’s data is processed:
The United Kingdom’s Data (Use and Access) Bill received Royal Assent. Providers of online services likely to be accessed by children must implement data protection by design and by default measures that reflect children’s needs and limited risk awareness.
Canada’s Privacy Commissioner held a consultation on the Children's Privacy Code under the Personal Information Protection and Electronic Documents Act. The consultation covered age assurance mechanisms to determine when children are accessing services. Safeguards for children would include age-appropriate privacy measures, including obligations related to consent, data minimisation, and privacy by default. The code would further restrict tracking technologies and deceptive interfaces that influence privacy-related decisions for children.
China’s National Cybersecurity Standardisation Technical Committee consulted on a standard distinguishing between three levels of data protection in services accessible to minors: basic, enhanced interactivity, and age-appropriate optimisation.
The G7 Data Protection and Privacy Authorities issued a joint statement asking developers of emerging technologies to consider privacy protections for children. It outlines voluntary measures such as risk assessments, privacy-enhancing technologies, age-appropriate safeguards, and clear information.
Online content rules comprise content moderation requirements; related obligations, such as transparency; and user speech rights, which provide redress against content moderation decisions. Content moderation requirements were most frequent, including rules on the protection of children, the moderation of specific content types, and general moderation regimes.
Moderation requirements addressing the protection of minors spanned continents:
As part of the ongoing implementation of the European Union’s Digital Services Act, the European Commission adopted guidelines on the protection of minors. The guidelines recommend age assurance and minor protection measures that correspond to the risk level for minors. Protections for minors include reducing the risk of encountering harmful content, disabling features that contribute to excessive use and manipulative commercial practices, and privacy by default, among others. The Commission also released an age verification blueprint providing white-label age verification solutions until the European Digital Identity Wallets become operational.
The Australian eSafety Commissioner approved three industry codes for hosting services, extended internet carriage services, and extended internet search engine services under the Online Safety Act. The codes introduce safeguards to protect children from certain content, including pornography, violence, and self-harm. They take effect in December 2025.
The Cyberspace Administration of China held a consultation on draft measures for classifying online information that may affect minors’ physical and mental health.
India’s Ministry of Information and Broadcasting investigated the dissemination of prohibited content, blocking 43 streaming platforms for insufficiently protecting children.
Ireland’s Media Commission requested information from X on its compliance with the Online Safety Code, which took effect in July 2025. The Code requires online platforms to implement effective age assurance measures if they host pornography and to protect users from harmful content. In a separate lawsuit filed by X, the High Court upheld the Media Commission’s authority over the Online Safety Code.
Various governments regulated the moderation of child sexual abuse and exploitation material:
The United States’ Ninth Circuit Court of Appeals ruled that X can be held liable for failing to report child sexual abuse material and for designing defective content reporting systems. At the state level, the Montana Act codifying grooming as an offence and expanding child sexual abuse provisions to include digital depictions entered into force. Furthermore, the Governor of Texas signed the Responsible AI Governance Bill, which prohibits the creation of child exploitation content and text-based sexualised dialogue impersonating minors using AI.
Australia’s Federal Court dismissed X’s appeal against a transparency notice from the eSafety Commissioner regarding measures addressing online child exploitation content.
The European Parliament adopted its position on the Directive on combating the sexual abuse and sexual exploitation of children and child sexual abuse material, which has been under deliberation since February 2024.
The Republic of Korea launched an AI-based early response system for child and youth sexual exploitation online and concluded an inquiry into the handling of illegal content by online platforms, including child exploitation content.
Measures regarding content relevant to political discourse, including news, were also widespread:
The European Commission opened a consultation on guidance on the implementation of the Regulation on transparency and targeting of political advertising. The Regulation enters into force in October 2025. In addition, the Voluntary Code of Practice on Disinformation, which was integrated into the Digital Services Act, entered into force.
The Cyberspace Administration of China launched a two-month enforcement action into the dissemination of false information, including AI-generated content. A previous campaign tackled online accounts spreading false, unlawful, and defamatory information.
Brazil’s Committee on the Constitution, Justice, and Citizenship released an amended draft Bill prohibiting the use of AI in electoral advertising.
Singapore’s Ministry of Communications designated The Online Citizen’s website and social media pages as declared online locations under the Protection from Online Falsehoods and Manipulation Act, due to the repeated dissemination of falsehoods. The designation, which follows a previous designation that expired, lasts until July 2027 and allows the platforms to operate but not to generate financial benefits, among other restrictions.
Other content moderation requirements covered several types of content:
Japan’s Ministry of Internal Affairs and Communications designated Google, LINE, Yahoo, Meta, TikTok, and X as large-scale telecommunications service providers under the Information Distribution Platform Regulation Act. The Act, in force since April 2025, demands both content moderation and transparency from designated platforms.
Vietnam’s Ministry of Public Security closed a consultation on the draft Cyber Security Law, which establishes content moderation obligations across five categories of content, among other requirements.
The Australian Administrative Review Tribunal overturned a removal notice issued by the eSafety Commissioner in a lawsuit relating to cyber-abuse.
Malaysia’s High Court issued an order in favour of the Communications and Multimedia Commission in a lawsuit regarding the spread of harmful content on two Telegram channels.
The Missouri Attorney General sent formal letters to Google, Microsoft, OpenAI, and Meta concerning alleged AI chatbot bias and inaccuracy.
Obligations related to content moderation focused on liability and transparency:
Brazil’s Supreme Court ruled that the liability shield in the Internet Civil Rights Framework was partially unconstitutional. Platforms can now be held liable without a prior court order in serious cases involving criminal or harmful content.
The implementing regulation laying down templates for transparency reporting under the European Union’s Digital Services Act entered into force.
Russia adopted bills increasing fines for searching for extremist content and for advertising VPN services, and establishing liability for creating extremist content.
China conducted an enforcement campaign regarding the labelling of AI-generated content.
Finally, user speech rights were addressed in Europe. The European Media Freedom Act entered into force, introducing safeguards against the “unjustified removal” of content produced by media service providers that meet Member States’ “editorial standards”. The European Commission also consulted on guidelines under the Act. In addition, the European Court of Human Rights ruled that Russia violated Google’s freedom of expression by imposing fines following YouTube account removals.
We distinguish between four types of consumer protection rules: age-based safeguards, fair marketing and advertising obligations, user rights, and quality of service requirements.
Age-based safeguards were rare but covered both the design of services and parental controls:
China’s National Cybersecurity Standardisation Technical Committee closed a consultation on a standard regarding the safeguarding of minors’ personal information in “Minor Mode”. Designed to protect children, Minor Mode includes daily usage limits of one hour for users under 16 and two hours for users between 16 and 18. Furthermore, it requires age-appropriate content controls, limits on addictive features, and parental controls. The Cyberspace Administration of China previously issued guidelines for mobile device manufacturers, app providers, and app stores to implement Minor Mode.
The United States Supreme Court denied an application to halt the Mississippi Protecting Children Online Act during ongoing litigation, without ruling on the Act’s constitutionality. The Act requires social media platforms to verify users’ age, demanding parental consent for minors to create accounts and heightened protection from harmful content. Similar litigation is underway for laws in Arkansas, Florida, Georgia, Ohio, and Utah.
Regarding fair marketing and advertising practices, governments addressed misleading information, the disclosure of advertisements, the prevention of deceptive interfaces, as well as hidden fees and fake reviews. The majority of developments focused on misleading information:
China’s State Administration for Market Regulation closed a consultation on draft regulations on live e-commerce, prohibiting misleading statements about product quality, performance, origin, sales volumes, and user reviews. The regulations also ban the use of AI to fabricate deceptive content. The Administration also conducted an investigation into live e-commerce.
France’s Directorate-General for Competition, Consumer Affairs and Fraud Control reached a EUR 40 million settlement with Shein/ISEL for misleading consumers concerning price reductions and environmental commitments, among other practices.
The United States Federal Trade Commission reached a USD 14 million settlement with Match Group, owner of several online dating platforms, regarding deceptive advertising. The Commission also obtained an injunction against Ecom Genie, Alpine Management, and Vicenza Capital for misleading marketing and false earnings claims.
The Australian Competition and Consumer Commission issued public warning notices against “online ghost stores” that mislead consumers by pretending to be Australian, while engaging in drop-shipping from abroad.
The Republic of Korea’s Fair Trade Commission closed a consultation on amended guidelines on deceptive advertising. The Commission also fined Krafton and Com2uS KRW 2.5 million each for misrepresenting the chances of obtaining items in probability-based game purchases. The Communications Commission opened an investigation into SK Telecom for publishing misleading information on a cyber breach.
Another point of emphasis was the disclosure of advertisements, especially by influencers:
Italy’s Communications Regulatory Authority adopted guidelines and a code of conduct for influencers’ compliance with the Consolidated Law on Audiovisual Media Services.
Vietnam’s National Assembly passed a Bill amending the advertising law, introducing advertising transparency rules, including for influencers.
Brazil's National Consumer Secretariat issued a warning requiring clear advertising identification on social media.
China’s State Administration for Market Regulation issued enforcement guidelines for the Advertising Law, including transparency requirements.
Rules regarding deceptive interface designs, or dark patterns, were less frequent:
The European Commission opened a consultation on the Digital Fairness Act, including rules to address dark patterns and establish fair marketing requirements for digital platforms.
The Consumer Online Payment Transparency and Integrity Act was introduced to the United States Senate. The Act establishes that consent obtained through dark patterns is invalid and empowers the Federal Trade Commission to address automatic renewals.
The Republic of Korea’s Fair Trade Commission investigated e-commerce platforms under the amended Consumer Protection Act, focusing on new deceptive interface design rules.
Additionally, some governments focused their fair marketing rules on hidden fees and fake reviews:
The United Kingdom’s Competition and Markets Authority closed a consultation on guidance concerning price transparency under the Digital Markets, Competition and Consumers Act. The Authority also opened an investigation into online consumer review practices over potential non-compliance with guidance on fake reviews.
Singapore’s Competition and Consumer Commission sanctioned Quantum Globe for posting misleading AI-generated reviews on Sgcarmart, a platform for used cars.
Russia’s State Duma passed a Bill increasing fines for hidden fees.
In terms of consumer rights, governments focused on information and cancellation rights:
The United States Court of Appeals for the Eighth Circuit vacated the Federal Trade Commission's Negative Option Rule over procedural errors. The Rule aimed to simplify the cancellation process for subscriptions and memberships, requiring businesses to make cancellations as easy as sign-ups. Subsequently, the Click to Cancel Consumer Protection Act and the Unsubscribe Act were introduced to Congress, aiming to facilitate cancellations.
The Philippines’ Internet Transactions Act of 2023 entered into force, establishing comprehensive rules on user rights in online consumer transactions.
Poland's Competition Authority found that Booking’s ambiguous terms of service violated consumers’ right to information. Among other issues, Booking failed to inform consumers whether third parties offering services on the Booking.com website were entrepreneurs.
The Republic of Korea’s Fair Trade Commission fined Tium Communication KRW 110 million for violations of the Electronic Commerce Act for not granting users access to withdrawal options and for failing to refund consumers who cancelled their contracts.
The European Parliament and Council reached a political agreement on an amendment to the Directive on alternative dispute resolution for consumer disputes, aiming to strengthen user rights in e-commerce disputes. The European Union also discontinued the European Online Dispute Resolution platform.
Regarding quality assurance, several governments addressed quality standards and the removal of certain unsafe products:
The European Commission accepted commitments by AliExpress in its investigation under the Digital Services Act. The commitments aim to address AliExpress’s failure to properly assess and mitigate systemic risks related to the dissemination of counterfeit goods and products that do not adhere to European safety standards. The Commission also issued preliminary findings that Temu breached the same obligations under the Digital Services Act. In addition, the European Parliament adopted a resolution on product safety in e-commerce, while a coordinated enforcement action analysed very large online platforms’ compliance with the General Product Safety Regulation.
China’s State Administration for Market Regulation held consultations on rules regarding e-commerce and live e-commerce, as well as rules specifically addressing food safety and industrial products. The frameworks require providers to remove unsafe products.
The Philippines’ Department of Trade and Industry launched the E-Commerce Philippine Trustmark, designed to certify quality of service standards for e-commerce platforms.
Lastly, two governments focused on the identification of sellers:
Thailand’s Electronic Transactions Development Agency issued regulatory clarifications for designated online marketplace platforms. Such platforms are subject to additional procedural obligations, including mandatory identity verification for operators and sellers.
China’s State Administration for Market Regulation’s draft regulations on live e-commerce, noted above, also mandate live e-commerce platforms to collect and verify identity and qualification information from streamers and marketing personnel before allowing them to operate.