This roundup provides policymakers working on online safety with a structured overview of international developments. We summarise insights from the Digital Policy Alert’s (DPA) daily tracking of developments worldwide, structured along our common language for online safety. Every finding links to the DPA entry, providing an extensive summary and the official government source. Users can filter and download the full DPA dataset and subscribe to our tailored notification service for free.
We focus on developments between 15 October 2025 and 15 January 2026, in four areas:
Access restrictions, including Australia's implementation of the Social Media Minimum Age Act, the United Kingdom's enforcement of age verification for pornography sites, the European Parliament's resolution calling for a digital minimum age of 16, and Indonesia and Malaysia's temporary suspensions of Grok over harmful AI-generated content.
Children’s data protection, including settlements against Disney and Google for Children's Online Privacy Protection Act violations in the United States, China's implementation of security requirements for processing minors' sensitive personal information, India's Digital Personal Data Protection Amendment Bills mandating verifiable parental consent, and the United Kingdom’s investigation into children's privacy in mobile games.
Online content rules, including the European Commission's EUR 120 million fine against X for breaches of the Digital Services Act, South Korea's amended Network Act expanding content moderation and transparency obligations, Malaysia's implementation of the Online Safety Act, and the United Kingdom's investigation into X over child safety and Grok-generated content.
Consumer protection, including the European Commission's preliminary findings against Meta regarding dark patterns, the United Kingdom’s investigations into hidden fees on ticketing and e-commerce platforms, lawsuits against Roblox and Character Technologies over child safety violations in the United States, and South Korea's fines against streaming services for obstructing subscription cancellations.
Access restrictions are a strict, but increasingly common, regulatory instrument. We distinguish between bans, restricting access for all users, and age gates, restricting access for users below a certain age. Governments applied bans to both specific products and ranges of services:
In early January, reports emerged that the Grok AI chatbot on the X platform was used to generate and disseminate non-consensual sexualised images, including undressed images of individuals and sexualised images of children, prompting investigations and access restrictions in several jurisdictions. The Indonesian Ministry of Communications and Digital temporarily suspended access to the Grok application and requested clarification from X regarding potential negative impacts associated with its use.
The Malaysian Communications and Multimedia Commission imposed a temporary restriction on access to the Grok application on the X platform, which was lifted after 12 days once X implemented additional preventive and safety measures. The Commission also announced the commencement of legal action against X Corp and xAI for failing to prevent harmful, non-consensual AI-generated content.
In Vietnam, the Digital Technology Industry Law came into force, banning harmful AI practices such as manipulative systems, exploitative profiling, and certain biometric or facial recognition uses, as well as digital activities that threaten state interests, individual rights, public health, or social order. The National Assembly also adopted the Law on Artificial Intelligence, prohibiting AI systems that mislead, manipulate, target vulnerable groups, or spread false content that endangers national security or safety.
Denmark’s Ministry of Culture closed a consultation on draft amendments to the Copyright Act that would prohibit the manufacture, import, and use of tools enabling unauthorised access to encrypted and access-restricted content.
India's Ministry of Electronics and Information Technology closed the consultation on the draft implementing rules for the law banning certain online money games, which would establish a game classification framework, registration scheme, and oversight authority.
In the United States, at the state level, California implemented an Act that prohibits AI developers and deployers from implying healthcare licensure or presenting outputs as licensed medical advice. Furthermore, a bill to prohibit the sale of toys with companion chatbots was introduced in the California Senate.
Beyond bans, several governments demanded age verification to restrict access for users below a given age threshold, focusing on social media, artificial intelligence (AI) companions, and adult content websites:
Denmark’s government reached a political agreement on Digital Child Protection that includes measures to restrict access to social media platforms for children under 15.
The European Parliament adopted a resolution on protecting minors online, calling for 13 to be set as the minimum age for any social media use, and 16 as the digital minimum age for access to social media and AI companions without parental consent.
In the United States, at the federal level, the House Committee on Energy and Commerce announced the Reducing Exploitative Social Media Exposure for Teens Act, which would prevent covered platforms from allowing minors to open or maintain accounts. A separate bill was introduced in the Senate to require age verification and prohibit minors’ use of AI companions.
In the United States, a federal judge issued a preliminary injunction blocking the enforcement of Texas's App Store Accountability Act, which would have required app stores to verify users’ ages and link any minor’s accounts to a parent before allowing any transactions. In Florida, the Artificial Intelligence Bill of Rights was introduced, requiring companion chatbots to prohibit minors from creating or maintaining accounts unless a parent or guardian provides consent.
In Brazil, the Ministry of Justice concluded a consultation on age verification standards for digital services accessed by children. Proposed measures include privacy-preserving technologies such as digital credentials, age tokens, and zero-knowledge proofs.
Italy implemented requirements for 48 pornographic platforms regarding the deployment of age verification systems to block minors’ access.
The United Kingdom's Office of Communications (Ofcom) released guidance on the placement of age verification on pornography sites and closed a consultation on an updated draft guidance on highly effective age verification methods. Ofcom also closed consultations on its statutory reports on the use and effectiveness of age assurance and the role of app stores in children's online safety. Regarding enforcement, Ofcom fined Itai Tech GBP 50,000 for failing to implement effective age verification on its nudification site Undress, with an additional GBP 5,000 fine for failure to respond to a statutory information request. Further, Ofcom issued provisional notices of contravention to 8579 LLC and Kick Online Entertainment for failure to implement age assurance measures, and expanded investigations into Cyberitic, XGroovy, First Time Videos, and Sun Social Media for failure to respond to formal information notices. Ofcom closed an investigation into Trendio after the company demonstrated good-faith steps towards meeting its duties, and confirmed that AVS Group introduced new age assurance measures across all adult websites following a GBP 1 million fine for failing to implement age checks on 18 pornographic sites. Finally, Ofcom opened investigations into XXBrits, Porntrex, Fapello, Hqporner, and Novi over compliance with age assurance requirements to prevent children's access to pornographic content. Notably, Novi's investigation additionally examines children's access to its generative AI service Joi.
In addition to access restrictions, several governments demanded age verification to establish tailored safeguards for users below certain age thresholds. Here, we outline one framework with safeguards spanning online content, children’s data, and consumer protection. The sections below focus on each of these policy areas and open with developments related to children.
Australia set minimum age requirements for social media use and established safeguards through the registered industry codes to limit children’s exposure to harmful material. Under the Social Media Minimum Age Act, platforms such as Facebook, Instagram, Snapchat, TikTok, YouTube, X, Threads, and Reddit must take reasonable steps to prevent users under 16 from holding accounts. Two days after the Act came into force, Reddit filed a lawsuit challenging its validity, arguing that it places an unjustified burden on political communication. Separately, a bill to repeal the social media minimum-age requirement was introduced in the Senate.
Alongside these access restrictions, three industry codes registered under the Online Safety Act entered into force. The codes set out safeguards to protect children from Class 1C and Class 2 material. Class 1C material covers consensual fetish content and unclassified games that contain either actual sexual activity or simulated sexual activity that is explicit and realistic. Class 2 material includes content inappropriate for children, such as non-violent sexual activity between adults, simulated gambling, high-impact violence, violence instruction, and material promoting self-harm, suicide, or eating disorders.
The internet search engine services code requires providers to implement age verification or default safety settings, parental controls, and filtering tools to limit children’s exposure to Class 1C and Class 2 material. Providers must also display crisis support information, establish processes for reporting illegal content and handling user concerns, review and improve algorithms to enhance safety, engage with safety organisations, maintain trust and safety teams, and submit compliance reports to eSafety upon request.
The hosting services code requires third-party hosting providers to maintain policies or contractual terms obliging customers to comply with Australian content laws, take proportionate enforcement action in cases of breach, and establish contact mechanisms for end-users, including information on eSafety and complaint processes. Providers must respond to communications from eSafety, assess and mitigate risks arising from significant service changes that could increase children’s access to harmful material, and submit compliance reports to eSafety upon request.
The internet carriage code requires providers to inform users about available filtering tools, promote the family-friendly filter program, and explain users’ rights to raise concerns about Class 1C and Class 2 material. Providers must link to the eSafety complaints process, maintain procedures for handling user reports, and make online safety information publicly available. On request, providers are also required to submit compliance reports to eSafety outlining the measures taken to meet the Code’s requirements.
The eSafety Commissioner published the online safety codes and standards regulatory guidance to assist service providers in applying the codes and standards, including risk assessments, reporting, and compliance obligations. It also clarifies which services are covered and how the regulations interact, and provides additional guidance for areas such as generative AI and age assurance.
To protect children’s data, governments impose both restrictions on data processing and safeguards during data processing. Several governments restrict the processing of minors’ data:
Ghana's National Information Technology Agency consulted on the draft Electronic Transactions Act, which includes measures prohibiting targeted advertising based on children’s personal data, whether carried out directly or through automated profiling systems.
France's Data Protection Authority issued guidance on political communication tools, clarifying that targeting children under 17 using personal data is prohibited. It also sets limits on profiling with sensitive data and requires record-keeping for targeting tools, including AI.
In India, two bills were introduced to the Parliament to amend the Digital Personal Data Protection Act to establish protections for minors’ data. The first bill requires data fiduciaries to obtain verifiable parental or guardian consent before processing a child’s data and prohibits tracking, behavioural monitoring, or targeted advertising aimed at children. The second bill would prohibit the processing of data that is likely to cause a “detrimental effect”, covering practices such as behavioural profiling and data sharing that may expose children to risks such as identity theft, online harassment, or addiction.
In the United States, at the federal level, the Don’t Sell Kids’ Data Act was introduced in the House of Representatives to restrict data brokers from collecting, using, or retaining the personal data of known minors. At the state level, Connecticut implemented a law prohibiting the processing of minors’ personal data for targeted advertising, sales, or profiling, while Oregon amended its Consumer Privacy Law to prohibit targeted advertising, profiling, and the sale of personal data of individuals under 16.
Most governments focused on parental approval and consent before the processing of minors’ data:
In the United States, at the federal level, the Children and Teens' Online Privacy Protection Act was introduced to the House of Representatives to require parental consent for collecting data from children under 17 and to ban its use for personalised advertising. Separately, courts approved settlements for violations of the Children’s Online Privacy Protection Act: Disney must pay USD 10 million and implement parental consent and compliance reporting, while Google agreed to a USD 30 million class-action settlement for collecting personal data from children under 13 who viewed child-directed content on YouTube without parental consent. At the state level, Virginia proposed amendments to its Consumer Data Protection Act requiring verifiable parental consent for users under 18. In California, video game developer and publisher Jam City settled for USD 1.4 million for sharing data of users aged 13-15 without consent, and Sling TV agreed to a USD 530,000 settlement and to provide parental protection tools.
Ghana's Ministry of Communication, Digital Technology, and Innovations closed a consultation on the Data Protection Act, which would require consent of a parent or legal guardian for processing a child's personal data.
China implemented security requirements for processing sensitive personal information, requiring providers to verify the age of users and obtain guardian consent for those under 14. The Cyberspace Administration also opened a consultation on draft measures for the administration of anthropomorphic interactive AI, which would require providers to obtain guardian consent from users who are minors.
The National Assembly of Vietnam adopted amendments to the Cybersecurity Law, which include a ban on the collection, exploitation, or sale of children’s personal data without guardian consent.
Other governments focused on safeguards while children’s data is processed:
Canada's Office of the Privacy Commissioner announced its participation in the 2025 Global Privacy Enforcement Network sweep, alongside 30 other privacy authorities. The initiative will examine data collection and transparency, as well as measures to protect children’s privacy and limit tracking, profiling, and exposure to harmful content.
The Cyberspace Administration of China (CAC) opened a consultation on the internet application personal information collection and use regulations, requiring technical safeguards to prevent alteration or loss of minors’ data. Separately, CAC set a deadline for the end of January 2026 for submitting audit reports required under the Regulations on the Protection of Minors on the Internet.
Germany's Conference of Independent Data Protection Supervisory Authorities adopted a resolution calling for amendments to the General Data Protection Regulation to strengthen children’s protection. It proposes limits on consent for the processing of sensitive data, reinforces data protection by design and by default, and calls for the systematic consideration of children’s risks in data protection impact assessments.
Ghana's Ministry of Communication, Digital Technology, and Innovations closed consultations on the Data Protection Act, which would require government authorisation for transfers of children’s data.
In India, another bill was introduced to amend the Digital Personal Data Protection Act. It redefines a “child” as an individual aged 13–16 instead of under 18 and expands lawful grounds for processing to include the protection of children’s and vulnerable groups’ rights. It also allows the Central Government to impose risk-based restrictions on tracking, behavioural monitoring, and targeted advertising directed at children, rather than banning all such activities.
Kenya's Office of the Data Protection Commissioner issued guidance on the processing of children's data pursuant to the Data Protection Act and its regulations. It covers data minimisation, privacy by design, impact assessments, retention, transparency, and breach notifications to ensure processing serves the child’s best interests.
The United Kingdom's Information Commissioner's Office opened an investigation into children's online privacy in ten popular mobile games, assessing each game's default privacy settings, geolocation controls, and targeted advertising practices, among other aspects.
Connecticut implemented the Act Concerning Social Media Platforms and Online Services, Products and Features, expanding the Data Privacy Act by defining “heightened risk of harm to minors” to include anxiety, compulsive use, violence, harassment, sexual exploitation, and illegal activities. It requires controllers to mitigate such risks through data protection assessments, imposes safeguards for precise geolocation, and mandates impact assessments for profiling-based services with transparency measures.
Online content rules comprise content moderation requirements; related obligations, such as transparency requirements; and user speech rights, which provide redress against content moderation decisions.
Content moderation requirements were most frequent, including rules regarding the protection of minors, the moderation of specific content types, and general moderation regimes. Moderation requirements addressing the protection of minors spanned continents:
Australia's eSafety Commissioner issued legal notices to four AI companion providers, Character Technologies (character.ai), Glimpse.AI (Nomi), Chai Research Corp (Chai), and Chub AI (Chub.ai), requiring them to report measures taken to protect children from online harms, including sexually explicit content and self-harm.
In Brazil, the content classification ordinance came into force for physical and broadcast media, with obligations for internet applications coming into force in March 2026. It requires age rating symbols and parental controls, based on content themes such as violence, sex, and drugs.
The State Administration for Market Regulation of China adopted technical requirements for children’s watches for users under 14. The standard establishes restrictions on pre-installed applications and mandates anti-addiction mechanisms and curated content libraries.
France's Paris Public Prosecutor's Office launched a preliminary investigation into TikTok over risks to minors, including easy access to, and algorithmic promotion of, harmful content.
Ghana's Cyber Security Authority closed a consultation on the Draft Cybersecurity (Amendment) Bill, which would require service providers to protect children from online violence and cyberbullying.
Malaysia implemented the Online Safety Act, which requires service providers to implement child-specific safeguards, including content moderation, safety tools, reporting mechanisms, and support channels. Platforms must also limit children’s exposure to harmful content by restricting adult interactions, regulating recommendations, addressing addictive features, and protecting personal data.
Spain's National Commission for Markets and Competition closed consultations on a co-regulation agreement and code of conduct for audiovisual content rating. The framework establishes unified age ratings and content descriptors aimed at protecting minors, sets display rules across services, and includes a structured user complaint mechanism.
In Turkey, two bills were introduced to the National Assembly to establish protections for minors in the digital environment. The first bill seeks to create a preventive and protective framework requiring platforms to mitigate algorithmic risks to children, including exposure to harmful and gambling-related content, implement age verification and parental consent measures, publish algorithmic transparency information, and submit child risk assessment reports. The second bill proposes amendments to the Penal Code to clarify the scope of obscenity and reinforce protections for children by criminalising the access, distribution, and promotion of pornographic content.
The South Korean Media and Communications Commission ordered X to establish safety measures to protect youth in relation to the AI chatbot Grok service.
The United Kingdom's Office of Communications (Ofcom) closed a consultation on the draft protection of children code of practice for user-to-user services. The code requires providers to assess and implement proactive technologies to detect harmful content, establish crisis response protocols, enforce user sanctions, refine age assurance criteria, and update terms of service to reflect these measures. Separately, Ofcom opened an investigation into X under the Online Safety Act to assess whether it conducted adequate risk assessments and protected children from harmful content, including pornography and illegal material generated by the Grok model on X.
The Guidelines for User Age-verification and Responsible Dialogue Act was introduced to the United States Senate. It would prohibit AI chatbots from soliciting or encouraging minors to engage in sexually explicit behaviour or to create or share explicit images, and from promoting suicide, self-harm, or violence.
Various governments regulated the moderation of child sexual abuse and exploitation material:
Australia’s eSafety Commissioner announced several regulatory actions under the Online Safety Act. These included Telegram discontinuing its challenge to the validity of a reporting notice requesting information on its compliance with online safety expectations, including measures to address child sexual exploitation material. Additionally, the Commissioner took enforcement action against a UK-based provider of AI nudification services over concerns about safeguards against the creation of child sexual abuse material; the provider subsequently restricted access for Australian users. The Commissioner also issued a notice to X regarding the use of the generative AI system Grok, seeking information on safeguards in place to meet systemic safety obligations relating to the detection and removal of child sexual exploitation and other unlawful material.
The European Commission proposed a regulation to further extend the temporary derogation from the ePrivacy Directive, allowing online service providers to voluntarily detect and remove child sexual abuse material. The proposal serves as an interim measure pending the adoption of a long-term EU framework to combat online child sexual abuse, on which the Council recently adopted its position. Further, the European Commission sent a request for information to Shein under the Digital Services Act following preliminary indications that illegal goods, including child-like sex dolls, are being offered on the platform.
The French Directorate General for Consumer Affairs announced the removal of illicit products from Shein and authorised the resumption of operations after corrective measures were implemented, following an investigation into listings for child-like sex dolls and alleged failures to protect minors.
Kenya implemented the Computer Misuse and Cybercrimes (Amendment) Act, giving courts the power to order the removal and closing of systems in connection with a person who has been convicted of illegal activities, including child sexual abuse material, terrorism, or extreme religious and cultic practices.
Malaysia implemented the Online Safety Act, designating content related to child sexual abuse as "priority harmful content" and requiring it to be made permanently inaccessible upon such determination. The Communications and Multimedia Commission announced that between January 2024 and November 2025, it had identified 957 cases of child-related obscene content and secured the removal of 899 posts, corresponding to a 94% compliance rate on the part of the social media platforms.
The United Kingdom’s Department for Science, Innovation, and Technology tabled amendments to the Crime and Policing Bill to prevent the misuse of AI models for creating child sexual abuse material. The bill was previously amended to criminalise the possession and publication of pornographic content depicting strangulation or suffocation. The Office of Communications (Ofcom) closed consultations on the draft illegal content codes of practice for user-to-user services, which set out recommended content moderation measures, including the review and removal of illegal content such as child sexual abuse material. Ofcom also issued updated guidance for file-storage and file-sharing, clarifying obligations for detecting and removing such content. In enforcement, Ofcom completed compliance remediation with Snap, addressing concerns over Snapchat’s risk assessments and record-keeping for illegal content.
At the federal level in the United States, the Stop Sextortion Act was introduced to the Senate to criminalise threats to distribute visual depictions of child sexual abuse material with the intent to intimidate, coerce, extort, or cause emotional distress to a person. At the state level, Texas implemented the Responsible Artificial Intelligence Governance Act, requiring developers to implement safeguards preventing AI systems from generating illegal content, including child sexual abuse material. The Governor of California signed the Law on Companion Chatbots, which requires operators of companion chatbots to implement additional safeguards for minors, including break reminders and prohibitions on sexually explicit material. Finally, Oklahoma implemented the Bill prohibiting the nonconsensual sharing of AI-generated sexual images, amending the Law on Obscenity and Child Sexual Abuse Material to cover the dissemination of private sexual images, including those created or altered using AI or other technical means.
Singapore signed the Criminal Law (Miscellaneous Amendments) Act, expanding the definition of “intimate image” to cover AI-generated material, criminalising non-consensual image production, and clarifying that computer-generated child abuse material is prohibited.
Measures regarding content relevant to political discourse, including news, were also widespread:
Singapore’s Minister for Digital Development and Information directed the Infocomm Media Development Authority to block access to MalaysiaNow in Singapore for failing to comply with a correction direction under the Protection from Online Falsehoods and Manipulation Act. Additionally, the order declaring Kenneth Jeyaretnam's website, The Ricebowl Singapore, and his social media pages on Facebook, Instagram, X, and LinkedIn "declared online locations" ceased to apply.
Ghana's Ministry of Communication, Digital Technology, and Innovations consulted on the Misinformation, Disinformation, Hate Speech, and Publication of Other Information Bill. It includes measures allowing a court or relevant authorities to direct the Communications Authority to require internet service providers to block access to online locations disseminating misinformation, disinformation, or hate speech.
The Netherlands' Data Protection Authority issued a report on the use of AI chatbots as voting aids, concluding that they often deliver biased and polarised political advice and misrepresent the Netherlands’ multi-party system. The Authority advised citizens not to rely on such tools and recommended that developers introduce safeguards, noting that AI systems influencing elections are classified as high-risk under the EU AI Regulation.
The European Commission opened a consultation on the draft implementing regulation setting out the technical arrangements for the EU repository of online political advertisements mandated under the Regulation on the transparency and targeting of political advertising. It sets technical requirements and clarifies the interplay with the advertising repositories under the Digital Services Act.
Other content moderation requirements covered several types of content:
In Australia, a bill was introduced in the Senate to establish a complaints mechanism for non-consensual deepfakes, granting the eSafety Commissioner powers to investigate and require platforms to remove such content within prescribed timeframes.
The European Commission fined X EUR 120 million for breaching transparency obligations under the Digital Services Act (DSA), citing the deceptive use of the “blue checkmark”, insufficient transparency in its advertising repository, and failure to grant authorised researchers access to public data. The Commission also issued preliminary findings that Meta is not compliant with the DSA’s transparency obligations and accepted TikTok's commitments on advertising transparency under the DSA. Finally, the General Court dismissed Amazon's action seeking annulment of its designation as a very large online platform.
Canada's Radio-television and Telecommunications Commission issued the Broadcasting Regulatory Policy, reviewing the definition of Canadian content for television and online streaming services.
The Cyberspace Administration of China (CAC) issued a notice regulating internet celebrity accounts, mandating platforms to update community rules and user agreements to prohibit vulgarity, distortion of public morals, false information, cyberbullying, and unlicensed activities. The CAC also issued the draft measures for the administration of anthropomorphic interactive AI to restrict such systems from generating or disseminating certain categories of content, including material that endangers national security, promotes obscenity, gambling, or violence, incites crime, or infringes the legitimate rights and interests of others. In enforcement, the CAC issued results of investigations into illegal military-related self-media accounts, illegal online accounts involving veterans, and the use of AI to impersonate public figures for live-streaming marketing, removing over 8,700 illegal items.
France's Government announced an investigation into X over Grok-generated sexist and sexual content.
In India, two legislative proposals were introduced to the Parliament. The first bill criminalises creating or sharing deepfakes without consent or identifying watermarks. The second bill amends the Information Technology Act to criminalise online harassment and deepfakes and requires intermediaries to maintain grievance redressal mechanisms. Additionally, the Ministry of Electronics and Information Technology closed consultations on draft amendment rules addressing synthetically generated information, which would require visible labelling, metadata embedding, and traceability for all public-facing synthetic content. The Ministry also opened an investigation into X over alleged misuse of Grok AI to generate and disseminate obscene content.
The Italian Authority for Communications fined Cloudflare EUR 14.25 million for failing to adhere to orders regarding the disablement of domain name services and traffic routing linked to illegal live sports streaming.
Malaysia's High Court issued an interim injunction against Telegram and channels over the dissemination of harmful content. Separately, the Communications and Multimedia Commission requested information from TikTok regarding its internal monitoring and enforcement mechanisms against criminal misuse and announced the commencement of legal action against X and xAI over failure to prevent harmful and non-consensual AI-generated content.
South Africa's Film and Publication Board opened an investigation into the distribution of private sexual films or photographs without consent and issued a take-down notice to social media platforms.
Switzerland's Federal Council opened a consultation on the Federal Law on Communications Platforms and Search Engines, which mandates that providers establish a procedure for users to report content they believe to be unlawful, covering offences such as depictions of violence, defamation, threats, discrimination, and incitement to hatred. Providers must process these reports, decide on actions in a timely manner, and promptly inform reporting users of their decisions.
In Turkey, a bill was introduced that would mandate the removal of AI-generated content violating personality rights, threatening public safety, or using deepfake technology within 6 hours.
The United Kingdom’s Secretary of State adopted amended regulations designating cyberflashing and the encouragement or assistance of serious self-harm as priority offences. Online service providers must treat these offences as a priority under their illegal content duties, including preventing misuse of their services and removing or limiting user exposure in line with Ofcom’s codes of practice.
Obligations related to content moderation focused on liability and transparency:
A bill was introduced to Argentina’s Chamber of Deputies establishing a special legal action to protect individuals’ image rights against unauthorised AI-generated content, allowing judges to order blocking, removal, or permanent deletion, with fines or platform suspension for non-compliance.
Ghana's National Information Technology Agency closed a consultation on the Electronic Transactions Act, which includes measures exempting intermediaries from liability when acting as a “mere conduit” or hosting content without knowledge of infringement, provided they remove infringing material upon notification. Very large online platforms and search engines are subject to additional obligations, including risk assessments, audits, and biannual transparency reporting.
In Russia, a bill was introduced to the State Duma to establish administrative liability for violating mandatory requirements for labelling video materials created using AI technologies.
The Algorithm Accountability Act was introduced to the United States Senate to amend Section 230, limiting liability protection for social media platforms and requiring them to exercise reasonable care in the design and operation of recommendation algorithms to prevent foreseeable bodily injury or death.
The Machine-Created Intellectual Asset Bill was introduced to India’s Parliament, including provisions establishing a safe harbour for AI intermediaries that host or transmit AI content. Intermediaries would be exempt from liability if they do not control or modify content, exercise due diligence in moderation, traceability, and safety, and promptly remove illegal content when notified. Separately, another bill was introduced that would hold digital navigation service providers liable if faulty algorithms, misleading data, or incorrect mapping cause bodily harm, injury, or death.
Singapore signed the Online Safety (Relief and Accountability) Bill requiring platforms to take reasonable measures to address specified online harms upon receiving notice, with larger platforms subject to additional requirements, such as shorter response timelines.
South Korea’s National Assembly adopted a partial amendment to the Network Act, which will come into force in July 2026. It expands prohibited content to include material that incites violence or discrimination based on race, nationality, region, gender, disability, age, or social status, and bans the distribution of false or manipulated information. Large-scale service providers must establish policies to identify and manage such content and publish semi-annual transparency reports.
Finally, regarding user speech rights, South Korea’s amended Network Act requires large-scale online service providers to notify users of content moderation actions and offer an objection procedure, including access to dispute mediation. The Act also prohibits such providers from blocking or terminating the services of media companies and internet news providers. Additionally, Switzerland’s Federal Council opened a consultation on the Federal Law on Communications Platforms and Search Engines, which would require major platforms to provide free internal complaint procedures to allow users to challenge content decisions. Users could also take disputes to authorised out-of-court bodies.
We distinguish between four types of consumer protection rules: age-based safeguards, fair marketing and advertising obligations, user rights, and quality of service requirements.
Age-based safeguards were rare but covered both the design of services and parental controls:
Australia’s eSafety Commissioner announced that Apple and Google removed the video app OmeTV from their app stores following an investigation into alleged non-compliance with the Relevant Electronic Services Industry Standard. The app was found to lack required child safety features and to allow adults to engage in randomised video chats with children without adequate safeguards.
Brazil's Chamber of Deputies passed the Bill on the regulation of digital influencer activities, which would require judicial approval for children in paid audio-visual work, assessing psychological risks, schooling impact, and income management to prevent exploitation.
The State Administration for Market Regulation of China adopted technical requirements for children’s smartwatches, covering anti-addiction and guardian-controlled features, secure communication and emergency functions, content controls, labelling, account management, and compliance testing.
Kenya implemented the Industry Guidelines for Child Online Protection and Safety, requiring ICT providers to implement age verification, strengthen default privacy settings, and develop child-focused products and services that promote creative and educational online engagement.
The United Kingdom's Office of Communications (Ofcom) closed its consultation on the updated draft illegal content codes of practice for user-to-user services, which includes provisions on anti-harassment user controls and default protections for minors.
In the United States, at the federal level, several proposals were introduced to the House of Representatives. The Algorithmic Transparency and Choice Act would require social media platforms to provide minors with clear information, default input-transparent algorithms, and options to limit personalised recommendations. The Safer Guarding of Adolescents from Malicious Interactions on Network Games Act would require online video game providers to implement default, parent-controlled safeguards limiting minors’ communications with other users. The Stop Profiling Youth and Kids Act would restrict social media platforms from using engagement-driven features for research on children under 13 and require parental consent for such research on teens under 17. The Safe Messaging for Kids Act would require social media platforms and app stores to restrict ephemeral messaging and direct messaging for minors, provide robust parental controls, and prevent circumvention of these safeguards. The House Committee on Energy and Commerce also announced a new version of the Kids Online Safety Act 2025, which would require platforms to implement policies and safeguards protecting minors from physical, sexual, substance-related, and financial harms, including limits on communication, compulsive design features, and use time.
At the state level, Nebraska adopted the Age-Appropriate Online Design Code Act. Virginia implemented amendments to the Consumer Data Protection Act prohibiting addictive social media feeds. The California Department of Justice and New York State Attorney General's Office closed consultations on implementing rules for the Protecting Youth from Social Media Addiction Act and the Stop Addictive Feeds Exploitation for Kids Act, respectively. The Attorneys General of Texas and Florida filed lawsuits against Roblox Corporation over alleged deceptive practices and failures to implement child protection measures, while the Attorney General of Kentucky filed a lawsuit against Character Technologies over alleged child safety violations on its AI chatbot platform.
Regarding fair marketing and advertising practices, governments addressed misleading information, the disclosure of advertisements, the prevention of deceptive interfaces, as well as hidden fees and fake reviews. The majority of developments focused on misleading information:
The Australian Competition and Consumer Commission filed a lawsuit against Microsoft alleging misleading conduct in the marketing of its AI, cloud computing, and software products to consumers.
In China, the revised Anti-Unfair Competition Law entered into force, prohibiting operators from making false or misleading advertisements or facilitating fake reviews and transactions.
South Korea’s Fair Trade Commission opened investigations into eight online advertising agencies over alleged deceptive advertising practices following the establishment of an Online Advertising Agency Illegal Conduct Response Task Force.
The United Kingdom's Competition and Markets Authority released updated guidance on unfair commercial practices under the Digital Markets, Competition and Consumer Act, including a list of 32 always-banned practices, and the categories of unfair practices linked to misleading actions, misleading omissions, aggressive practices, and failures of professional diligence.
In the United States, at the federal level, the Federal Trade Commission approved a settlement with telemedicine company NextMed over deceptive online advertising practices, set aside its prior ruling in the lawsuit against Rytr for generating false reviews using AI-powered tools, and warned 10 unnamed companies against potential violations of the Consumer Review Rule concerning fake reviews on e-commerce and advertising platforms. At the state level, forty-two state Attorneys General sent a letter to 13 AI companies, including Anthropic, Apple, and Character Technologies, raising concerns about sycophantic and delusional outputs from generative AI systems.
Vietnam implemented the Law on Advertising, requiring all advertising, including on digital platforms, to be truthful, clearly labelled, and non-misleading.
Another point of emphasis was the disclosure of advertisements, especially by influencers:
Brazil's Chamber of Deputies passed the Bill on the regulation of digital influencer activities, which would require influencers on user-generated content platforms to clearly disclose sponsored content and advertisements. Separately, the Chamber received a bill defining influencer activity in electronic media that would establish rules on advertising disclosure and image use.
The Italian Communications Regulatory Authority and the Institute for Advertising Self-Regulation adopted a framework agreement on compliance with commercial communications rules in the digital age, establishing coordinated oversight of advertising disclosure obligations for online advertising providers and user-generated content platforms.
Vietnam’s Government adopted a decree detailing certain provisions of the Law on Advertising, which mandates fair marketing and advertising disclosure requirements for online advertising providers and search service providers, with entry into force scheduled for February 2026.
Rules regarding deceptive interface designs, or dark patterns, were less frequent:
The European Commission issued preliminary findings that Meta is in non-compliance with the Digital Services Act (DSA)'s transparency obligations, finding that Facebook and Instagram provide complex and potentially deceptive “dark pattern” reporting processes that hinder the removal of illegal content.
The Irish Media Commission opened an investigation into TikTok and LinkedIn under the DSA, following concerns that their illegal content reporting mechanisms may involve potentially confusing “dark patterns.”
A Partial Amendment to the Telecommunications Business Act was introduced to South Korea’s National Assembly to establish design rules prohibiting dark patterns on user-generated content, e-commerce, and other online platforms. In addition, the amended guidelines for e-commerce entered into force, providing specific interpretation standards and recommendations for regulating dark patterns following amendments to the Electronic Commerce Act, while the Fair Trade Commission fined Coupang KRW 2.5 million for employing deceptive dark patterns.
Additionally, some governments focused their fair marketing rules on hidden fees and fake reviews:
The Australian Competition and Consumer Commission filed lawsuits against HelloFresh and YouFoodz, alleging misleading and deceptive conduct regarding subscription services, including representations about cancellation processes and recurring charges.
The Cyberspace Administration of China issued rules on internet platform pricing behaviour. Platform operators and businesses must clearly display all prices for goods and services. This includes listing item names, prices, units, and all related charges such as delivery fees.
In India, a bill amending the Consumer Protection Act was introduced in Parliament. It expands the definition of “unfair trade practices” to include undisclosed terms, fees, or dynamic pricing, and addresses algorithmic influence on consumer behaviour and barriers to cancelling subscriptions or returning goods.
The United Kingdom's Competition and Markets Authority (CMA) opened investigations into online ticketing companies Stubhub and viagogo concerning transparency of mandatory fees, as well as into e-commerce retailers Wayfair, Marks Electrical, and Appliances Direct over alleged practices including automation of purchases and misleading time-limited sales terms. The CMA also released updated guidance on unfair commercial practices and new guidance on price transparency under the Digital Markets, Competition and Consumer Act.
The United States Federal Trade Commission and 21 state Attorneys General filed an amended complaint against Uber over alleged deceptive billing and cancellation practices pertaining to its subscription service. Additionally, the Commission entered into a USD 60 million settlement with Instacart over deceptive practices relating to subscription membership services.
In terms of user rights, governments focused on information and cancellation rights:
The Council of the European Union adopted the revised Alternative Dispute Resolution Directive, updating and simplifying procedures for resolving consumer–trader disputes out of court.
The South Korean Ministry of Science and Information and Communication Technology closed its consultation on the Enforcement Decree of the Framework Act on Development of Artificial Intelligence and Establishment of Trust Foundation, which would establish user rights in relation to AI systems. Additionally, the Fair Trade Commission fined the streaming service providers Content Wavve KRW 4 million, NHN Bugs KRW 3 million, and Spotify AB KRW 1 million for obstructing consumer contract and subscription cancellations.
Russia signed a law regulating paid digital subscriptions, requiring services to notify users of upcoming payments, allow cancellations, and prohibit charging removed bank cards, with entry into force scheduled for March 2026.
Regarding quality of service, several governments addressed quality standards and the removal of unsafe products:
The European Union's Regulation on the safety of toys entered into force with a grace period, requiring all toys placed on the EU market, including those sold online, to meet updated safety and quality standards and to carry a Digital Product Passport to prevent unsafe products from entering the market.
The State Administration for Market Regulation of China closed its consultation on draft regulations on supervision and management of third-party platform providers' implementation of safety responsibilities and issued an action plan together with 14 other departments to regulate the quality and safety of industrial products sold on e-commerce platforms. Additionally, the national standard on e-commerce platform service quality evaluation entered into force, establishing an evaluation index system for assessing platform service quality.
Japan implemented amendments to the Consumer Product Safety Act, introducing quality and safety requirements for manufacturers and importers of specified children’s products. It allows authorities to require the removal of unsafe products from listings and prohibits the sale of non-compliant items lacking required standards and warnings.
In the United States, the Attorneys General of 23 states sent a joint letter to Shopify regarding the unlawful sale of e-cigarettes on its platform, raising concerns about the company's compliance with product safety and authorisation requirements.
Vietnam's National Assembly adopted the Law on Artificial Intelligence, which establishes quality-of-service requirements across the AI lifecycle, mandating human oversight, safety, data quality, transparency, accountability, and effective incident management. Obligations scale by risk level, with stricter risk management, data quality, and remediation duties for high-risk AI systems.
Lastly, governments focused on the identification of sellers:
The Cyberspace Administration of China opened a consultation on the draft internet application personal information collection and use regulations, which would require app distribution platforms to verify the identity and compliance of app operators before listing, prioritise certified apps, and remove non-compliant applications within six months. Similar registration and verification obligations apply to smart terminal manufacturers for pre-installed apps. Furthermore, the Shenzhen Municipal Market Supervision Bureau fined Xiaoe Network Technology RMB 360,000 for failure to verify the qualifications of sellers and prevent false advertising on its platform.
Singapore's Parliament adopted the Online Safety (Relief and Accountability) Bill, which would authorise the Office of the Commissioner of Online Safety to require online platforms to implement user identification requirements and establish accountability mechanisms for content and transactions.