A Common Language For Online Safety

Our common language for online safety systematises regulatory requirements to guide policymakers around the globe. It serves as structure for our roundup, to document which governments are tackling which aspects of online safety, and how.

Authors

Tommaso Giardini, Maria Buza

Date Published

19 Aug 2025

The need for a common language

Online safety provides common ground for governments across continents. Protecting citizens, especially children, online is a rare shared objective for governments. But governments are pursuing this common objective without a common language. As online safety rules keep growing in number and diversity, governments risk causing fragmentation and undermining their common objective.

The adverse effects of diverse online safety rules are complex, but not inevitable. Few rules cause fragmentation directly, by restricting access to certain services. More often, fragmentation occurs indirectly, and unintentionally, through firms’ bottom line. Compliance with diverse rules across markets is costly, especially for small firms. As these costs continue to rise, some firms will refrain from entering certain markets, while others will operate under considerable uncertainty. Notably, online safety need not be sacrificed for ease of doing business: this dichotomy is false, a consequence of governments tackling the same problems unilaterally, and differently. International coordination can thus not only reduce fragmentation by lowering compliance costs; it can also improve online safety globally, as governments learn from each other.

Three hurdles complicate international coordination on online safety. First, online safety comprises a complex mix of regulatory instruments, including rules on online content, data governance, and consumer protection. These instruments are governed by different authorities, with different goals, complicating coordination within and between governments. Second, governments are sensitive to external inputs: Who is to tell a government how to protect its own citizens and children? Third, some normative differences across borders persist, as evidenced by recent geopolitical tensions regarding content moderation.

The Digital Policy Alert’s common language for online safety lowers these hurdles to support effective rulemaking and international coordination. Building on our database of over 10’000 digital policy and enforcement developments affecting the digital economy, our team has systematised online safety rules into a common language for governments. We outline this common language below using salient examples from across the world. Then, every two months, we provide a roundup of online safety policy developments, structured along this common language – see the first edition here. Documenting which government authorities are tackling which aspect of online safety, and how, primarily reduces the hurdle of complexity. But it also shows governments that everyone is in the experimentation phase, motivating and facilitating learning from peers. This includes learning from approaches grounded in different normative views, which one may not want to emulate.

Our common language is a living document that we refine over time. We begin with a simple vocabulary covering core areas of online safety, but will continuously adapt it to new insights from our daily tracking of digital policy developments worldwide. On the one hand, we may broaden the scope to other policy areas, such as cybersecurity. On the other hand, we may deepen the granularity, providing comparative analyses similar to our reports on data and AI rules. Finally, since the learning curve for online safety lies ahead of us all, we actively seek inputs and opportunities to collaborate.

Understanding our common language

Our common language currently distinguishes four core areas of online safety:

1) Access restrictions: Which users can access which parts of the internet?

2) Data protection: How is children’s data protected? 

3) Online content rules: What types of content do users encounter?

4) Consumer protection: What safeguards are in place for users?

Access restrictions

Access restrictions are a strict, but increasingly common, regulatory instrument. We distinguish between bans, restricting access for all users, and age gates, restricting access for users below a certain age. 

Bans are relatively rare and apply either to specific products or to a range of services. The most salient product-specific ban is the United States’ requirement for ByteDance to divest TikTok or face a ban. Adopted in 2024, the ban was postponed repeatedly by the new administration. Other examples include Italy’s ban of Replika for data protection violations, as well as Brazil’s ban of Rumble, Viet Nam’s ban of Telegram, and Turkey’s temporary ban of Instagram, all for violating a combination of rules on online content and local registration or representation requirements. As these examples show, most product-specific bans result from enforcement action. Policy developments often ban more than one product: the European Union’s AI Act, for instance, lists prohibited AI practices.

Age gates have gained salience across continents in the past year. The common premise of these requirements is to verify users’ age before they access certain services. Depending on the requirement, providers must then either restrict access for users who fall below the age threshold or establish tailored safeguards for these users. Age-based access restrictions used to be narrow, mainly covering adult content sites. In the past year, their scope expanded considerably: “Social media bans” were adopted from Australia to Texas and are currently under deliberation from Brazil to the Philippines. Age-based safeguards address various aspects of online services, including the content users encounter, the way users’ data is processed, and the design of the interface. A single framework can establish safeguards spanning several policy areas, as do the United Kingdom’s Online Safety Act and the European Union’s Digital Services Act. Age-based safeguards specific to one policy area are outlined in more detail in each of the three sections below.

Children’s data protection

We focus on children’s data protection for two reasons: The protection of children is central to online safety, and the number of data protection developments is immense – over 4’000 Digital Policy Alert entries relate to data governance.

To protect children’s data, governments impose prohibitions on certain data practices, parental approval requirements, and tailored safeguards during the processing of children’s data. 

Prohibitions on practices related to children’s data processing can be justified based on the nature of the data or the purpose of the processing. India’s Digital Personal Data Protection Act prohibits the processing of minors’ data if it causes detrimental effects on their well-being or allows for tracking or behavioural monitoring for targeted advertising. The Federal Trade Commission’s amended Children's Online Privacy Protection Rule demands that operators of mixed-audience websites or online services refrain from collecting personal information before determining whether a user is under 13 years of age. Connecticut will prohibit the processing of minors' personal data for targeted advertising, sales, or profiling starting in January 2026.

Parental approval requirements demand explicit approval by parents (or guardians or legal representatives) for the processing of children’s data to be lawful. Such requirements are widespread, appearing in Vietnam’s recently passed Law on Personal Data Protection and the European Union’s landmark General Data Protection Regulation, among others. Some proposals narrow the parental approval requirement to specific contexts: a Brazilian bill, for instance, aims to prohibit the use of children's images in AI training without explicit parental consent.

Safeguards regarding children’s data processing relate to both default settings and obligations during processing. In terms of default settings, for example, the United Kingdom’s recently adopted Data (Use and Access) Bill requires online services likely to be accessed by children to design data protection measures that account for children’s greater need for protection and limited awareness of risks. The United Kingdom’s Age Appropriate Design Code (Children's Code), which also specifies design measures for children, inspired similar frameworks in California and Vermont, among others. In terms of obligations during processing, for instance, Indonesia’s Personal Data Protection Law and child protection regulations require mandatory data protection impact assessments for the processing of children's personal data and the designation of a data protection officer responsible for safeguarding children’s data, among other measures.

Online content rules

Online content rules comprise content moderation requirements; related obligations, such as transparency; and user speech rights, which provide redress for content moderation decisions.

Content moderation rules are widespread and protect both children and users at large. To protect children, governments establish obligations to reduce the amount of “harmful” content they encounter when using online services. These rules often apply in combination with age gates or to services that are likely to be accessed by children. Broader content moderation rules, which apply to all users, are more common. Governments have long required the moderation of “illegal” content, such as child sexual abuse material. Increasingly, governments are trying to address “harmful” content, such as disinformation. Notably, the scope of content that must be moderated differs across countries and is still being defined. Accordingly, our common language does not operate based on the labels of “illegal” or “harmful” content and instead focuses on the nature of the content to be moderated, specifically content depicting children in a sexual context and content that is relevant to political discourse (including news). Since many frameworks do not (yet) specify the nature of content to be moderated, we maintain a broad “other” category.

Obligations related to content moderation include liability rules, transparency requirements, and user controls. Liability rules assign responsibility for violations of content moderation rules, the crux being whether online service providers are held liable for users’ content. A salient liability regime is Section 230 of the United States Communications Decency Act, which establishes that “interactive computer service providers” are not to be treated as publishers of information provided by users. Transparency rules comprise both reporting obligations, such as yearly transparency reporting under the European Union’s Digital Services Act, and obligations that directly affect online services, such as labelling requirements for AI-generated content. User controls empower users to influence the content they see, for instance, by selecting the content category tags underlying algorithmic content recommendation. 

Finally, user speech rights either prohibit the moderation of certain content or empower users to contest content moderation decisions. These rights address concerns that online platforms exert excessive influence on online discourse, concerns that escalated after platforms, including Meta, Twitter (now X), and YouTube, suspended Donald Trump’s accounts in the aftermath of 6 January 2021. Though rarely, several governments prohibit the moderation of certain content, typically content and accounts related to political matters. In the United States, state-level bills from Texas and Florida restricted the moderation of such content, sparking constitutional litigation that is still ongoing. The same rationale also motivates mechanisms that empower users, including requirements to inform users of the reasons behind content moderation decisions and to offer simple dispute resolution.

Consumer protection

We distinguish between four types of consumer protection rules: Age-based safeguards, fair marketing and advertising obligations, user rights, and quality of service requirements.

Age-based safeguards demand protections for certain age groups on online services, including both measures on the design of services and parental controls. Design measures include usage limits, content controls, and restrictions on addictive features, among others. China’s “minor mode” for the mobile internet is a salient example of such design measures, including rigorous usage limits. Kenya’s Industry Guidelines for Child Online Protection and Safety, on the other hand, establish principles to safeguard children’s rights to information and safe use of ICT products. Parental controls range from requirements for parental consent to create accounts to mechanisms for parents to influence the experience of minors on online services. Currently, state-level bills in the United States demanding parental consent for minors’ use of social media are under litigation.

Fair marketing and advertising obligations require online service providers to engage in truthful communication with consumers. The focus lies on requiring providers not to publish misleading information on the price, quality, and sustainability of products. Recently, governments have also focused on the disclosure of advertisements and the prevention of deceptive interfaces (dark patterns). A salient example is the Digital Fairness Act, currently being deliberated in the European Union. Finally, governments are developing rules against hidden fees, including the United States Federal Trade Commission’s rule on unfair or deceptive fees, and against fake reviews, such as the United Kingdom’s Digital Markets, Competition and Consumers Act.

Governments establish both new user rights and mechanisms that enable users to exercise them. New online consumer rights include, for instance, the right to request information and to cancel subscriptions with ease. Vietnam grants users rights to detailed transaction-related information and to comment on and request compensation for defective products, among others. Recently, governments have also focused on dispute resolution mechanisms and unfair terms and conditions, to prevent retribution against users who exercise their rights.

Finally, quality of service requirements oblige e-commerce platforms and sellers to ensure that products uphold quality and safety standards, sometimes leading to requirements to remove certain (unsafe) products. In addition, some governments require e-commerce platforms to identify sellers, in order to increase transparency and recourse for consumers.