Can Age Checks and Curfews Really Protect Kids Online?
Written by Catarina Santos - Data Protection Expert
The UK’s Online Safety Act aims to protect children online with age verification and potential social media curfews. But are these measures workable or even legal? In this thought-provoking article, Catarina Santos explores the privacy risks, technical limitations, and global lessons that the UK should consider before enforcing these child safety proposals.

Will Proof-of-Age and Social Media Curfews Under the Online Safety Act Actually Work?
The UK’s Online Safety Act introduces one of the most comprehensive frameworks for regulating online content to date. Among its more debated proposals are two high-impact, child-focused measures: mandatory proof-of-age verification and a potential legally enforced social media curfew for under-18s. While the public discussion has largely centred on intent—protecting children from harm online—the critical issues lie in feasibility, privacy, and precedent.
As data protection and information security professionals, we believe these measures warrant deeper analysis, especially given the serious implications for data protection, user rights, and technical enforcement.
Proof of age
The Online Safety Act mandates that platforms hosting potentially harmful content accessible to children must take active measures to prevent underage access. While the Act doesn’t prescribe a single, uniform age verification method, it strongly encourages the use of age assurance mechanisms, particularly for high-risk content such as pornography, gambling, and social media features that could be addictive or algorithmically manipulative.
Under the Act, platforms are required to assess the potential risks to children using their services and ensure that children have access only to age-appropriate content. This includes enforcing age restrictions consistently across platforms and making it clear to users what measures are in place to protect children from harmful content. These age verification mechanisms may include:
- ID-based verification (e.g., government-issued IDs or payment cards)
- AI-driven age estimation (e.g., facial recognition)
- Third-party age assurance tools (e.g., parental controls, third-party digital identity services)
- Mobile network/SIM-based authentication
However, the Act does not mandate a specific method. Instead, platforms are expected to take a proportionate approach based on the nature of the content and the platform. They must also ensure that these age restrictions are enforced consistently and transparently, with clear communication of these measures in their terms of service.
Despite this flexibility, each method involves trade-offs:
- Biometric technologies (like facial recognition) raise concerns under UK GDPR, particularly around lawful basis for processing and data minimisation.
- ID submissions (e.g., government-issued IDs) increase the risk of data breaches and identity theft, particularly when dealing with younger users who may not fully understand the risks involved.
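One way to reconcile age assurance with data minimisation is for the platform to consume only a signed "over-18" attestation from a separate age assurance provider, rather than handling identity documents itself. The sketch below illustrates that pattern under stated assumptions: the provider, the shared-secret signing scheme, and the field names are all illustrative, not anything prescribed by the Act or by Ofcom guidance.

```python
import hmac
import hashlib
import json
import time

# Hypothetical shared secret issued by a third-party age assurance provider;
# in practice this would more likely be an asymmetric key or a
# standards-based signed token.
PROVIDER_SECRET = b"example-shared-secret"

def verify_age_attestation(attestation: dict) -> bool:
    """Check a minimal 'over-18' attestation without handling any ID data.

    The attestation carries only a boolean claim, an expiry timestamp,
    and an HMAC signature from the provider -- no name, date of birth,
    or document images ever reach the platform.
    """
    payload = {
        "over_18": attestation["over_18"],
        "expires_at": attestation["expires_at"],
    }
    message = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_SECRET, message, hashlib.sha256).hexdigest()

    signature_valid = hmac.compare_digest(expected, attestation["signature"])
    not_expired = attestation["expires_at"] > time.time()

    return signature_valid and not_expired and attestation["over_18"] is True
```

Under this pattern the platform retains at most a pass/fail flag, which sits far more comfortably with the UK GDPR's data minimisation principle than storing copies of government-issued IDs.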
Age verification frameworks have been trialled in Germany, where the Kommission für Jugendmedienschutz (KJM) began issuing enforcement orders in 2021 against adult content platforms that failed to implement robust age gates. These efforts, however, drew criticism from German privacy and civil rights groups such as Gesellschaft für Freiheitsrechte (GFF) and Chaos Computer Club, who warned that such systems eroded online anonymity and lacked transparency about data retention.
Similarly, in France, legislation passed in 2023 authorised ARCOM (Autorité de régulation de la communication audiovisuelle et numérique) to mandate age checks on adult sites. Non-compliance could result in site blocking. This sparked strong opposition from digital rights organisation La Quadrature du Net, which argued that the measures created an infrastructure for mass digital identification, with little oversight or clarity on data protection.
In both jurisdictions, concerns were raised that age verification—though well-intentioned—risked breaching Article 8 of the European Convention on Human Rights, which guarantees the right to privacy.
What about social media curfews?
The notion of a legally mandated curfew—preventing under-18s from accessing platforms like TikTok, Instagram, and Snapchat after 10pm—is now under active consideration by UK policymakers. Technology Secretary Peter Kyle recently acknowledged the potential to act in this space, referencing TikTok’s voluntary 10pm shutdown feature for under-16s as a possible model.
While the motivation is understandable—late-night usage has been linked to sleep disruption and increased vulnerability to online harms—the implementation is far from straightforward. A legally enforceable curfew would require platforms to:
- Continuously monitor account activity
- Link that activity to a verified age
- Restrict access based on UK time zones and age brackets
This raises obvious questions about technical feasibility and proportionality. Any system that enables real-time age-based content restrictions risks intrusive tracking and surveillance of young users. It also presumes universal compliance by platforms and seamless integration across services—conditions that are not currently met.
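To show how much verified data even the simplest version of such a check presupposes, here is a minimal sketch of the decision logic a platform might run on each request. It assumes the platform already holds a reliably verified age for the account and can resolve UK local time; the function name, the 6am end of the window, and the field names are illustrative assumptions only.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

UK_TZ = ZoneInfo("Europe/London")
CURFEW_START = time(22, 0)  # the 10pm figure floated in public discussion
CURFEW_END = time(6, 0)     # illustrative morning cut-off; not specified in any proposal

def is_blocked_by_curfew(verified_age: int, now: datetime | None = None) -> bool:
    """Return True if a verified under-18 user should be blocked right now.

    Note what even this toy check presupposes: a verified age for every
    account, continuous session monitoring, and correct handling of UK
    local time (including daylight saving changes).
    """
    if verified_age >= 18:
        return False

    local_time = (now or datetime.now(UK_TZ)).astimezone(UK_TZ).time()
    # The curfew window wraps past midnight, so check both sides of it.
    return local_time >= CURFEW_START or local_time < CURFEW_END
```

Even this toy version leans entirely on the two hardest requirements listed above: a trustworthy verified age for every account and the ability to tie live activity to it.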
Moreover, such curfews may prove ineffective in practice. Children and teenagers are often more digitally agile than policy anticipates. Workarounds like VPNs, secondary accounts, or logging in via devices registered to adults could easily circumvent restrictions. Worse still, excessive restrictions may push vulnerable users toward less regulated, offshore platforms where they are at greater risk.
Both Germany and France offer cautionary lessons. In each case, aggressive legislative attempts to introduce age verification have been hampered by legal challenges from civil society groups, public backlash over privacy and data protection, and technical uncertainty about accuracy and coverage.
In Australia, the eSafety Commissioner has also trialled age assurance tools as part of a broader child safety initiative. However, rollout has been cautious, with a focus on balancing protection with privacy, and an acknowledgment that no single verification system can yet meet all the criteria of accuracy, inclusivity, security, and usability.
Even in the United States, where several states have passed age-appropriate design bills inspired by the UK’s earlier code, implementation has been slowed by constitutional challenges over free speech and privacy.
The UK is at a regulatory crossroads. The Online Safety Act presents an opportunity to improve digital protections for children. But that ambition must not be undermined by rushed implementation or headline-driven policy. For any age verification or curfew measure to be credible, it must:
- Be technically feasible without disproportionate data collection
- Meet UK GDPR and human rights standards on privacy and freedom of expression
- Include transparent enforcement mechanisms and public accountability
- Be part of a wider ecosystem that includes education, parental controls, and platform design changes
Without these safeguards, we risk enacting policies that are symbolic rather than effective, and potentially damaging to privacy rights for all internet users.
The intention to protect children online is both right and necessary. But the tools we choose must not compromise the very principles we seek to uphold.
Written by Catarina Santos – Data Protection Expert