Governments and regulators, even those that have long supported and encouraged better privacy for digital users, are now pushing for online age verification. Having given businesses a free hand for decades to target minors with advertising, these same authorities now want to shield children from adult content, gambling, and whatever else they deem restricted. Who gets to decide what counts as restricted is itself concerning. Regardless, no form of age verification, online or offline, can be privacy-preserving.
Age verification inherently carries profound privacy risks and threatens free speech. It facilitates data misuse and entrenches the monopolies of centralised tech giants, offering no real benefits to users who must share their sensitive legal identities.
Any concrete proof of age requires hard identifiers: government-issued IDs, facial biometrics, or other uniquely identifying records. When users must reveal their identity, they expose sensitive personal data and endanger their privacy. The very act of verification creates a digital footprint that can be tracked, aggregated, or misused. Simply put, no matter how minimal the data shared, there is always potential for harm.
Age verification also erodes anonymity, forcing everyone to prove who they are before accessing content and eliminating the ability to participate pseudonymously. This disproportionately impacts marginalised groups who rely on internet anonymity for safety or self-expression. When overt identification becomes the price of entry, the internet grows less inclusive and risks a significant regression. Even minimal identity confirmation threatens anonymity, and a government that pushes for such verification today can exploit the same systems for broad surveillance tomorrow. Centralised databases of user identities tied to browsing habits could become tools for monitoring dissent, tracking behaviour, and targeting individuals.
What stops a government or any organisation from using vast repositories of verified user data to fuel its so-called AI systems? With the speed at which laws change, citizens are left with little choice but to comply or to take to widespread protest. And once their data is already under government control, dissent becomes easier to subdue.
Decentralised, open-source platforms that have emerged over the last decade face immense challenges in complying with age verification laws. Running real-time identity checks and verifying and storing sensitive data is technically and financially feasible for well-funded centralised corporations but prohibitive for smaller or decentralised platforms and communities. Only big-tech monopolies have the resources to build robust verification systems and absorb the legal risks. Meanwhile, decentralised projects that prioritise privacy and freedom struggle to survive under stringent compliance demands.
Lastly, sharing physical identity documents or biometric data offers no direct advantage to consumers over existing systems. Instead, it exposes users to the risk of being excluded from services if verification fails. While protecting children might be at the forefront of these laws, the same goal can be achieved through better education (which many governments today choose not to prioritise) rather than by eroding personal rights and spreading children’s information across multiple platforms, potentially pushing minors into more harmful digital environments.
For many users, it is easier to leave a platform altogether than to hand over such private information and then pay for the subscriptions it offers.