Age Verification Enforcement Concerns

Many pirates have been writing about the growing global movement to enforce age verification on social media and other online forums. We therefore started asking our communities for feedback on this issue. This blog post explains the reasons for our concern and surveys a number of studies and media reports that document the issue. PPI will continue to monitor the situation and will attempt to submit statements at the UN and other international forums to block age verification enforcement.
Australia now leads with the world’s first national social media ban for children under 16. In January 2026 France formally passed a law to restrict minors under 15 from accessing mainstream platforms, necessitating age verification for platforms like TikTok, Instagram and Snapchat. This measure was criticized by those who highlight privacy concerns and ease of circumvention. Europe is following suit. Ireland is preparing comparable bans, which it hopes to advance at the EU level during its upcoming presidency, with many other countries also contemplating similar measures. In the UK, enforcement of age assurance systems has become a flashpoint under the Online Safety Act 2023. Within the USA as well, states have pursued distinct age-related restrictions. Virginia’s Senate Bill 854 imposes limits on minors’ social media usage.
When addressing child safety issues on Big Tech platforms, we must avoid approaches that would inadvertently turn the Internet into a digital panopticon.
The Brookings Warning: Safety Laws With Unintended Consequences
The Brookings Institution’s 2025 analysis of U.S. children’s online safety legislation warns that these laws are failing LGBTQ+ youth:
https://www.brookings.edu/articles/childrens-online-safety-laws-are-failing-lgbtq-youth/
Many of these age verification laws have attracted intense criticism because, in order to be practically enforceable, they inevitably require users to submit to face scans and ID checks. The end result is that the gatekeepers (governments, platforms, and third-party verification providers) will be able to retain data tracing any given account to linked face biometrics and IDs, with massive negative implications for the right to privacy and the right to free expression.
The governments of some countries, particularly Malaysia and Turkey, go even further in turning the Internet into a de facto digital panopticon, a shift that benefits no one except those in power, including political elites and Big Tech companies. The Malaysian government has announced that, by mid-2026, social media platforms must verify users’ ages using national identity cards, reportedly making its age verification law the strictest in the world. The law will outright block anyone under the age of sixteen from using social media services, and it seemingly excludes comparatively privacy-safe pathways such as age inference methods that analyze a user’s behavior, location, and account history, or Needemand’s BorderAge solution, which reportedly can estimate a user’s age from their hand movements rather than their face or ID. Malaysia’s Communications Minister Fahmi Fadzil calls it “a good mechanism to verify that somebody is below or above the age of 16.” The stated goal is noble: shield children from scams, harmful videos, and other online dangers that have surged in recent years.
As for Turkey, the authorities there are even more explicit in wanting to forcibly link every social media account to a government ID.
But here is a blunt warning: give governments and Big Tech an inch on mandatory ID checks, and they will take a mile. That is what happened in Hong Kong, where the National Security Law passed in 2020 was followed by an arbitrary digital search law under which anyone living there can be forced to consent to digital checks by police, including the disclosure of login credentials, under threat of criminal penalty.
https://www.bbc.com/news/articles/ce8j9yj52lro
What begins as “think of the children” can easily become the infrastructure for mass surveillance, political censorship, and the erosion of anonymous speech, the very freedoms the internet was built to protect. We have seen this playbook before, particularly in Russia.
Australia’s under-16 social media ban, rolled out in late 2025 with facial ID verification, was sold as child protection. Advocates for children and young people with disabilities warned it would backfire. They were right: autistic teens who rely on platforms to communicate at their own pace, build friendships without the pressure of in-person interaction, or simply feel less alone have been cut off from their primary social outlets.
As one 14-year-old put it after losing access: “It feels like I’ve lost my friends.”
The ban isolates the very kids who need connection most, while determined teens simply create new accounts or use VPNs. The collateral damage was predictable—and ignored. Similar laws in the United States and elsewhere are already failing LGBTQ+ youth. Brookings Institution analysis shows that vague “online safety” rules—often paired with age gates—lead platforms to over-censor content about identity, mental health, and community support. For many queer and trans teens in unsupportive families, the internet is a lifeline.
Social media platforms are also important for people with chronic or rare diseases, such as HIV and Long COVID, who find communities of fellow patients online; for activists in authoritarian countries who use social media to organize; and for artists who have built careers on these services. They too would be severely disadvantaged if these privacy-infringing blanket bans were passed across the globe.
https://theintercept.com/2026/03/05/kosa-online-age-verification-free-speech-privacy/ (https://archive.is/IbsEo)
In particular, for political activists, dissidents, and corporate whistleblowers, these privacy-infringing laws will leave them vulnerable to harassment and repression. Instances of transnational repression like this one (https://www.politico.eu/article/russia-spy-recruit-pressure/) will only become more common. For perspective, Erika Cheung, the whistleblower who exposed Theranos’ fraud, initially spoke as an “anonymous source”, presumably for fear of corporate retaliation.
https://blogs.und.edu/und-today/2024/12/theranos-whistleblower-ethics-above-all-including-profit/
When activists and whistleblowers are silenced by such a digital panopticon, the distance between what is said and what is known to be true becomes an abyss. Of everything at risk, the loss of an objective reality is perhaps the most dangerous. The death of truth is the ultimate victory of evil. When truth leaves us, whether we let it slip away or let it be ripped from our hands, we become vulnerable to the appetite of whatever monster screams the loudest.
Turkey’s decision to explicitly destroy online anonymity inevitably makes one question whether the true reason behind the push for age verification laws worldwide is really to protect children. After all, if these governments were actually serious about children’s well-being, they would have invested in things like air cleaners in schools and cures for long COVID, given that many studies demonstrate that long COVID has significantly harmed children’s well-being, even as governments worldwide have deemed the COVID pandemic to be long over.
https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(25)00496-7/fulltext
https://www.nature.com/articles/s43856-025-00947-y
https://www.chicagotribune.com/2025/09/30/long-covid-children-lurie-study/
https://spectrumnews1.com/oh/columbus/news/2026/04/20/long-covid-symptoms-children
A State-Level Path to Defend Digital Privacy
There is also a practical democratic strategy in the United States. In 18 states, citizens can propose constitutional amendments themselves by collecting signatures and putting the amendment directly before voters. This means that digital privacy protections do not have to wait for legislators. One possible amendment could state: “Everyone has the right not to be subjected to unreasonable governmental intrusions on digital privacy, including the right to anonymity.” Such a proposal could help protect people against privacy-intrusive age verification mandates, digital ID systems, and other forms of compulsory online identification.
https://ballotpedia.org/Initiated_constitutional_amendment
Possible solutions to address child safety issues at Big Tech services without privacy-infringing face and ID checks
To be extremely clear, we are not denying that issues such as the enshittification of social media platforms have directly harmed the mental health of their users, particularly juveniles, especially in light of recent news that juries in the US have held Facebook/Meta and YouTube liable for manipulative designs (infinite scroll, autoplay, persistent notifications) that hook children into compulsive use. However, we firmly believe that improving child safety on these platforms should not come at the cost of accidentally turning the digital world into a totalitarian panopticon that would, in the end, only benefit the elites. The problem is not that children are online; it is that platforms are engineered to exploit them for profit.
Amnesty International called the verdict “landmark,” demanding fundamental changes to platform architecture rather than blunt bans or ID mandates.
https://www.hardresetmedia.com/p/new-court-filings-google-youtube-snapchat-teens
Even prominent critics of social media harms, such as Jonathan Haidt in The Anxious Generation, have expressed reservations about universal government-mandated ID checks. Haidt rightly stresses the importance of parenting and cultural solutions alongside regulation. Blanket surveillance is not the answer. There are privacy-preserving ways to protect children. Pirates propose smarter, targeted alternatives that respect civil liberties:
- Age-appropriate hardware and network-level controls. Issue or certify restricted devices for children that can only connect to IP addresses and subnets explicitly registered for kid-safe content. Websites voluntarily publish their age-appropriate endpoints to national (or even international) registries. Children’s devices query these registries for DNS and routing, while firewalls enforce strict allow-lists. Non-compliant sites are easily detected and penalized—automated, scalable, and requiring zero per-user ID checks or facial scans. Parents, schools, and device makers decide what enters the network; governments do not gatekeep speech.
- A free, opt-in walled garden for kids, with maximum protection inside it to stop children from being exposed to content they should not see, and severe penalties, including incarceration and heavy financial fines, for those who deliberately put prohibited content within it. One possible implementation is for ICANN to introduce a .kid top-level domain that can only be acquired by services vetted as child-safe. Such a domain would make it easy to build default whitelists for DNS filters, parental control applications on routers, and so on.
- Privacy by default for minors. Platforms must assess whether their service is “likely accessed by minors” (using public data on the user base, without invasive per-user checks). If so, they should apply privacy-by-default settings: accounts private upon creation (content hidden from non-connections), geolocation off by default, no behavioral profiling for recommendations or ads to minors, data minimization (collect and retain only what is strictly necessary), and no “nudges” to weaken privacy settings or share extra data. Any change to privacy settings should happen only after users are given due warnings. To mitigate unintended consequences (e.g. harsh inactive-account deletion policies, which would erode historical integrity in the long term as they inevitably affect accounts of now-deceased users), older accounts can be exempted or grandfathered from the default settings.
- Targeted safety tools. Expand trusted flagger programs for rapid CSAM removal. Prohibit or limit manipulative design patterns (autoplay, infinite scroll tweaks, fake urgency) that exploit developing brains. Demand interoperability so third-party parental controls and filtering extensions can integrate seamlessly.
- Prohibit profiling-based ads for anyone inferred as a minor (via context or self-declaration) and ban behavioral ads platform-wide to remove the core incentive for mass data collection. Enforce strict data minimization and ban selling/inferring age data via brokers.
- Algorithmic accountability. In the US, the bipartisan Algorithm Accountability Act introduced by Senators Mark Kelly and John Curtis offers a model: impose a duty of care on recommendation algorithms so companies can be sued for foreseeable harms like addiction, self-harm amplification, or radicalization. Furthermore, users should have the option to limit their feeds to posts from followed users in chronological order.
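To make the network-level proposal above more concrete, here is a minimal, hypothetical sketch of how a child device's resolver and firewall could enforce a registry-based allow-list. The registry contents and all names (`KID_SAFE_REGISTRY`, `resolve_for_child_device`, `firewall_allows`) are illustrative assumptions, not a real registry or API; a production system would fetch signed registry data and enforce it in the device's DNS resolver and network firewall.

```python
# Illustrative stand-in for a national or international registry of
# endpoints that websites have voluntarily declared child-safe.
KID_SAFE_REGISTRY = {
    "kids.example.org": {"93.184.216.34"},
    "learning.example.net": {"203.0.113.7"},
}

def resolve_for_child_device(hostname: str) -> set[str]:
    """DNS step: a child's device only gets answers for hostnames
    listed in the kid-safe registry; everything else resolves to nothing."""
    return KID_SAFE_REGISTRY.get(hostname, set())

def firewall_allows(dest_ip: str) -> bool:
    """Firewall step: as a backstop against direct-IP access, permit
    traffic only to addresses published in the registry."""
    allowed = set().union(*KID_SAFE_REGISTRY.values())
    return dest_ip in allowed

if __name__ == "__main__":
    print(resolve_for_child_device("kids.example.org"))    # registered site resolves
    print(resolve_for_child_device("social.example.com"))  # unregistered: empty set
    print(firewall_allows("198.51.100.1"))                 # unregistered IP is blocked
```

Because enforcement keys off registered endpoints rather than user identity, non-compliant sites can be detected by automated crawling, and no per-user ID check or face scan is ever involved.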
Beyond child safety: other issues in Big Tech platforms that need to be addressed
Beyond children, Big Tech’s enshittification demands fixing: harsh inactive-account deletion policies (especially Google’s) erase digital lives and undermine historical memory. Even worse, inadequate human support for lockouts at Facebook and Google leaves ordinary users stranded, as countless stories on forums like r/facebookdisabledme attest.
https://www.reddit.com/r/GMail/comments/1iqyzn4/locked_out_of_my_gmail_account_tried_their/
For harsh inactive-account deletion policies, we have already set out our thoughts and solutions in a previous article (https://pp-international.net/2026/02/digitallegacies/). These include mandating that large-scale web services, which could conceivably include Apple, AOL, Bluesky, Discord, Facebook, GitHub, Google (including YouTube), Mastodon.social, Microsoft, Instagram, LinkedIn, Proton, Pinterest, Reddit, Roblox, Steam, Threads, TikTok, Twitch, WordPress, X, and Yahoo, provide thanatosensitive functions that let users decide what happens to their accounts when they die. It would be great to see a European Citizens’ Initiative tackle the issue. As for inadequate human support in account lockout situations at large-scale social media platforms, particularly Facebook and Google, we believe legislation or another measure should mandate that these services provide adequate support to their users.
The Pirate Party has always stood for a free, open internet where innovation serves people, not the other way around. We reject the false choice between child safety and civil liberties. We can, and must, build systems that protect the vulnerable without handing governments and corporations the keys to every citizen’s digital identity. Plans like Malaysia’s and Turkey’s are not protections; they are the thin end of the wedge. Pirates say: not one inch. We also call for digital blackouts on various platforms, like those during the anti-SOPA protests in 2012, along with real-world demonstrations against these privacy-intrusive laws.
Let us choose freedom, creativity, and genuine accountability instead. The open internet is worth defending—for our children and for ourselves.
Babak Tubis, member of the boards of PPI and the German Piratenpartei, comments: “By now, 1984 is long outdated. 2026 shows just how much more the big tech companies have taken over this role, whilst governments hunger for this knowledge instead of fulfilling the mandate they received from their citizens.”
Keeping Our Kids Safe Without Losing Our Privacy Online
Have you noticed the news lately? Countries around the world—like Australia, France, and soon maybe others in Europe—are passing laws to keep young people off social media. The goal is to protect kids from online harm.
That sounds great, right? But there is a huge catch.
The Hidden Trap: The “Digital Panopticon”
To stop teenagers from using these apps, governments are telling Big Tech companies to check our ages. How do they do that? By asking everyone to upload a government ID or scan their face.
If this happens, the internet becomes a “digital panopticon”—a place where you are constantly watched. If we give platforms and governments our private IDs, we lose our right to be anonymous online.
This does not just hurt our privacy. It hurts people who truly need the internet:
- Vulnerable Youth: LGBTQ+ teenagers or kids with disabilities often rely on the internet to find friends and support when they feel alone in the real world.
- Whistleblowers and Activists: People who speak out against bad companies or unfair governments need anonymity to stay safe. If everyone must show an ID, speaking the truth becomes dangerous.
The Real Problem: Addictive Design
The core issue is not that children are using the internet. The real problem is how these platforms are built.
Big Tech companies design their apps to be addictive. They use endless scrolling, auto-playing videos, and manipulative algorithms to keep us glued to our screens so they can make more money. Banning users does not fix this broken system.
Solutions: What We Can Do as Users
We do not have to choose between child safety and our civil rights. As internet users, here is what we can do and what we should demand:
- Demand “Privacy by Default”: We should push for rules that force apps to be safe from the very beginning. When a minor creates an account, tracking, location sharing, and targeted ads should be turned off automatically.
- Use Local Tech Solutions: Parents can use network-level blocks, special child-safe routers, or devices designed just for kids. This keeps children safe without handing over our biometric data to big corporations.
- Support Algorithm Accountability: We must demand that platforms are held responsible for the harm their algorithms cause. We should have the option to see simple, chronological feeds instead of manipulative, algorithm-driven content.
- Speak Up for the Open Web: Support organizations like the Pirate Party or other digital rights groups. Join protests or digital blackouts to show politicians that we will not trade our privacy for flawed safety laws.
The Bottom Line
We can build an internet that protects young people without turning it into a massive surveillance machine. Let’s fix the broken platforms, not punish the users. Digital rights matter!
————————————————————————————————–
The following message was prepared by members of the PPI Discord community and PPI Board. It does not necessarily reflect the views of all PPI members, but we hope it does. If any of our members have competing ideas about this issue or any other issue that they would like us to broadcast, please share them with us. We are happy to broadcast a variety of ideological opinions and diverse issues. Our goal is to create positive communication to solve problems.
