More than 60% of Australian teenagers who had social media accounts before December 10, 2025 still have access to at least one of those platforms. The ones doing most of the circumventing are using their parents' face IDs, VPNs, and, in at least one documented case, a printed mesh mask from Temu to fool biometric age checks. Five months into the world's first national social media ban for under-16s, Australia has become something of an unintentional comedy of digital governance — except the privacy implications are no joke at all.
Into this environment walked Jimmy Wales. In town for a speaking event at RMIT University in Melbourne and an interview with Crikey published May 1, the Wikipedia co-founder didn't hold back, branding Australia's social media ban "a really bad idea" — and his case rests not on sympathy for Silicon Valley platforms but on a concern that's easy to overlook in the culture-war noise around children and screens: privacy. His biggest worry, he told Crikey, is "the erosion of privacy online and in many places around the world."
That framing resets the entire debate.
The Law, and What It's Actually Done
Australia's Online Safety Amendment (Social Media Minimum Age) Act 2024 passed parliament on November 29, 2024 and came into force on December 10, 2025. It prohibits users under 16 from holding accounts on Facebook, Instagram, Snapchat, TikTok, X, YouTube, Reddit, Threads, Twitch, and Kick. Social media companies face fines of up to $49.5 million per breach if they fail to take "reasonable steps" to prevent under-16s from signing up or keeping existing accounts.
To comply, Snap alone has locked or disabled more than 415,000 Australian accounts belonging to people it believes are under 16 (Snap Newsroom). eSafety Commissioner Julie Inman Grant has accused platforms of allowing children who had already declared themselves underage to make repeated attempts at verification. In late March 2026, Communications Minister Anika Wells confirmed investigations into Facebook, Instagram, Snapchat, TikTok, and YouTube for potential violations.
In a survey of 1,050 Australians aged 12 to 15 conducted last month, the UK-based Molly Rose Foundation found more than 60% of teens who had social media accounts before the ban still had access to at least one of those platforms. About two-thirds of young users say the platforms have taken "no action" to remove or deactivate accounts that existed before the restrictions (Fortune).
The ban achieved something genuinely impressive: it unified two groups that rarely agree. Australian teenagers think it doesn't work. Australian parents think it doesn't work. Polling from December 2025 showed 70% of voters endorsed the ban — and 67% believed it would not achieve its aims.
Those two data points sitting in the same poll are extraordinary. A democratic government passed a law that its own supporters expected to fail. And now, five months on, the supporters appear to have been right.
What Jimmy Wales Is Actually Arguing
Wales isn't defending Meta or TikTok. That's the misread that flattens this critique into a corporate-versus-government frame. His argument is more fundamental: the mechanisms required to enforce an age ban — biometric identity verification, government ID checks, device-level age signals — create privacy infrastructure that can and will be used for things beyond protecting children.
"One of the biggest concerns I have right now is the erosion of privacy online and in many places around the world," Wales said in his Crikey interview. The point isn't theoretical. Australia's law requires platforms to collect identity signals to determine whether a user is over or under 16. That data — linked to actual government IDs or biometric scans — becomes a new category of sensitive personal information held by companies that have a poor historical record of securing it (Crikey).
"The problem with age verification mandates isn't that they're ineffective — though they are. It's that they're effective at the wrong thing. They don't stop teenagers from accessing content. They do create centralised identity verification infrastructure that didn't exist before, and that infrastructure is worth far more to bad actors and surveillance-minded governments than anything teenagers post on Instagram." — Jimmy Wales, speaking to Crikey, May 1, 2026
Wales published The Seven Rules of Trust in October 2025 through Penguin Random House, a book that examines what made Wikipedia work — and what breaks trust in institutions. His intervention on the Australian ban fits directly into that framework. The credibility problem with heavy-handed digital regulation isn't just that it fails teens. It's that it erodes public trust in the regulatory bodies that impose it when the failures become obvious.
A Privacy Trade Nobody Fully Calculated
Here's where the analysis gets uncomfortable for the ban's supporters. The Australian Child Rights Taskforce took issue with the law on the grounds that it could disincentivise platforms from building better child-safety features — since, in theory, under-16s aren't supposed to be on the platform at all. That's a real-world consequence that received almost no attention during parliamentary debate.
Digital Industry Group Inc., the Australian nonprofit representing digital companies, argued the ban would push users under 16 toward unregulated and potentially more dangerous parts of the internet. A 14-year-old who can't get on Instagram doesn't necessarily stop consuming content. She opens a Telegram group or a Discord server, or browses 4chan. None of those platforms is covered by the ban. Some are considerably more dangerous than anything TikTok's algorithm serves up (Fortune).
Reddit's legal challenge in the High Court makes a pointed version of this argument: "A person under the age of 16 can be more easily protected from online harm if they have an account, being the very thing that is prohibited." Reddit also argues the law breaches the Constitution's implied freedom of political communication as it applies to young people — a line of argument that will take months to resolve, but one that puts the government in the awkward position of defending the proposition that teenagers have no constitutional right to read news online (Wikipedia).
Snap CEO Evan Spiegel made a similar point in an op-ed published in the Financial Times on February 18, 2026. "Compliance with the law does not guarantee that Australian teens will be safer or better off," Spiegel wrote. "Research published in JAMA Pediatrics found that moderate social media use appears to support adolescent wellbeing, especially for Australian teens in grades 7-12. The optimal approach appears to be thoughtful engagement and moderation, not total prohibition." (Snap Newsroom)
The Global Stakes of Australia's Experiment
Australia's government was explicit that this law was intended to lead internationally. Prime Minister Anthony Albanese called it the first domino. The phrase "world-first" appeared in every government press release. International momentum has already grown: in his New Year's Eve 2025 address, the President of France pledged to "protect our children and teenagers from social media," and interest has been reported from Indonesia, Malaysia, Greece, Romania, and Denmark (Springer).
That makes the failure mode dangerous far beyond Australia's borders. If governments across Europe and Southeast Asia import Australia's enforcement model — platform-level age verification with significant financial penalties for non-compliance — they'll also import its privacy architecture and its circumvention patterns. France, which has been developing its own proposed age verification legislation for several years, is watching closely. So are regulators in the UK, where the Online Safety Act 2023 already imposes significant obligations on platforms accessed by children.
The US is further behind on legislative coherence but moving fast at the state level. At a January 28, 2026 FTC workshop on age verification technologies, FTC Chair Andrew Ferguson previewed that the FTC may support an amendment to COPPA and issue a policy statement on age verification, stating that "the flourishing of our nation's children depends on the privacy of their personal data and on the capacity of parents to control who has access to their child's data." The FTC's framing — privacy-as-protection rather than restriction-as-protection — is notably different from Canberra's approach, and may produce a less blunt instrument (Sidley).
For founders and operators building consumer products with any youth-adjacent component, the Australian experiment is the case study that will define the next decade of platform regulation globally. The question isn't whether governments will regulate. They will. The question is whether they'll learn from Canberra's specific mistakes — the privacy trade-off, the enforcement gap, the push to unregulated dark corners — or whether they'll replicate them at scale across 30 countries.
The case for the ban isn't nothing. Jonathan Haidt's 2024 book The Anxious Generation, which directly inspired South Australian Premier Peter Malinauskas to act, documents genuine evidence of social media's impact on adolescent mental health. The Queensland Chief Health Officer's assessment cited "compelling indications of possible negative links between unrestrained social media usage and the cognitive, emotional, and social wellbeing of young people." The political instinct to do something wasn't irrational. The specific mechanism chosen — age bans enforced through identity verification on individual platforms — may simply be the wrong tool for a real problem.
Key takeaways from Australia's first five months under the world's most ambitious social media ban:
More than 60% of affected Australian teenagers still access at least one banned platform. Snap alone has disabled more than 415,000 accounts. The privacy infrastructure required to enforce the ban creates centralised identity data that didn't exist before. Fifteen countries — including France, Malaysia, Indonesia, and Denmark — are actively considering similar laws. Reddit and the Digital Freedom Project are pursuing High Court challenges on constitutional grounds. And the man who built the most successful collaborative information project in internet history thinks the whole approach is a "really bad idea."
Wales's verdict frames Australia's social media ban not as a failure of political will, but as a policy built on the wrong foundation. Age verification at platform level doesn't stop teenagers — it delays them briefly, while creating lasting privacy infrastructure and pushing them to less regulated corners of the internet. Those aren't unintended consequences. They were predicted consequences. That prediction is now a data point, not a theory.






