
Australia's Social Media Ban Corners Big Tech While Privacy Stays at Risk

By Tech Writer and Security Investigator Dominykas Zukas
Last updated: 31 March, 2026

Over three months after Australia's world-first under-16 social media ban took effect, surveys found that roughly seven in ten children who previously had accounts on Facebook, Instagram, Snapchat, or TikTok still had one. The platforms had all publicly pledged to comply with the ban. Then they let kids retry age checks until they passed.

Of course, that's now catching up with them. Australia's eSafety Commissioner has formally opened investigations into Facebook, Instagram, Snapchat, TikTok, and YouTube for potential non-compliance with the ban that came into force on December 10, 2025, identifying specific failures in how each platform implemented age assurance.

Caught Red-Handed With the Loopholes Wide Open

The eSafety compliance report doesn't deal in vague concerns. Platforms were letting children who had already declared ages under 16 attempt fresh age-assurance checks repeatedly until they got a 16+ result. Some platforms only triggered age-assurance measures after a user tried to change their declared age, rather than at sign-up, making it likely many children created accounts by simply typing a different birth year. None of the five had effective pathways for users to report underage accounts.

In the first two days after the ban launched, platforms deactivated 4.7 million accounts, and another 300,000 have been blocked from being created since. Yet a substantial proportion of Australian children are still on these platforms, and the Commissioner has made clear her office is moving into an enforcement stance. The five platforms face fines of up to $49.5 million if found to have failed to take reasonable steps, with a decision on legal action expected by mid-year.

Communications Minister Anika Wells said she expects eSafety to "throw the book" at companies that have systematically failed to comply, accusing the platforms of using tactics "right out of the big-tech playbook" to undermine Australian law. The five platforms not currently under investigation (Reddit, X, Kick, Threads, and Twitch) have been assessed as showing sufficient compliance for now.

The Price of Compliance Is Everyone's Data

I think it's worth saying that watching big tech get held to a promise it made in public, in writing, is genuinely satisfying. But while the accountability half of this story is the nice part, the other half is harder to celebrate.

As we've discussed many times before, to verify who is under 16, platforms must verify who isn't, which means collecting biometric data, behavioral inference signals, or identity documents from every Australian user regardless of age. The law prohibits platforms from collecting government-issued ID directly, pushing them toward "reasonable alternatives," including facial age estimation, behavioral inference, and, worst of all, third-party verification service providers, all of which still generate large-scale sensitive data sets tied to individual users.

Reddit's ongoing legal challenge against the ban specifically cites this as the core problem, arguing that any age-assurance regime creates breach and hack risks for all users, not just minors. The Australian Human Rights Commission flagged the same concern before the ban passed, warning that age-assurance processes present serious privacy risks for everyone, given the scale of collection involved.

And, I mean, it's not like we don't already have dozens of examples of such systems backfiring. But Australia is now mandating this kind of infrastructure across ten major platforms nonetheless, and, as the AI and search engine crackdown makes clear, the scope keeps expanding.

Hold Them Accountable, Then Hold the Method Accountable Too

Making platforms answer for two decades of treating teenagers as monetization targets, after publicly committing to follow the law and then quietly building workarounds, is the right instinct. I'll say that clearly and without reservation. And yet, demanding compliance through mass biometric collection from 26 million people is building a surveillance infrastructure under a child-safety label.

The question Australia hasn't answered is why its $49.5 million enforcement lever isn't being used to require privacy-preserving compliance methods rather than just compliance by any means the platforms choose. If the goal is actually protecting children rather than normalizing mass identity verification across Australian internet infrastructure, that distinction matters enormously, and right now it doesn't seem to matter at all.


Dominykas Zukas
Tech Writer and Security Investigator

Dominykas is a technical writer with a mission to bring you information that will help you in keeping your digital privacy and security protected at all times. If there's knowledge that can help keep you safe online, Dominykas will be there to cover it.

Read more by this author
© Copyright 2026 UAB "MN Intelligence"