In late 2025, Australia is scheduled to do something no other country has dared to finish: draw a hard line between childhood and the digital marketplace.
The Age Assurance Technology Trial, which began quietly and without fanfare, is now being scrutinised, contested, and misrepresented, sometimes in equal measure. It aims to do one thing: enforce a ban on under-16s using certain social media platforms, by finally answering a question the platforms have danced around for two decades. How old is the person behind the screen?
It sounds simple. It’s not.
To understand the urgency of this trial, you must first understand the environment it’s responding to. This is not a theoretical policy. This is a consequence. What we are witnessing is the slow reckoning of a generation raised in the slipstream of algorithmic attention, where identity is fluid, surveillance is ambient, and childhood is not protected but mined. The evidence is not subtle.
Rates of anxiety, depression, and self-harm in adolescents have surged since 2010, when smartphones and social media became ubiquitous. In Australia, eSafety reports that almost half of children aged 8 to 12 are on social media despite platform age minimums. The numbers are worse overseas. The platforms know it. Governments know it. The kids know it too.
Until now, age gates were a joke. “Are you 13?” was a digital ritual with no teeth. But this trial changes the question. It doesn’t ask if you’re old enough. It verifies it.
Biometric facial analysis, government ID checks, behavioural cues, and even blockchain-based verification are not tools of control for their own sake. They are a last resort in a policy vacuum that has asked parents to guard a perimeter that doesn’t exist. No mother can stand between her child and TikTok’s server farm in Singapore. No father can out-code Instagram’s recommendation engine. You cannot parent your way out of a design that treats children as data points in a behavioural economy.
Critics argue the technologies are flawed. They are. Accuracy varies. Ethnic bias in facial age estimation systems should be declared transparently, and any large-scale rollout must address it. But the flaw is not in the intent. The flaw is in the inertia that left parents to fight this battle alone for so long. Any tool that aims to place even partial friction between vulnerable users and profit-maximising systems deserves scrutiny, yes, but also credit. It is easier to do nothing. It is politically safer to pass the problem sideways, which is what the United States and Britain have done for over a decade.
Australia is not pretending this is perfect. In 2023, the federal government rejected mandating age verification for pornography sites, citing technological immaturity and privacy concerns. That wasn’t a cop-out. It was a warning. The difference now is that the tech has improved, and the stakes are even higher.
But here’s where it gets complicated. The people building the verification systems aren’t the ones who control access to the platforms. Meta, Snap, TikTok, X, Reddit: these are the landlords of the digital high-rise. They decide who gets in and who stays. And so far, they’ve kept the lease terms vague.
All of them have something to lose if they admit that children drive a significant portion of engagement. And that’s what this threatens: not their public image, but their business model.
Still, criticism that the system can be evaded misses the point. Of course it can. No border is impenetrable. But that’s not an argument against a border. It’s an argument against pretending the status quo is defensible. In the same way laws against underage drinking or driving don’t eradicate the behaviour but set cultural and legal boundaries, this trial is not about perfect enforcement. It’s about taking the first credible step toward drawing the line.
There are real risks. Privacy must be guarded, data must not be hoarded, and transparency must be enforced. These systems must be tested not just for accuracy, but for unintended consequences. Oversight must be independent. But the alternative, doing nothing and waiting for platforms to self-regulate, is not a plan. It is surrender.
While some tech companies claim this is government overreach, many of them are doing something more insidious. They are building tools that mimic social media but don’t trigger the regulations. Google’s new AI companions for under-13s fall outside the ban’s scope but replicate the psychological mechanics of a social network. They simulate friendship, attention, conversation. That’s not an accident.
What the Age Assurance trial recognises, and what most critics miss, is that platforms don’t just connect people. They shape brains. They reward exhibition. They punish nuance. They amplify beauty and filter out the ugly parts of growing up. They’re not neutral environments. They are high-stakes laboratories of self-worth, identity, and emotion, and kids are the guinea pigs.
So no, this isn’t a perfect policy. It will need correction. It will need reform. But it is a policy that does something essential. It creates a mechanism for saying no. Not to the child, but to the corporations that have for too long said yes to every click, every swipe, every fragment of data, no matter who it came from.
Childhood is not a UX problem. It is not a metric to be optimised. It is a developmental stage of life that deserves the full force of protection from a society that has finally woken up to the price of its own digital negligence.
Australia is no longer waiting for Silicon Valley’s permission. That matters.
Because if we get this right, we won’t just be verifying age. We’ll be reclaiming responsibility.