December 1, 2025

Australia’s Landmark Plan to Block Under-16s From Social Media

Australia is preparing to implement one of the world’s strictest regulations on social media use by children, banning under-16s from holding accounts on major platforms starting December 10, 2025. The policy, which has drawn global attention, is designed to address growing concerns about online harms, including mental health risks, exposure to inappropriate content, and addictive usage patterns among children and teenagers.

Unlike previous attempts in other countries that focused on parental controls or voluntary safeguards, Australia’s approach places legal responsibility squarely on social media companies themselves. Regulators insist this shift is critical to making age restrictions meaningful rather than symbolic.

Why Australia Is Taking This Step

The Australian government has cited mounting evidence linking excessive social media use with anxiety, depression, cyberbullying, and reduced attention spans among young people. Policymakers argue that existing safeguards have failed to protect children adequately, largely because platforms rely on self-reported ages that are easily falsified.

Authorities believe stronger intervention is necessary given how many adolescents are active on popular platforms. According to industry estimates, Instagram alone hosts hundreds of thousands of Australian users aged 13 to 15, many of whom will now be affected by the ban.

How the Age Ban Will Work

From December 10, social media companies will be required to remove accounts belonging to users under the age of 16 within Australia. These restrictions apply to account creation and ownership, not passive viewing.

Importantly, younger users will still be able to access some public content without logging in. What they can no longer do is create profiles, upload content, or interact through personal accounts.

Not every user will need to prove their age upfront. Instead, platforms must identify accounts suspected of violating the age limit. How that suspicion is triggered—through data patterns, account behaviour, or reporting—will largely be determined by each platform.

Verification and Enforcement

One of the most contentious aspects of the ban is how age verification will be enforced. The Australian government has deliberately avoided mandating a single verification method, arguing that flexibility will encourage innovation while reducing privacy risks.

Social media platforms are expected to use a mix of tools, including AI analysis, account history checks, and user-submitted proof. Meta, which owns both Facebook and Instagram, has already begun flagging accounts based on the age provided when profiles were created.

Users who are wrongly identified as under 16 will be able to appeal by submitting a video selfie or government-issued identification. However, privacy advocates remain concerned about data collection, storage, and potential misuse.

Which Platforms Are Affected

The ban covers many of the most widely used platforms among young people, including Facebook, Instagram, Snapchat, and TikTok. Streaming-oriented platforms such as Kick and Twitch are also included.

In a notable and controversial move, YouTube has been added to the list, despite earlier signals it might be exempt so children could continue accessing educational content.

Several popular services remain outside the ban—for now. Apps and platforms such as Roblox, Pinterest, and WhatsApp are currently exempt, though regulators have stressed that the list will be reviewed regularly and may expand over time.

Concerns About Workarounds

Australian authorities acknowledge that determined teenagers may attempt to bypass restrictions. Potential tactics include uploading fake identification documents, borrowing IDs from older individuals, or using AI-enhanced images to appear older.

Rather than dictating technical solutions, regulators expect platforms to develop their own systems to detect and prevent such behaviour. The national internet safety watchdog has admitted that no approach will be flawless, especially in the early stages.

“Of course, no solution is likely to be 100 percent effective all of the time,” the regulator noted in its guidance, emphasizing that gradual improvement is expected as systems mature.

Severe Financial Penalties

To ensure compliance, the new law gives regulators sharp enforcement tools. Platforms that fail to take “reasonable steps” to remove underage users could face fines of up to AUD 49.5 million.

What qualifies as “reasonable” remains intentionally broad. Regulators say this flexibility allows enforcement decisions to consider platform size, resources, and available technology. Critics, however, argue that vague definitions could lead to inconsistent enforcement or prolonged legal disputes.

Global Implications

Australia’s move is being closely watched by governments around the world. Countries in Europe, Asia, and the Middle East are tracking whether the ban reduces harm without creating excessive privacy risks or enforcement chaos.

If successful, Australia’s under-16 social media ban could become a blueprint for global regulation—reshaping how tech companies design platforms and how societies define children’s rights online.
