Australia is set to introduce a world-first social media ban for children under the age of 16, effective from December 10, 2025. This landmark legislation, the Online Safety Amendment (Social Media Minimum Age) Bill 2024, marks a significant, and somewhat controversial, shift in how the nation approaches online safety for its youth.
HOW AGE VERIFICATION IS EXPECTED TO WORK
The new law mandates that specific social media platforms must take “reasonable steps” to prevent children under 16 from creating or maintaining accounts. The onus is squarely on the tech companies, not the child or their parents, with fines of up to AU$49.5 million for systemic failures.
Age verification methods are not uniform; platforms are expected to use a combination of approaches, which may include:
- Document-based Verification: Submitting a government-issued ID to a third-party verification service (platforms are prohibited from requiring government ID as the sole verification method).
- AI and Facial Age Estimation: Using biometric software to estimate age based on facial features.
- Tokenised Third-Party Checks: Verification through trusted intermediaries like mobile carriers or banks using a “double-blind token” system to confirm age without revealing identity to the platform.
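To make the "double-blind token" idea concrete, here is a minimal sketch in Python. It is illustrative only: the legislation does not prescribe a specific protocol, and the function names, the shared-key HMAC shortcut, and the ten-minute expiry are all my own assumptions. A real deployment would use asymmetric signatures (e.g. Ed25519) so that platforms can verify tokens without being able to forge them.

```python
# Hypothetical sketch of a "double-blind" tokenised age check, assuming a
# trusted third-party verifier (e.g. a bank or mobile carrier). Nothing here
# is mandated by the legislation; it only illustrates the privacy property.
import base64
import hashlib
import hmac
import json
import secrets
import time

# In production this would be an asymmetric key pair, so the platform can
# verify tokens without being able to mint them. A shared HMAC key keeps
# this sketch self-contained and runnable with the standard library.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_age_token(user_is_over_16: bool) -> str:
    """Run by the verifier AFTER it has checked the user's ID privately.
    The token carries only the age claim: no name, DOB, or document data."""
    claims = {
        "over_16": user_is_over_16,
        "nonce": secrets.token_hex(8),   # prevents reuse and correlation
        "exp": int(time.time()) + 600,   # short-lived: ten minutes
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{tag}"

def platform_accepts(token: str) -> bool:
    """Run by the social media platform. It learns ONLY whether the bearer
    is over 16, never who they are or which documents were checked."""
    payload, tag = token.rsplit(".", 1)
    expected = hmac.new(VERIFIER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False  # token was not issued by the trusted verifier
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["over_16"] and claims["exp"] > time.time()

# The verifier checks ID out of band, then hands the user an anonymous
# token to present at sign-up.
token = issue_age_token(user_is_over_16=True)
print(platform_accepts(token))  # True
```

The design point worth noting is that the token carries a claim, not an identity: the verifier never learns which platform the token is spent at, and the platform never sees the document that was checked.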
Existing accounts held by under-16s are expected to be deactivated or removed unless they can be verified as being over the age limit.
THE BANNED PLATFORMS
The law applies to services whose significant purpose is enabling online social interaction, where users can post content and interact with one another. The platforms confirmed to be covered by the new age restrictions include:
- Facebook (Meta)
- Instagram (Meta)
- TikTok
- YouTube
- X (formerly Twitter)
- Snapchat
- Kick
Exemptions are generally provided for services with the primary purpose of health or education, as well as some direct messaging services like WhatsApp and Messenger Kids.
STREAMING AND COMMUNICATIONS: KICK VS TWITCH AND DISCORD
One of the more peculiar aspects of the ban, at least in my mind, is the inclusion of the live-streaming platform Kick while the dominant platform in that space, Twitch, is not explicitly listed, despite the two offering similar presentation and often identical content. Nor is the highly popular communications app Discord covered.

The decision to cover Kick while apparently excluding Twitch, a platform with similar, if not greater, exposure to unregulated live content, raises questions about the legislation's consistency.
Added to this, the exclusion of Discord presents a more significant concern. With traditional social media avenues closed off, experts predict an inevitable influx of under-16 users to platforms like Discord. As a primarily private communications platform featuring voice chat, direct messaging, and community forums, it is designed for rapid, real-time social interaction. If a mass migration of unverified minors occurs, Discord could quickly become the next major online safety problem, a risk the current legislation appears to overlook.
WHY ALL THE FUSS?
The Australian Government’s primary rationale for the social media ban is rooted in the critical protection of children’s mental health and wellbeing. The policy is a direct response to growing public and clinical concern over the digital environment minors navigate daily.
At its core, the legislation aims to mitigate online harm, aggressively reducing young users’ exposure to pervasive issues like cyberbullying, harmful content (including material related to sexual content, self-harm, pro-anorexia, and violence), and the ever-present threat of online predators.

Beyond specific threats, the government is also focused on addressing platform design itself. Officials have targeted the “powerful, unseen forces of opaque algorithms and endless scroll” which are engineered to maximise engagement, often encouraging excessive screen time that is believed to negatively impact critical developmental periods for young people.

Finally, the ban is intended as a measure to support parents. By establishing a definitive, legal boundary around social media access, the government seeks to ease the constant conflict and negotiation faced by families regarding screen time and online activity, giving parents a powerful legal tool to back up their household rules.
HAS THIS BEEN APPROACHED AND COMMUNICATED THE RIGHT WAY?
While the intent to protect children is widely supported, the execution and communication of the policy have faced scrutiny.
- Communication Gaps: Despite the law coming into effect soon, the final, binding list of impacted platforms and the precise mechanisms for age verification have been slow to solidify. This lack of clarity creates confusion for parents, children, and the tech companies responsible for implementation. Could the government have adopted a clearer, industry-standard approach to age verification rather than allowing platforms to self-assess and choose their own “reasonable steps”?
- Focus on Existing Threats: By only targeting a specific list of traditional social media and one live-streaming platform, the law risks simply pushing the problem to adjacent, but unregulated, spaces like Discord, as noted above. Was there enough consultation on the broader digital ecosystem, particularly communication and gaming platforms that possess identical “social interaction” features?
- Potential for Isolation: Critics, including some human rights groups, argue that restricting access may inadvertently lead to isolation for some teens, disconnecting them from their established social networks. The government’s messaging has not fully acknowledged or provided resources to mitigate this potential “social penalty” for affected youth.

Ultimately, the new law is a powerful statement about a government taking control of digital regulation, but its success will hinge on its real-world efficacy and on whether the oversight is robust enough to adapt to the fluid landscape of online platforms.