Australia is set to implement a groundbreaking ban on social media for children under the age of 16, marking a world-first policy aimed at safeguarding young users from online harm. From December 10, social media companies will be required to take “reasonable steps” to ensure that under-16s in Australia cannot create new accounts, and existing accounts must be deactivated or removed.
The government has positioned this ban as a protective measure to reduce the “pressures and risks” associated with social media use among children. These risks often stem from platform design features that encourage prolonged screen time and expose users to potentially harmful content.
Understanding the Scope of the Ban
The Australian government has identified ten major platforms that will be affected by the ban: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, and streaming platforms Kick and Twitch. This move comes amid growing concerns about the exposure of children to harmful content and behaviors on these platforms.
A study commissioned by the government earlier this year revealed that 96% of children aged 10-15 used social media, with 70% encountering harmful content. This includes exposure to misogynistic material, violent videos, and content promoting eating disorders and suicide. Alarmingly, one in seven children reported experiencing grooming behavior from adults or older children, and over half had been victims of cyberbullying.
Platforms Under Pressure
While the ban currently targets social media platforms, there is increasing pressure to extend it to online gaming. In response, gaming platforms like Roblox and Discord have begun implementing age verification checks in a bid to avoid being added to the list. The government will continue to review which platforms are covered based on criteria such as a platform’s purpose, its user interaction capabilities, and whether it allows content posting.
Enforcement and Compliance Challenges
Social media companies, rather than children or parents, will bear the responsibility of enforcing the ban. Companies face fines of up to A$49.5 million (US$32 million; £25 million) for serious or repeated breaches. They are expected to employ age assurance technologies, though the government has not specified which methods must be used.
Potential age verification methods include government IDs, facial or voice recognition, and age inference, which estimates age based on online behavior. However, the government has prohibited reliance on self-declared ages or parental verification.
“It takes Meta about an hour and 52 minutes to make $50 million in revenue,” noted former Facebook executive Stephen Scheeler, questioning the adequacy of the fines.
Meta has announced that it will begin closing teen accounts from December 4, allowing those mistakenly removed to verify their age through government ID or video selfies. Other platforms have yet to disclose their compliance strategies.
Potential Effectiveness and Criticisms
There is skepticism about the effectiveness of the ban, particularly regarding the reliability of age assurance technologies. The government’s own report indicated that facial age-assessment technology is least reliable for the very age group the ban aims to protect. Critics also question whether the fines are substantial enough to ensure compliance.
Some argue that the ban might not significantly reduce online harm, as it excludes dating websites, gaming platforms, and AI chatbots, which have been criticized for inappropriate interactions with minors. Others believe that educating children on navigating social media would be more effective than a blanket ban.
Communications Minister Anika Wells acknowledged potential imperfections in the ban, stating, “It’s going to look a bit untidy on the way through. Big reforms always do.”
Data Protection Concerns
There are also concerns about the data collection required for age verification. Australia has experienced several high-profile data breaches in recent years, raising fears about the mishandling of sensitive personal information.
The government assures that the legislation includes “strong protections” for personal information, stipulating that data may only be used for age verification and must be destroyed afterward, with serious penalties for breaches. Platforms are also required to offer alternatives to government IDs for age verification.
Global Context and Reactions
The ban has drawn significant attention worldwide, as no other country has implemented a similar comprehensive restriction. Various countries have adopted different approaches to limit children’s screen time and exposure to harmful content, but none have enacted a total ban on social media platforms.
In the UK, new safety rules introduced in July impose large fines or potential jail time for executives if companies fail to protect young users from harmful content. European countries like France and Denmark are considering similar age restrictions, while a US attempt to ban under-18s from social media without parental consent was blocked by a federal judge.
Social media companies have expressed concerns about the ban’s implementation challenges, potential privacy risks, and its impact on young users’ social interactions. Some platforms, like YouTube, are contemplating legal challenges, while others, including TikTok and Snap, have committed to compliance despite opposition.
Teens’ Reactions and Workarounds
Many teenagers are reportedly creating new accounts with fake ages in anticipation of the ban, despite government warnings to social media companies to identify and remove such accounts. Online forums are filled with discussions on alternative social media apps and tips for bypassing the ban.
Some teens have resorted to joint accounts with their parents, and a rise in VPN usage is predicted, mirroring the surge seen in the UK after it introduced similar age checks.
As Australia embarks on this unprecedented regulatory path, the world will be closely observing its outcomes and implications for the future of social media governance.