
Enforcing Australia’s social media ban on children under 16 is feasible but fraught with risks, according to a recent report. While the initiative garners support from many parents, experts express concerns about data privacy and the reliability of age verification technologies.
The new legislation, set to take effect in December, mandates that social media platforms take “reasonable steps” to prevent Australians under 16 from creating accounts and to deactivate existing ones. Touted as a world-first policy, it aims to mitigate the harmful effects of social media and is being closely monitored by global leaders.
Exploring Age Verification Technologies
The federal government commissioned the UK-based Age Check Certification Scheme to explore methods for enforcing the ban. The report, released on Sunday, examined various technologies, including formal verification using government documents, parental approval, and age estimation based on facial structure, gestures, or behaviors. While all of the methods were technically feasible, none was universally applicable or guaranteed to work in every scenario.
“But we did not find a single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments,” the report stated.
Verification using identity documents emerged as the most accurate method. However, the report highlighted concerns that platforms might retain this data longer than necessary or share it with regulators, posing risks to user privacy. Australia has faced several high-profile data breaches in recent years, raising alarms about the security of sensitive personal information.
Challenges of Facial Recognition and Parental Approval
Facial assessment technology demonstrated a 92% accuracy rate for individuals aged 18 or over. However, the report identified a “buffer zone”—approximately two to three years on either side of age 16—where accuracy diminishes. This could result in false positives, allowing children to create accounts, or false negatives, barring eligible users.
Parental approval methods also raised privacy and accuracy concerns. The report recommended a “layered” approach, combining several verification methods, to build a more robust system. It also noted that many technology providers are exploring ways to counter circumvention tactics, such as document forgeries and VPNs that obscure a user’s location.
Government and Industry Reactions
Communications Minister Anika Wells acknowledged the absence of a “one-size-fits-all solution” but emphasized that age checks could be “private, efficient, and effective.” She urged social media companies to implement a combination of age assurance methods by the December 10 deadline.
“These are some of the world’s richest companies. They are at the forefront of AI. They use the data that we give them for a bevy of commercial purposes. I think it is reasonable to ask them to use that same data and tech to keep kids safe online,” Wells stated.
Under the new regulations, tech companies face fines of up to A$50 million ($32.5 million; £25.7 million) if they fail to take “reasonable steps” to prevent minors from holding accounts. Platforms such as Facebook, Instagram, Snapchat, and YouTube are among those affected.
Public Opinion and Potential Consequences
Polling indicates strong support among Australian adults for banning social media access for children under 16. However, mental health advocates warn that the policy may isolate children from social connections. Others argue it could drive minors to less-regulated areas of the internet.
Critics suggest that the government should focus on better policing harmful content on social media platforms and preparing children for the realities of online life. The debate continues as Australia prepares to implement this groundbreaking policy.