
It is technically possible to verify the age of social media users, but allowing platforms to choose their own methods could lead to inconsistency, a landmark study on age assurance has found. With just 100 days until the federal government’s social media ban for children under 16 takes effect, the full findings of the trial, conducted by a British firm, have been published, assessing the viability of age assurance technologies.
The study, whose preliminary findings were released in June, concluded that age could be assessed with reasonable confidence through various methods. However, it found no clear best approach and highlighted risks and shortcomings associated with all methods.
“Implementation depends on the willingness of a small number of dominant tech companies to enable or share control of [age assurance] processes,” the report stated. “Co-ordination among dominant [tech] providers is essential if any truly ecosystem-wide age assurance model is to succeed.”
Government Policy and Industry Response
The trial began before the under-16 social media ban became government policy, and the authors were tasked with considering age assurance more broadly, rather than evaluating the ban specifically. Nonetheless, the report’s comments about dominant tech players are particularly relevant, given that the ban will require social media platforms to verify the age of their users independently.
Communications Minister Anika Wells and eSafety Commissioner Julie Inman-Grant are expected to announce the “reasonable steps” platforms will need to take to comply with the ban in the coming weeks. These steps may require platforms to meet certain standards for accuracy or implement privacy safeguards, but they will not mandate the use of specific methods.
Technological Possibilities and Challenges
The report identified several methods as technically possible, including formal verification using government documents, parental approval, and emerging technologies that assess age based on facial structure, gestures, or behaviors. However, concerns about reliability and privacy were noted with all these approaches. Age assessment technologies were found to be less reliable for girls than boys and for non-white faces, with an average error margin of two to three years.
Accessing government documents, such as passports or licenses, posed privacy risks, with some providers unnecessarily retaining user data. Despite these concerns, the trial identified several third-party verification providers capable of delivering reliable age assurance without storing significant user data.
“This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Ms. Wells said. “While there’s no one-size-fits-all solution to age assurance, this trial shows there are many effective options and, importantly, that user privacy can be safeguarded.”
Inconsistencies and Proprietary Solutions
The report highlighted that major social media and tech platforms, including Meta, Snap, TikTok, Google, and Apple, have already developed or are developing their own age assurance methods. However, the study could not provide detailed assessments of these proprietary methods, as they often operate in isolation and are not interoperable across platforms.
“Individual services (for example, YouTube, TikTok, Roblox) implement their own systems for account creation, age gates, content filtering, and parental features,” the report noted. “However, these solutions often operate in isolation and are not interoperable across platforms … Reliance on voluntary or proprietary measures [by platforms] leaves many children unprotected or inconsistently treated.”
The report also considered measures to prevent circumvention of age assurance methods, such as using virtual private networks and “deepfakes” of government documents or faces. While many providers are actively working to counter these methods, the report did not identify foolproof solutions, and both Ms. Wells and Ms. Inman-Grant have acknowledged that no method will be entirely secure.
Expert Opinions and Future Implications
Experts have expressed concerns about the effectiveness of any method. Lisa Given, a professor of information sciences at RMIT University, voiced skepticism about the ban’s viability, predicting problems with both false positives and false negatives.
“We are going to see a messy situation emerging immediately where people will have what they call false positives [and] false negatives,” she said.
In this context, a false negative occurs when an individual over 16 is deemed underage, while a false positive occurs when someone under 16 is deemed overage and waved through.
The report found false positive and negative rates to be around three percent for age verification using official documents. For technologies assessing age based on physical traits, a “grey zone” of two to three years on either side of the age limit was identified, with some errors extending to four years.
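The error categories described above can be sketched in a few lines of Python. The data, function names, and pass/block logic here are invented purely for illustration; they are not drawn from the trial itself, which did not publish its methods in this form:

```python
# Hypothetical illustration of age-gate error types.
# A gate "passes" a user it estimates to be at or above the cutoff age.

def gate_passes(estimated_age: float, cutoff: int = 16) -> bool:
    return estimated_age >= cutoff

# (true_age, estimated_age) pairs; estimates drift by a few years,
# mirroring the two-to-three-year "grey zone" the report describes.
samples = [
    (14, 17.0),  # under 16 but passes  -> false positive
    (15, 14.5),  # under 16, blocked    -> correct
    (17, 15.0),  # over 16 but blocked  -> false negative
    (18, 19.5),  # over 16, passes      -> correct
]

false_positives = sum(1 for true, est in samples
                      if true < 16 and gate_passes(est))
false_negatives = sum(1 for true, est in samples
                      if true >= 16 and not gate_passes(est))

print(false_positives, false_negatives)  # 1 1
```

The toy cutoff check stands in for whatever document check or facial estimate a real provider would use; the point is only how the two error types are counted.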
Addressing the National Press Club in June, Ms. Inman-Grant referred to the ban as a social media “delay,” indicating it would likely involve a range of technologies without “technology mandates.”
“The technology exists right now for these platforms to identify under-16s on their services,” she said. “Companies will be compelled to measure and report on the efficacy and success of their efforts so that we can further gather evidence and evaluate.”
As the deadline for the social media ban approaches, the debate over the best methods for age verification continues, with significant implications for privacy, technology, and the protection of young users online. The upcoming announcements from government officials will be closely watched by industry stakeholders and privacy advocates alike.