
It is technically feasible to verify the age of social media users, but allowing platforms to select their own methods could lead to inconsistency, according to a landmark study on age assurance. The study, released 100 days before the implementation of a federal social media ban for children under 16, highlights the complexities and risks associated with various age verification methods.
The federal government has unveiled the complete findings of a survey conducted by a British firm, aimed at assessing the viability of age assessment technologies. While preliminary findings released in June suggested that age could be verified with reasonable confidence through multiple approaches, the report did not pinpoint a definitive best method and highlighted risks inherent in all options.
“Implementation depends on the willingness of a small number of dominant tech companies to enable or share control of [age assurance] processes,” the report stated. “Co-ordination among dominant [tech] providers is essential if any truly ecosystem-wide age assurance model is to succeed.”
Challenges and Risks of Age Verification
The survey, which began prior to the government’s policy on the under-16 social media ban, explored age assurance broadly rather than evaluating the ban itself. However, the emphasis on dominant tech players is pertinent, as the ban requires social media platforms to verify user ages independently.
Communications Minister Anika Wells and eSafety Commissioner Julie Inman-Grant are expected to outline the “reasonable steps” platforms must legally undertake to comply with the ban. These steps may involve meeting specific accuracy standards or implementing privacy safeguards, but platforms will not be mandated to use a particular method.
The report identified several potential methods, including formal verification using government documents, parental approval, and emerging technologies that assess age based on facial structure, gestures, or behaviors. Despite technical feasibility, concerns about reliability and privacy persist across all methods.
Age assessment technologies were found to be less reliable for girls than for boys, and for non-white faces than for white faces, with an average error margin of two to three years.
Privacy Concerns and Technological Challenges
Accessing government documents like passports or licenses poses privacy risks, with some providers reportedly retaining user data unnecessarily. Nevertheless, these methods tend to be more accurate. Parental controls, utilized by companies like Apple and Google, also raise privacy and accuracy issues.
Despite these concerns, the survey identified several third-party verification providers capable of delivering reliable age assurance without retaining significant user data. “This report is the latest piece of evidence showing digital platforms have access to technology to better protect young people from inappropriate content and harm,” Ms. Wells stated.
While the report underscores the technological possibility of age assurance, it warns that leaving platforms to decide their methods could lead to inconsistency. Major platforms like Meta, Snap, TikTok, Google, and Apple are already developing or have developed age assurance methods, but these often operate in isolation and are not interoperable across platforms.
Expert Opinions and Future Implications
Experts have expressed skepticism about the effectiveness of any age verification method. Lisa Given, an information sciences professor at RMIT, remarked in June that the ban might not be viable and could lead to unexpected challenges for parents.
“We are going to see a messy situation emerging immediately where people will have what they call false positives [and] false negatives,” she said.
False positives occur when individuals under 16 are mistakenly identified as older, while false negatives involve those over 16 being deemed underage. The report noted false positive and false negative rates of around three percent for document-based verification, with a “grey zone” of two to three years for technologies that estimate age from facial or behavioral traits.
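As an illustration only (not drawn from the report), here is a minimal sketch of how such false positive and false negative rates would be computed for an “is this user 16 or over?” check; the function name and sample data are hypothetical:

```python
# Illustrative sketch: error rates for an over-16 age classifier.
# Each record pairs a user's true age with the classifier's verdict.

def error_rates(records):
    """records: list of (true_age, predicted_over_16) tuples.
    Returns (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for age, over in records if age < 16 and over)       # under-16 waved through
    fn = sum(1 for age, over in records if age >= 16 and not over)  # over-16 blocked
    n_under = sum(1 for age, _ in records if age < 16)
    n_over = sum(1 for age, _ in records if age >= 16)
    fp_rate = fp / n_under if n_under else 0.0
    fn_rate = fn / n_over if n_over else 0.0
    return fp_rate, fn_rate

# Hypothetical sample: three under-16 users, three over-16 users.
sample = [(14, True), (15, False), (13, False),
          (17, True), (20, False), (18, True)]
fp_rate, fn_rate = error_rates(sample)
```

In this toy sample, one of three under-16 users is waved through and one of three over-16 users is blocked, so both rates come out to one in three; the report's roughly three percent figures describe far better-performing document-based checks.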
Addressing the National Press Club in June, Ms. Inman-Grant emphasized that the ban, which she prefers to describe as a social media “delay,” would likely involve multiple technologies without any “technology mandates.”
“The technology exists right now for these platforms to identify under-16s on their services,” she said. “Companies will be compelled to measure and report on the efficacy and success of their efforts so that we can further gather evidence and evaluate.”
As the federal government moves forward with the under-16 social media ban, the findings of this study highlight the need for careful consideration of the methods employed and the potential risks involved. The ongoing dialogue between government officials, tech companies, and experts will be crucial in shaping effective and consistent age assurance measures.