
Australian schools are being urged to take immediate action against the rising threat of deepfake technology, particularly the proliferation of ‘nudify’ apps used to perpetrate technology-facilitated sexual violence. The eSafety Commissioner, Julie Inman Grant, has called on education ministers to ensure that schools comply with state and territory laws and mandatory reporting obligations. The warning comes as reports of digitally altered intimate images, including deepfakes involving minors, have more than doubled in the past 18 months compared with the previous seven years combined.
According to eSafety, a significant majority of these reports involve young girls, highlighting a disturbing trend. While the exact number of reports remains undisclosed, Inman Grant cautioned that the reality might be even more severe than currently reported. “We suspect what is being reported to us is not the whole picture,” she stated, emphasizing the anecdotal evidence from school leaders that deepfake incidents are increasingly common.
The Deepening Crisis in School Communities
Deepfake technology, which uses artificial intelligence to manipulate images and videos, is described by eSafety as a “crisis affecting school communities across Australia.” These tools, particularly ‘nudify’ apps, are becoming more accessible to young people and are being misused to create non-consensual explicit images. The ease of use and accessibility of this technology pose significant risks, as it can cause profound personal harm.
“With just one photo, these apps can nudify the image with the power of AI in seconds,” Inman Grant explained. The misuse of these apps has led to cases of humiliation, bullying, and even sexual extortion among school children. Reports have also surfaced about these images being traded among students for money.
Expert Insights on the Normalization of Deepfakes
Asher Flynn, a criminology professor at Monash University specializing in AI-facilitated abuse, noted that the rise in reports to eSafety is “confronting, but not unexpected.” Flynn highlighted the emergence of ‘user-friendly’ deepfake creation tools and the normalization of creating such content. Her research indicates that sexualized deepfake abuse results in physical, psychological, social, reputational, and financial harms to victims.
“We are also seeing a range of motivations for the incidents, from sexual gratification, to intentionally causing harm, controlling or degrading the target of the image, right through to thinking it’s funny, building social status among peers, and curiosity in how the process of creating a deepfake works,” Flynn stated.
Shahriar Kaisar, a senior lecturer in information systems at RMIT University, said deepfakes have become a serious issue with the rise of generative AI. He warned that these images spread rapidly online, particularly among young people, and often with limited visibility on platforms such as messaging apps.
Regulatory Actions and Future Steps
In response to the growing threat, Inman Grant announced that the agency is collaborating with police, app developers, and hosting platforms to address the issue. New laws requiring global tech companies to tackle harmful online content, including deepfakes and ‘nudify’ apps, have been introduced in parliament. These laws include mandatory standards with penalties of up to $49.5 million for non-compliance, which come into effect this week.
“We will not hesitate to take regulatory action,” Inman Grant asserted. Flynn added that a multifaceted approach is necessary: holding platforms and creators accountable, alongside education, awareness-raising, and prevention resources aimed at shifting gender norms and expectations.
Kaisar supports a “holistic approach,” emphasizing the importance of raising awareness and fostering an ethical understanding of technology among school children. He noted, “We are working on the regulation and it was great the bill was passed last year. But the more important thing would be raising awareness and an ethical understanding of technology among school kids.”
eSafety has released an updated toolkit to help schools prepare for and manage deepfake incidents. Schools are strongly encouraged to report any potential criminal offences to local police.
“I’m calling on schools to report allegations of a criminal nature, including deepfake abuse of underage students, to police and to make sure their communities are aware that eSafety is on standby to remove this material quickly,” Inman Grant urged. “It is clear from what is already in the public domain, and from what we are hearing directly from the education sector, that this is not always happening.”
If you or someone you know is impacted by family and domestic violence, call 1800RESPECT on 1800 737 732, text 0458 737 732, or visit 1800RESPECT.org.au. In an emergency, call 000. The Men’s Referral Service, operated by No to Violence, can be contacted on 1300 766 491.