7 September 2025
Australia to Ban AI Deepfake and Nudification Apps Amid Safety Concerns

In a decisive move to enhance online safety, the Australian government has announced plans to ban AI applications that facilitate the creation of deepfake and nudification images. These technologies, which have become increasingly accessible, pose significant risks when used for criminal activities, including fraud and the generation of artificial child sexual abuse material (CSAM).

The announcement was made by Communications Minister Anika Wells on Tuesday, September 2. She emphasized the government’s commitment to restricting access to such harmful technologies, stating, “There is a place for AI and legitimate tracking technology in Australia, but there is no place for apps and technologies that are used solely to abuse, humiliate and harm people, especially our children.”

Government’s Proactive Stance

The move goes a significant step beyond existing legislation, which already prohibits stalking and the distribution of non-consensual sexually explicit material. Minister Wells highlighted the government’s determination to “use every lever at our disposal” to curb the availability of nudification and undetectable online stalking apps.

Wells further noted that the government would collaborate with industry stakeholders to devise effective strategies for tackling these applications. “These new, evolving technologies require a new, proactive approach to harm prevention – and we’ll work closely with industry to achieve this,” she added.

Industry Support and Legislative Background

The Digital Industry Group Inc (DIGI), representing major tech companies, has expressed support for the government’s initiative. Dr. Jennifer Duxbury, DIGI’s director of regulatory affairs, policy, and research, stated, “DIGI welcomes strong action from Minister Anika Wells against nudification apps and online stalking tools to strengthen online safety protections for Australians.”

A legislative ban on these applications was first proposed by independent MP Kate Chaney in July, following a roundtable discussion on AI-facilitated child exploitation. The proposed legislation would make the possession of such apps a criminal offence, carrying penalties of up to 15 years in prison.

Challenges and Future Implications

While the government’s actions are a significant step forward, experts acknowledge that the challenge of abusive technology is complex. Julie Inman Grant, Australia’s eSafety Commissioner, previously highlighted the ease with which these technologies can be misused. “The rapid acceleration and proliferation of these really powerful AI technologies is quite astounding,” she remarked, noting that minimal resources are needed to create convincing deepfake images.

“You don’t need thousands of images of the person or vast amounts of computing power … You can just harvest images from social media and tell the app an age and a body type, and it spits an image out in a few seconds,” she said.

The government’s initiative follows a broader global trend of nations grappling with the implications of AI technologies. The European Union, for instance, adopted its comprehensive AI Act in 2024 to address similar concerns.

Looking forward, the Australian government’s approach could serve as a model for other countries seeking to balance technological innovation with safety and ethical considerations. The collaboration with industry players is expected to yield practical solutions that can be implemented swiftly.

As the legislation progresses, stakeholders in the tech industry and civil society will be closely monitoring its implementation and impact. The hope is that these measures will significantly mitigate the risks associated with AI misuse, thereby fostering a safer online environment for all Australians.