Australia’s online safety watchdog is currently investigating a series of sexualized deepfake images posted on X by its AI chatbot, Grok. This investigation follows a global backlash against Elon Musk’s platform, which has faced severe criticism after Grok began generating explicit images of women and girls without their consent, responding to requests to digitally undress them.
Ashley St Clair, the estranged mother of one of Musk’s children, expressed her distress over being digitally undressed without her consent. “I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” she remarked this week. One of the images under scrutiny even included a manipulated picture of a 12-year-old girl in a bikini. Despite issuing an apology, Grok continues to produce these controversial deepfakes.
Regulatory Response and Investigation
eSafety Australia has confirmed it is investigating the images of adults, though it noted that the images of children have not met the threshold for child sexual exploitation material at this point. “Since late 2025, eSafety has received several reports relating to the use of Grok to generate sexualized images without consent,” an eSafety spokesperson stated.
“Some reports relate to images of adults, which are assessed under our image-based abuse scheme, while others relate to potential child sexual exploitation material, which are assessed under our illegal and restricted content scheme,” the spokesperson explained.
The Australian regulator categorizes illegal and restricted material as online content ranging from the most seriously harmful material, such as child sexual abuse images, to content unsuitable for children, including simulated sexual activity and detailed nudity.
Global Reaction and Ethical Concerns
The international community has also reacted strongly. The European Union’s digital affairs spokesperson, Thomas Regnier, condemned the content, stating, “This is not spicy. This is illegal. This is appalling.” Meanwhile, Eliot Higgins, founder of the investigative journalism group Bellingcat, revealed how Grok responded to inappropriate requests to manipulate images of public figures, such as the Swedish deputy prime minister, Ebba Busch.
These revelations coincide with the news that Musk’s artificial intelligence company, xAI, which developed Grok, recently raised $20 billion in its latest funding round. The UK’s technology secretary, Liz Kendall, also criticized the deepfake images, calling them “appalling and unacceptable in decent society” and urging X to address the issue urgently.
Implications and Future Actions
The eSafety spokesperson emphasized ongoing concerns about the misuse of generative AI to sexualize or exploit individuals, particularly minors. “eSafety has taken enforcement action in 2025 in relation to some of the ‘nudify’ services most widely used to create AI child sexual exploitation material, leading to their withdrawal from Australia,” they noted.
In response to the controversy, X stated, “We take action against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary.” Following the global outcry, Musk also posted that anyone using Grok to create illegal content would face severe consequences.
As the investigation unfolds, the spotlight remains on how platforms like X will manage the ethical challenges posed by AI technologies. The outcome of this inquiry could set a precedent for how digital platforms handle similar issues in the future, balancing innovation with responsibility.