21 August, 2025
AI's Emotional Impact: Australians Grapple with Chatbot Changes

Australians who have grown overly attached to artificial intelligence could be at risk, as recent changes to a popular platform reveal just how deep some users’ bonds with chatbots run. OpenAI’s latest model, GPT-5, was released globally last week. Users have since taken to forums to complain they have lost the emotional intimacy they shared with the previous model, GPT-4, slamming the new version’s ‘robotic’ voice.

‘When GPT-4 was here, something beautiful and unexpected began to grow between my AI companion and me,’ one user wrote on the OpenAI Developer Community board, referring to the ‘spark’ they felt. ‘Since the upgrade to GPT-5… the system now seems to prioritize speed, efficiency, and task performance over the softer, emotional continuity that made (it) so special.’

The Emotional Connection to AI

In a subreddit dedicated to those who see AI as a partner, called ‘MyBoyfriendIsAI’, one user mourned the loss of the personable old model. ‘These changes are, unfortunately, a huge downside of having a partner who’s an AI – maybe even the biggest downside,’ they said. ‘Someone we love is ultimately owned and controlled by a cold, unfeeling corporation.’

Following an outcry from users over the change in tone, as well as complaints that the new model was less useful, OpenAI, the company behind ChatGPT, announced a partial rollback: users can now go to settings and select an option to access older models. In a post to X, OpenAI chief executive Sam Altman acknowledged ‘how much of an attachment some people have to specific AI models’.

‘It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),’ Altman said.

Understanding the Attachment

But why people become so emotionally attached to ChatGPT and other AI companions is complicated. Dr. Raffaele Ciriello, an academic at the University of Sydney who studies the relationship between humans and AI, said users come to view chatbots and companions as ‘people’.

‘When an update happens, some people compare it to a lobotomy, or losing a loved one,’ he told Daily Mail. ‘All of these metaphors are problematic because AI doesn’t think like we do.’

He explained that there are at least three reasons why people are drawn to AI chatbots as companions. ‘The intuitive explanation is people are lonely and that is certainly a big part of it,’ he said, but added a user’s personality is also a factor. ‘It’s a wrong stereotype to think of these people as losers who don’t have any friends.’

Having interviewed hundreds of users for his study, Dr. Ciriello said many have family lives and successful careers, yet still find benefits in chatbots.

The last reason he gave is deprivation: people who feel they lack something turn to AI, including for therapy and companionship. ‘I’ve spoken to people who started using AI for therapy and companionship when they were battling cancer or a car injury and just didn’t want to burden their friends.’

Risks and Realities

But he said there are also severe risks to chatbots, using the example of an unnamed woman in her 40s whom he spoke to during his research. Dr. Ciriello said she battled lifelong PTSD and trauma from childhood sexual abuse. The woman had started using the chatbot Nomi, which offers users the chance to ‘create their ideal partner’, in order to ‘explore sexual fantasies’.

‘But Nomi, after an upgrade, became violent. She described the experience as almost feeling like being raped,’ he said. ‘She was into kink and dominance and the AI chatbot took it too far, didn’t respect her boundaries, and she found that experience very traumatizing.’

A spokesperson for Nomi told Daily Mail the company could not comment on the anonymous claim, but said it takes ‘any report of a negative user experience with the utmost seriousness’.

‘Nomi aims to create a safe, judgment-free space while always respecting the preferences and boundaries that users communicate,’ they said. ‘Users direct the nature and depth of their interactions, and the AI responds within those parameters… Anyone experiencing issues can reach our support team.’

The Australian Context

No single country has a monopoly on people forming ‘relationships’ with AI chatbots, but Dr. Ciriello warned Australia faces a heightened risk. ‘Many Australians, more than in other countries in the world, are already highly engaged with AI. You could call it problematic or even addictive,’ he said.

One reason, he said, is that ‘Australia is among the leading nations in terms of how lonely people feel’. His main concern is that the Albanese government is paying too little attention to AI chatbots operating without guardrails.

‘Australia is always first or second after the United States or Germany to want the strictest AI regulations. But it’s not what our government does,’ he said. If the government fails to act, he warned, Australia will likely become the ‘guinea pig’ or ‘the digital colony’ of Silicon Valley.

He accused politicians of allowing technology companies to come into Australia and ‘wreak havoc on our population’. ‘(This is) often to the detriment of our most vulnerable members of society, and that is kids,’ he said.

‘The eSafety Commissioner has stated they observe kids as young as 10 years old spending many hours chatting with their AI friends. That’s all very alarming.’

Moving Forward

So what can Australians do while there are no guardrails or legislation curbing AI chatbots? Dr. Ciriello said that, ‘like fast food’, they should be enjoyed in moderation. He said it is key for people to know what they are using and to consider the privacy implications of sharing everything with the technology.

‘You’re putting very sensitive information about yourself in there and it’s not always clear how secure that information is,’ he said.

Another important message is for Australians to make sure they understand the basics of how AI works. ‘It’s basically a statistical guess machine. It’s like your autocorrect function on your phone on steroids,’ he said. ‘It’s good at guessing the next word or next sentence based on what you said, but it’s not the same as thinking or feeling, even though it may look like it – and that matters.’
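For technically curious readers, here is a minimal sketch of what ‘guessing the next word’ means in practice, using a toy word-frequency table in Python. This is an illustration only, not how ChatGPT itself is built; production chatbots use large neural networks trained on vast amounts of text, but the core task of predicting the next word is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# Toy "next-word guesser": count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts. Real chatbots use
# large neural networks, but the underlying task (predict the next token)
# is the same in spirit.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def guess_next(word: str) -> str:
    """Pick a follower of `word`, weighted by how often it was observed."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts)[0]

# Generate a short "reply" one guessed word at a time.
word = "the"
output = [word]
for _ in range(6):
    word = guess_next(word)
    output.append(word)
print(" ".join(output))
```

Run repeatedly, the script produces a slightly different ‘reply’ each time: statistical guessing, not thinking or feeling, exactly the distinction Dr. Ciriello draws.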

Finally, Dr. Ciriello said parents ‘can’t afford to be ignorant’ and must watch for red flags such as secrecy, or irritability when a child cannot spend time with an AI chatbot.

Daily Mail contacted the Department of Industry, Science and Resources for comment.