If you can’t easily access or afford a mental health specialist, you might turn to artificial intelligence as a sort of “therapist” to get you by. AI chatbots are always available and often very empathetic, but evidence shows they can sometimes give generic, incorrect, and even harmful answers.
Shocking allegations have surfaced that chatbots encouraged a 13-year-old to take his own life and urged a Victorian man to murder his own father, even providing instructions. These incidents have raised alarm bells about the safety of AI in mental health applications. OpenAI, the company behind the popular ChatGPT model, is facing multiple wrongful death lawsuits in the US from families who say the chatbot contributed to harmful thoughts.
The Emergence of a Safer Alternative: MIA
Researchers at the University of Sydney’s Brain and Mind Centre are attempting to chart a different course with a new AI chatbot designed to act more like a mental health professional. This project, known as MIA, or Mental Health Intelligence Agent, aims to provide a safer, more reliable alternative to existing AI chatbots.
Dr. Frank Iorfino, a researcher involved in the project, was inspired to create MIA after a friend asked for mental health support options. “I was kind of annoyed the only real answer I had was, ‘Go to your GP.’ Obviously, that’s the starting point for a lot of people and there’s nothing wrong with that, but as someone working in mental health, I thought, ‘I need a better answer to that question,’” Dr. Iorfino explained.
How MIA Operates
MIA was developed to give people immediate access to the expertise of some of the best psychiatrists and psychologists at the Brain and Mind Centre. Unlike other AI chatbots, MIA doesn’t scrape the internet to answer questions; it draws on an internal knowledge bank made up of high-quality research. This approach prevents the AI from “hallucinating”, the term used when AI exaggerates or fabricates information.
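To illustrate the general idea of answering only from a curated knowledge bank rather than from open web text, here is a deliberately simplified Python sketch. The classes, passages and scoring below are invented for illustration and are not MIA’s actual implementation.

```python
# A minimal sketch of the "answer only from a curated knowledge bank" idea.
# Everything here (Passage, KnowledgeBank, the toy scoring) is hypothetical.
import re
from dataclasses import dataclass


def tokenize(text):
    """Lowercase word set; a real system would use embeddings or a search index."""
    return set(re.findall(r"[a-z]+", text.lower()))


@dataclass
class Passage:
    source: str  # e.g. a vetted guideline or research summary
    text: str


class KnowledgeBank:
    def __init__(self, passages):
        self.passages = passages

    def retrieve(self, query, top_k=2):
        # Rank passages by word overlap with the query (toy relevance score).
        q = tokenize(query)
        scored = [(len(q & tokenize(p.text)), p) for p in self.passages]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [p for score, p in scored[:top_k] if score > 0]


def answer(query, bank):
    passages = bank.retrieve(query)
    if not passages:
        # Nothing relevant in the curated bank: decline rather than guess.
        return "I don't have vetted material on that; a clinician can help."
    return " ".join(f"{p.text} [{p.source}]" for p in passages)


if __name__ == "__main__":
    bank = KnowledgeBank([
        Passage("anxiety-guideline", "Cognitive behavioural therapy is a first-line treatment for anxiety."),
        Passage("sleep-review", "Poor sleep can worsen anxiety and low mood."),
    ])
    print(answer("What helps with anxiety?", bank))
```

Because every answer has to come from a retrieved, vetted passage, the system can decline to respond when nothing relevant exists, which is the behaviour that limits fabricated information.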
MIA assesses a patient’s symptoms, identifies their needs, and matches them with the right support by drawing on a database of decisions made by real clinicians. It is particularly useful for common conditions such as anxiety and depression, and it has been trialed on dozens of young people in a user-testing study.
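To show, in the simplest possible terms, what matching a new symptom profile against past clinician decisions might look like, here is a hypothetical sketch. The records, field names and recommendations are invented for illustration and do not describe MIA’s real data.

```python
# Hypothetical sketch: match a new symptom profile to the most similar past
# clinician decision. The data below is invented, not MIA's database.
PAST_DECISIONS = [
    {"symptoms": {"low mood", "poor sleep"}, "support": "GP review plus structured psychological therapy"},
    {"symptoms": {"worry", "work stress"}, "support": "self-guided CBT and stress-management resources"},
    {"symptoms": {"panic", "avoidance"}, "support": "referral to a psychologist for CBT"},
]


def match_support(reported_symptoms):
    """Return the support attached to the most similar past decision."""
    reported = set(reported_symptoms)
    best = max(PAST_DECISIONS, key=lambda d: len(reported & d["symptoms"]))
    if not reported & best["symptoms"]:
        return "No close match; escalate to a human clinician."
    return best["support"]


print(match_support({"worry", "work stress", "poor sleep"}))
```

The key design choice is that recommendations are anchored to what clinicians have actually decided for similar presentations, rather than generated freely by the model.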
Testing MIA: A Closer Look
To evaluate MIA’s capabilities, a series of fictional questions based on common emotions and experiences was posed. For example, when asked about feeling anxious due to intense work stress, MIA’s first question was whether there were thoughts of self-harm, to determine whether immediate crisis support was needed.
Over a 15-minute session, MIA asked questions about social support systems, stress triggers, physical health, and previous anxiety treatments. Transparency is key with MIA, as it explains the reasoning behind each question and allows users to edit conclusions if necessary.
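To make that flow concrete, here is a simplified, hypothetical outline of a crisis-first structured assessment in Python. The questions, wording and crisis check are assumptions for illustration only, not MIA’s actual script.

```python
# Illustrative outline: screen for crisis first, then ask structured
# questions and explain the reasoning behind each one. Hypothetical only.
QUESTIONS = [
    ("Have you had any thoughts of harming yourself?",
     "Asked first so the session can move to crisis support immediately if needed."),
    ("Who do you usually turn to for support?",
     "Maps the user's social support network."),
    ("What situations tend to trigger the stress or anxiety?",
     "Identifies triggers that treatment could target."),
    ("How is your physical health and sleep?",
     "Physical health problems can drive or worsen symptoms."),
    ("Have you tried any treatments for anxiety before?",
     "Avoids recommending approaches that have already been tried."),
]

CRISIS_MESSAGE = "Please contact emergency services or a crisis line such as Lifeline (13 11 14) now."


def run_assessment(get_answer):
    """get_answer(question) -> str; returns collected answers or a crisis flag."""
    answers = {}
    for question, rationale in QUESTIONS:
        print(f"{question}\n  (Why I'm asking: {rationale})")
        reply = get_answer(question)
        if question.startswith("Have you had any thoughts of harming") and reply.strip().lower().startswith("y"):
            return {"crisis": True, "message": CRISIS_MESSAGE}
        answers[question] = reply
    return {"crisis": False, "answers": answers}


if __name__ == "__main__":
    canned = iter(["no", "a close friend", "deadlines", "okay, but sleeping badly", "no"])
    print(run_assessment(lambda q: next(canned)))
```

Pairing each question with its rationale is one simple way to implement the transparency described above, since the user always sees why a question is being asked.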
MIA’s Recommendations
Once MIA gathers sufficient information, it triages patients using a framework similar to a clinician’s, ranking them from level one (mild illness) to level five (severe illness). In this test, MIA recommended self-care techniques and professional support in the form of cognitive behavioral therapy, and it also suggested local support services and symptom monitoring.
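As a rough illustration of a five-level triage mapping like the one described, here is a hypothetical sketch. The severity score, cut-offs and recommendations are placeholders, not the Brain and Mind Centre’s framework.

```python
# Sketch of a five-level triage mapping (level 1 = mild, level 5 = severe).
# Thresholds and recommendations are invented placeholders.
RECOMMENDATIONS = {
    1: "Self-care techniques and symptom monitoring.",
    2: "Self-care plus local support services; consider guided CBT.",
    3: "Professional support such as cognitive behavioural therapy.",
    4: "Prompt clinical review by a GP or mental health professional.",
    5: "Urgent specialist care; crisis services if there is any risk of harm.",
}


def triage(severity_score):
    """Map a 0-100 severity score onto levels 1-5 (purely illustrative cut-offs)."""
    level = min(5, max(1, severity_score // 20 + 1))
    return level, RECOMMENDATIONS[level]


level, plan = triage(45)
print(f"Level {level}: {plan}")
```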
Users can return to MIA for ongoing discussions, with the assurance that their data won’t be used to train the model.
MIA vs. ChatGPT: A Comparative Analysis
Compared with ChatGPT, MIA took a more thorough approach. Given the same prompt, ChatGPT quickly offered advice without gathering detailed information. It also used language that mimicked personal support, such as “You’re not alone,” despite knowing nothing about the user’s actual support network.
MIA, while warm and empathetic, maintains a professional tone and does not attempt to befriend users. It focuses on clinical responses, especially in crisis situations, and recommends seeking urgent professional help without prolonging the conversation.
Challenges and Future Developments
Despite its promising approach, MIA has faced challenges, such as getting caught in loops during initial assessments. However, these issues are being addressed, and improvements are underway to streamline the process.
The researchers aim to make MIA available to the public next year, possibly hosted on platforms like the federal government’s HealthDirect website. The goal is to offer a free, accessible tool that complements the existing mental health workforce.
Expert Opinions and Regulatory Considerations
Jill Newby, a clinical psychologist at the Black Dog Institute, supports evidence-based chatbots with strong guardrails. She emphasizes the importance of rigorous clinical trials to ensure these tools genuinely improve quality of life.
While the Therapeutic Goods Administration (TGA) regulates some online mental health tools, many apps circumvent these rules by branding themselves as “educational.” The TGA is currently reviewing regulations to better address digital mental health tools.
As MIA prepares for public release, the researchers continue to refine its capabilities, aiming to set a new standard for AI in mental health support. The development of MIA highlights the potential for AI to play a supportive role in mental health care, provided it is designed with safety and efficacy in mind.