Many people are turning to generative artificial intelligence (AI) tools such as ChatGPT for health advice because they provide quick, confident, and personalized responses. The trend is gaining momentum as several AI companies, including OpenAI, launch dedicated "health and wellness" tools. The most notable of these is ChatGPT Health, unveiled earlier this month.
ChatGPT Health promises more personalized answers by letting users link medical records and wellness apps, upload diagnostic imaging, and have test results interpreted. However, questions about its safety and functionality remain, particularly in regions such as Australia, where access is currently limited to a waitlist.
The Rise of AI in Health Advice
Data from 2024 indicates that 46% of Australians have recently used an AI tool, with health queries being particularly popular. According to OpenAI, one in four regular ChatGPT users worldwide submits a health-related prompt each week. A 2024 study found that nearly one in ten Australians had asked ChatGPT a health question in the preceding six months, with usage more common among groups who face challenges accessing health information, such as non-English speakers and people with limited health literacy.
Accuracy and Safety Concerns
Despite the growing reliance on AI for health advice, independent research consistently highlights concerns about the accuracy and safety of the information these tools provide, showing that generative AI tools sometimes give unsafe health advice even when they have access to medical records. There have been high-profile instances of unsafe advice from tools including ChatGPT, which in one case allegedly encouraged a user's suicidal thoughts. Google also recently removed several AI-generated health summaries after a Guardian investigation revealed dangerous and misleading advice about blood test results.
What Sets ChatGPT Health Apart?
ChatGPT Health introduces several features aimed at personalizing its responses. Users can connect their accounts to medical records and smartphone apps such as MyFitnessPal, allowing the tool to draw on personal data about diagnoses, blood tests, and other health metrics. OpenAI says this information does not flow back into general ChatGPT, and that ChatGPT Health conversations are protected by stronger security and privacy measures.
OpenAI has also worked with more than 260 clinicians across 60 countries, including Australia, to improve the quality of ChatGPT Health's outputs. In theory, these features could make its responses more personalized, higher quality, and more private than those of general ChatGPT.
Ongoing Risks and Limitations
Despite these advancements, OpenAI acknowledges that ChatGPT Health is not designed to replace medical care and is not intended for diagnosis or treatment. The tool’s accuracy and safety remain largely untested, and it is unclear whether it would be classified and regulated as a medical device in Australia. Additionally, its responses may not align with Australian clinical guidelines or meet the needs of priority populations, such as First Nations people and those from culturally diverse backgrounds.
Guidelines for Using AI in Health Queries
In light of these concerns, it is crucial to approach AI-generated health advice with caution. Higher-risk health questions, such as interpreting symptoms or test results, should always be directed to a healthcare professional. For lower-risk queries, such as understanding medical terms or learning about health conditions, AI can be a useful supplementary resource.
In Australia, individuals can access free advice through a 24/7 national phone service, 1800 MEDICARE, where registered nurses provide guidance on symptoms. Additionally, the Symptom Checker operated by healthdirect offers evidence-based advice and connects users with local services.
The Future of AI in Health
As AI tools become increasingly prevalent, there is a need for clear, reliable, and independent information about their capabilities and limitations. Purpose-built AI health tools have the potential to transform how people manage their health, but they must be designed with accuracy, equity, and transparency in mind. Equipping diverse communities with the knowledge and skills to navigate this new technology safely is essential.
Julie Ayre, Postdoctoral Research Fellow, Sydney Health Literacy Lab, University of Sydney; Adam Dunn, Professor of Biomedical Informatics, University of Sydney; and Kirsten McCaffery, NHMRC Principal Research Fellow, Sydney School of Health, University of Sydney, emphasize the importance of involving communities and clinicians in developing these tools so they meet the needs of all users.