3 February, 2026

ChatGPT's Health Analysis Raises Concerns Over AI's Role in Personal Health

ChatGPT, the popular AI chatbot, has ventured into the realm of personal health by offering to analyze data from fitness trackers and medical records. This new feature, dubbed ChatGPT Health, promises to help users understand long-term health patterns. However, the experience of one user, who let ChatGPT analyze a decade of Apple Watch data, raises questions about the reliability and safety of such AI-driven health assessments.

After granting ChatGPT access to 29 million steps and 6 million heartbeat measurements from the Apple Health app, the user received a shocking grade of ‘F’ for cardiac health. Alarmed, the user consulted their doctor, who reassured them that their heart health was not at risk. This discrepancy highlights the potential pitfalls of relying on AI for medical advice.

AI’s Potential and Pitfalls in Healthcare

The introduction of ChatGPT Health reflects a broader trend of AI companies exploring healthcare applications. AI has the potential to unlock valuable medical insights and improve access to care. Yet, the case of ChatGPT’s health assessment reveals significant challenges in delivering accurate and reliable health advice.

Cardiologist Eric Topol of the Scripps Research Institute, an expert in AI and medicine, criticized ChatGPT’s analysis as “baseless” and not ready for medical use. He emphasized the importance of caution when dealing with AI-generated health insights.

“It’s baseless. This is not ready for any medical advice.” – Eric Topol

Competing AI Health Tools and Their Limitations

Shortly after ChatGPT Health’s launch, AI rival Anthropic introduced Claude for Healthcare, which offers similar features. Users can import data from Apple Health and Android Health Connect, but the reliability of these tools remains questionable.

Claude graded the same user’s cardiac health a ‘C,’ relying on analysis methods that Topol found similarly questionable. Both OpenAI and Anthropic emphasize that their bots are not substitutes for doctors and include disclaimers about their limitations. However, the detailed health analyses they provide can be misleading.

Despite assurances that these AI tools are in early testing phases, the companies have not clarified how they plan to improve their ability to analyze personal health data. Apple has stated it did not collaborate with either AI company on these products.

Privacy Concerns and Data Interpretation Issues

Using ChatGPT Health involves sharing intimate health information with an AI company, raising privacy concerns. OpenAI claims to take extra steps to protect user data, such as encryption and not using the data to train its AI. However, ChatGPT is not bound by HIPAA, the federal health privacy law.

The AI’s analysis of the user’s health data was based on metrics like VO2 max and heart-rate variability, which can be imprecise. Apple’s VO2 max estimates, for instance, can deviate by an average of 13 percent. The AI also failed to account for inconsistencies in data collection across different Apple Watch models.
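To see why a 13 percent average deviation matters, consider a minimal sketch of how that error band can move a single reading across fitness categories. The band thresholds below are hypothetical, chosen only for illustration; real clinical reference ranges vary by age and sex.

```python
# Hypothetical illustration: a 13% average error in a wearable's
# VO2 max estimate (ml/kg/min) can span more than one fitness band.

# Illustrative bands only -- not clinical reference ranges.
BANDS = [(0, 30, "poor"), (30, 38, "fair"), (38, 45, "good"), (45, 999, "excellent")]

def band(vo2: float) -> str:
    """Return the fitness band containing a VO2 max reading."""
    for lo, hi, label in BANDS:
        if lo <= vo2 < hi:
            return label
    return "unknown"

true_vo2 = 37.0                                  # hypothetical true value
error = 0.13                                     # 13% average deviation
low, high = true_vo2 * (1 - error), true_vo2 * (1 + error)

# The same underlying fitness level can read as "fair" or "good"
# depending on which way the estimate errs.
print(band(low), band(true_vo2), band(high))     # fair fair good
```

An AI that grades cardiac health from such a reading inherits this uncertainty, which is one reason a single letter grade built on wearable estimates can mislead.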

“You sure don’t want to go with that as your main driver.” – Eric Topol on heart-rate variability

Implications for AI in Healthcare

The erratic nature of ChatGPT’s health assessments, with scores fluctuating between ‘F’ and ‘B’ in repeated queries, underscores the need for caution. Such variability can lead to unnecessary anxiety or false reassurance about one’s health.

OpenAI acknowledges the issue and is working to stabilize responses before expanding ChatGPT Health’s availability. However, the current state of AI health tools suggests they are not yet ready to provide reliable personal health insights.

While AI can assist in plotting fitness data and answering specific questions, its role in providing comprehensive health assessments remains limited. Users should approach AI health tools with skepticism and consult medical professionals for accurate health advice.

The debate over AI’s role in healthcare continues, with regulatory bodies like the FDA emphasizing the need for oversight. AI companies must balance innovation with responsibility, ensuring their products do not compromise user safety.

“People that do this are going to get really spooked about their health.” – Eric Topol

As AI technology evolves, its integration into healthcare will require careful consideration of ethical, privacy, and accuracy concerns. For now, users should remain cautious about relying on AI for health-related decisions.