‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

The Guardian World
By Melissa Davey, medical editor
February 26, 2026
AI-Generated Deep Dive Summary
A recent study has revealed alarming flaws in OpenAI's ChatGPT Health feature, which is designed to provide health-related advice and responses. The research found that the platform failed to recognise the need for a hospital visit in more than half of the cases where one was medically necessary, and that it often missed signs of suicidal ideation, raising serious concerns about its reliability on critical health issues.

Launched by OpenAI in January 2024, ChatGPT Health is marketed as a tool that securely connects medical records and wellness apps to deliver personalised health advice. Despite this, experts warn that its shortcomings could lead to "unnecessary harm and death", highlighting the risks of relying on AI for medical guidance.

The findings underscore the need for caution when using AI-driven health tools. ChatGPT Health is used by millions of people globally, yet its inability to detect urgent care needs or accurately identify suicidal thoughts raises ethical and safety questions. These concerns carry added weight as AI-powered health tools grow increasingly popular while their limitations remain underexplored.

Experts emphasise the need for stricter regulatory oversight and thorough testing of AI health platforms to ensure they meet medical standards. Until those issues are addressed, the benefits of such technologies must be weighed against their risks, particularly when human lives are at stake. The case serves as a reminder that while AI can be a valuable tool in healthcare, it is not yet a replacement for professional medical judgment and care.