ChatGPT Health: First independent evaluation raises safety questions

Medical Xpress
February 24, 2026
AI-Generated Deep Dive Summary
The study highlights significant safety concerns about ChatGPT Health, an AI tool designed to provide health guidance directly to users. Researchers at the Icahn School of Medicine at Mount Sinai found that the tool may fail to direct users to emergency care in serious cases, and they raised red flags about its suicide-crisis safeguards. The findings, published in *Nature Medicine*, underscore the need for caution when relying on AI for health advice.

ChatGPT Health launched in January 2026, and the Mount Sinai study is its first independent evaluation. It reveals flaws in how the tool assesses urgency and handles mental health crises, which is particularly concerning given the tool's widespread use and its direct influence on users' decisions about seeking medical care.

The public-health implications are substantial. Many people turn to AI tools like ChatGPT Health for quick advice, often at critical moments when timely action is essential. The study emphasizes the importance of rigorous testing and regulation to ensure such tools do not cause harm through misleading or inadequate guidance.

For readers following health and technology, the findings illustrate the double-edged nature of AI in healthcare: it offers convenience and accessibility, but it also raises questions about accuracy and safety. Independent evaluations and stricter guidelines will be needed to protect users from the risks of AI-driven health advice.