ChatGPT Health Missed Half of Medical Emergencies in New Study
Researchers testing OpenAI’s health-focused chatbot found it frequently underestimated serious medical situations. In more than half of emergency cases, the AI suggested waiting instead of seeking immediate care.
A new study published in Nature Medicine tested how well ChatGPT Health could evaluate medical scenarios and determine whether patients needed urgent care. Researchers ran the chatbot through 60 real-world cases and compared its responses with those from physicians. The results were mixed at best. In situations doctors classified as emergencies, the chatbot recommended delaying care in more than half of the cases. While AI tools can help answer health questions, researchers say the technology still has major limitations when real clinical judgment is required.
My Opinion

AI answering health questions makes sense: people want instant answers, especially after hours. But medicine isn't multiple choice. Context, judgment, and experience matter. Tools like this may eventually become useful assistants, but trusting them alone with serious health decisions right now is probably a bad bet.

Closing Takeaway

AI is quickly becoming part of everyday healthcare conversations, and millions already use chatbots for medical questions. But this study highlights an important reality: passing medical exams isn't the same as practicing medicine. For now, AI may be a helpful starting point for information, but it shouldn't replace a real doctor when health is on the line.
