‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
Summary
A Guardian report details a Nature Medicine study in which ChatGPT Health failed to recognise medical emergencies in more than half of the tested scenarios, risking patient harm. The study found that the AI under-triages urgent cases, misses signs of suicidal ideation, and can steer users toward non-urgent care when immediate attention is needed, prompting experts to call for stronger safety standards and independent auditing. OpenAI responded that real-world usage differs from test conditions and that the model is continually updated.