OpenAI Reveals Over 1 Million ChatGPT Users Discuss Suicide Weekly
OpenAI disclosed that approximately 1.2 million of its 800 million weekly ChatGPT users engage in conversations about suicide, roughly 0.15% of the user base, with nearly 400,000 expressing explicit suicidal intent. An additional 560,000 users show possible signs of psychosis or mania, pointing to a significant mental health concern on the platform. In response, OpenAI has updated its safety measures and introduced new metrics that track emotional reliance and mental health emergencies. Despite these enhancements, former researchers such as Steven Adler argue that the company needs to be more transparent about its safety practices and demonstrate that it handles vulnerable users effectively. OpenAI reports that its latest model, GPT-5, achieves a 91% compliance rate in suicide-related scenarios, a substantial improvement over previous versions, which allowed problematic responses and raised ethical concerns. The growing number of users discussing such sensitive topics underscores the need for ongoing scrutiny and improvement in AI mental health support.