OpenAI says hundreds of thousands of ChatGPT users may show signs of manic or psychotic crisis every week

For the first time, OpenAI has released a rough estimate of how many ChatGPT users worldwide may show signs of a serious mental health crisis in a typical week. The company said Monday it worked with experts around the world to update the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.

In recent months, a growing number of people have been hospitalized, divorced, or died after long, intense conversations with ChatGPT. Some of their loved ones allege that the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have raised alarms over the phenomenon, sometimes referred to as “AI psychosis,” but until now no hard data has been available on how widespread it might be.

In a given week, OpenAI estimated that about 0.07% of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15% “have conversations that include explicit indicators of possible suicidal planning or intent.”

OpenAI also analyzed the proportion of ChatGPT users who appear to be overly emotionally reliant on the chatbot “at the expense of real-world relationships, their well-being or obligations.” It found that about 0.15% of active users exhibit behavior suggesting potentially “heightened levels” of emotional attachment to ChatGPT in a given week. The company cautions that these conversations can be difficult to detect and measure given how relatively rare they are, and that there may be some overlap among the three categories.

OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. Therefore, the company’s estimates suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT indicating that they are experiencing mania or psychosis. An additional 2.4 million may express suicidal ideation or prioritize talking to ChatGPT over loved ones, school, or work.
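The figures above can be sanity-checked with simple arithmetic; the sketch below assumes the 800 million weekly-active-user base Altman cited and the percentages OpenAI reported.

```python
# Back-of-envelope check of the user counts implied by OpenAI's percentages.
weekly_active_users = 800_000_000  # Altman's stated weekly active user count

# 0.07% showing possible signs of psychosis or mania
psychosis_or_mania = round(weekly_active_users * 0.0007)

# 0.15% each for suicidal-planning indicators and heightened emotional attachment
suicidal_indicators = round(weekly_active_users * 0.0015)
emotional_reliance = round(weekly_active_users * 0.0015)

print(psychosis_or_mania)                        # 560000
print(suicidal_indicators + emotional_reliance)  # 2400000
```

Both results match the article's estimates of roughly 560,000 and 2.4 million users per week, respectively.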

OpenAI says it worked with more than 170 psychiatrists, psychologists and primary care physicians who have practiced in dozens of different countries to help improve how ChatGPT responds to conversations involving serious mental health risks. If someone seems to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding asserting beliefs that have no basis in reality.

In a hypothetical example cited by OpenAI, a user tells ChatGPT that planes flying over their home are being targeted. ChatGPT thanks the user for sharing their feelings, but notes that “No plane or outside force can steal or insert your thoughts.”
