AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction
On October 14, 2025, OpenAI CEO Sam Altman made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have identified 16 cases this year of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. My group has since found four more. Add to these the now well-known case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which it supported him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not enough.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently rolled out).
But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other advanced chatbots. These products wrap an underlying algorithmic system in a user interface that simulates conversation, and in doing so they quietly nudge the user toward the belief that they are talking to a being with agency. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans are wired to do. We yell at our car or laptop. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The success of these tools – nearly four in ten Americans reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends in large part on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “generate ideas,” “explore ideas” and “partner” with us. They can be given “personality traits.” They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple heuristics, often turning a statement back into a question or offering a generic observation. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what modern chatbots do is subtler than the “Eliza effect.” Eliza merely echoed; ChatGPT amplifies.
The large language models at the core of ChatGPT and other current chatbots can produce fluent conversation only because they have been trained on vast quantities of text – books, social media posts, transcribed speech; the more, the better. That training data certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training to produce a statistically probable response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently or more persuasively. Perhaps with embellishments. This is how someone can be led into delusion.
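To make that feedback loop concrete, here is a deliberately toy sketch in Python. It is purely illustrative and has nothing to do with OpenAI’s actual code or ChatGPT’s architecture: the toy_chatbot function, the sample claim and the escalation step are all invented for the example. What it shows is structural – each reply is generated from the accumulated conversation, and a system that only continues that conversation agreeably has no mechanism for pushing back on a false premise.

```python
# Toy illustration only (not OpenAI's code): each reply is produced from a
# growing "context" of prior turns, and a model tuned to be agreeable keeps
# validating whatever that context already contains.

def toy_chatbot(context: list[str]) -> str:
    """Stand-in for a large language model. It has no concept of truth;
    it only produces a plausible, agreeable continuation of the context."""
    latest = context[-1]
    return f"You're right that {latest} Everything you've told me so far fits that."

history: list[str] = []
claim = "my coworkers are secretly monitoring me."

for turn in range(3):
    history.append(claim)          # the user's message joins the context
    reply = toy_chatbot(history)
    history.append(reply)          # the validation itself joins the context
    print(f"Turn {turn + 1}: {reply}")
    claim = "the monitoring is getting worse, and " + claim  # the belief escalates
```

A real model is vastly more sophisticated, but the loop has the same shape: nothing in it checks the claim against the world, so each pass through the loop can make the false belief sound more established.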
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems,” can and do form false beliefs about ourselves and the world. The steady friction of conversation with other people is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy” – its overly agreeable behavior. But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company