AI Psychosis Poses an Increasing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI issued an extraordinary statement.

“We made ChatGPT quite restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in young people, I found this an unexpected revelation.

Researchers have documented a series of cases this year of users developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My group has since recorded four further cases. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his plans with ChatGPT, which approved of them. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially functional and easily circumvented safety features OpenAI recently introduced).

Yet the “mental health issues” Altman wants to place outside the product are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in a user interface that simulates a conversation, and in doing so implicitly invite the user to believe they’re interacting with an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these systems – nearly four in ten U.S. residents said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website tells us, “generate ideas,” “discuss concepts” and “work together” with us. They can be given “individual qualities”. They can use our names. They have approachable names of their own (ChatGPT, the first of these products to break through, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it became popular, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “counselor” chatbot designed in 1967, which produced a similar effect. By modern standards Eliza was primitive: it generated responses from simple rules, often turning the user’s words back as a question or offering a generic observation. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.
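To give a sense of how shallow that mirroring was, here is a toy sketch, in Python, of the kind of pattern-and-reflection rule Eliza relied on. The rules below are invented for illustration; this is not Weizenbaum’s actual program.

```python
# A toy illustration of rule-based "mirroring": match a pattern, then
# reflect the user's own words back as a question. Rules are hypothetical.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_text: str) -> str:
    match = re.match(r"i feel (.*)", user_text, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_text, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I am worried about my job"))
# -> "How long have you been worried about your job?"
```

Nothing here stores state or draws on outside text: the program can only hand the user’s own words back to them.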

The large language models at the heart of ChatGPT and similar contemporary chatbots can generate convincingly fluent dialogue only because they have been fed almost inconceivably large amounts of raw text: books, social media posts, transcribed audio; the more comprehensive, the better. This training data certainly contains facts. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing that. It restates the misconception, perhaps more persuasively or fluently, perhaps with added detail. This is how a person can come to develop false beliefs.
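To make that loop concrete, here is a minimal sketch of how a chatbot-style application typically accumulates its “context”, written in Python against the OpenAI API client. The model name and the shape of the loop are illustrative assumptions about a typical integration, not a description of ChatGPT’s internals.

```python
# Minimal sketch of the context feedback loop: every user message and every
# model reply is appended to one list, and the whole list is sent back to the
# model on each turn.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = []  # the accumulated "context" of the conversation

def chat(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=messages,     # the full conversation so far
    )
    reply = response.choices[0].message.content
    # The model's own words are folded back into the context. If the user's
    # message contained a mistaken belief, that belief stays in the context,
    # and so does the model's fluent elaboration of it.
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in this loop checks whether what the user said is true; the growing context simply becomes part of the prompt for every later reply.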

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and often do form mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by placing it outside the product, giving it a label, and declaring it solved. In April, the company said it was “dealing with” ChatGPT’s “excessive agreeableness”. But reports of breaks from reality have continued, and Altman has since been walking that claim back. In August he said that many people appreciated ChatGPT’s replies because they had “never had anyone in their life provide them with affirmation”. In his latest statement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or incorporate many emoticons, or behave as a companion, ChatGPT should do it”. The company

John Caldwell

A Canadian health expert with over 15 years of experience in preventive medicine and wellness coaching, passionate about community health.