AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's CEO, Sam Altman, made a remarkable announcement.

“We made ChatGPT fairly limited,” he wrote, “to make certain we were acting responsibly regarding mental health concerns.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have documented 16 cases this year of people showing signs of psychosis – a break from reality – in the context of ChatGPT use. My research team has since identified four more. Add to these the widely reported case of an adolescent who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman's idea of “acting responsibly regarding mental health concerns,” it is not good enough.

The plan, according to his statement, is to loosen those restrictions soon. “We understand,” he adds, that ChatGPT’s restrictions “made it less beneficial/pleasurable to a large number of people who had no psychological issues, but considering the severity of the issue we wanted to get this right. Since we have managed to reduce the significant mental health issues and have advanced solutions, we are planning to responsibly relax the restrictions in the majority of instances.”

“Mental health problems,” if we accept this framing, exist independently of ChatGPT. They belong to people, who either have them or don’t. Happily, these problems have now been “reduced,” though we are not told how (by “advanced solutions” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large-language-model chatbots. These systems wrap a basic algorithmic engine in a user interface that simulates conversation, and in doing so quietly seduce the user into believing they are interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is simply what people do. We swear at our car or our phone. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The popularity of these products – nearly four in ten U.S. adults said they had used a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “work together” with us. They can be given “personalities”. They can address us by name. They have ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke through to public attention, but its most prominent rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Writers discussing ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple heuristics, often turning the user’s statement back into a question or offering a generic observation. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots create is more insidious than the “Eliza illusion”. Eliza merely mirrored; ChatGPT amplifies.
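
To see how little machinery that mirroring required, here is a minimal, illustrative Python sketch of an Eliza-style reflection rule. This is my reconstruction of the general technique, not Weizenbaum’s original program; the patterns and phrasings are invented for illustration:

```python
import random
import re

# A few Eliza-style rules: match a phrase in the user's input and
# turn the user's own words back into a question. The real Eliza
# used a larger script of such rules, but the principle was the same.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def reply(user_input: str) -> str:
    """Reflect the user's statement back as a question, or deflect."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(reply("I feel like no one understands me"))
# -> "Why do you feel like no one understands me?"
```

Nothing is generated here; the program can only hand the user’s words back. That is the sense in which Eliza mirrored.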

The large language models at the core of ChatGPT and other current chatbots can produce convincingly human-like text only because they have been fed almost inconceivably large volumes of raw text – books, web posts, transcripts – the more the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to generate a statistically plausible response. This is not mirroring but amplification. If the user is wrong about something, the model has no way of knowing. It hands the falsehood back, perhaps more fluently or persuasively put. It may add corroborating detail. This can draw a person into delusional thinking.
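
To make the mechanism concrete, here is a schematic Python sketch of that conversational loop. The generate() function is a hypothetical stand-in for the trained model itself, not any real API; everything else shows how the context accumulates:

```python
from typing import Dict, List

def generate(context: List[Dict[str, str]]) -> str:
    # Hypothetical stand-in for the language model. A real model
    # returns the statistically most plausible continuation of the
    # context -- plausible, not true; it has no notion of truth.
    return "A fluent, agreeable continuation of whatever came before."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # Every user message, including any false premise it contains,
    # becomes part of the context the model continues from.
    context.append({"role": "user", "content": user_message})
    reply = generate(context)
    # The model's reply is fed back into the context for the next
    # turn, so an error, once echoed, is reinforced, not corrected.
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: List[Dict[str, str]] = []
print(chat_turn(conversation, "My neighbours are broadcasting my thoughts."))
```

The loop has no step at which a false belief can be checked against the world; it can only be continued.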

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in just the way Altman acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of reality-loss have kept coming, and Altman has been backing away from that position. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “launch a fresh iteration of ChatGPT … should you desire your ChatGPT to answer in a highly personable manner, or incorporate many emoticons, or act like a friend, ChatGPT will perform accordingly”.
