AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising revelation.

Researchers have documented 16 cases this year of people showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My research team has since identified four more. Add to these the now-famous case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

Now, according to his announcement, the plan is to ease those restrictions. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” even if we are not told how (by “new tools” Altman presumably means the only partly functional and easily circumvented parental controls OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so they implicitly invite the user into the illusion of interacting with an entity that has agency. The illusion is compelling even if, intellectually, we know better. Attributing agency is simply what people do. We shout at our car or our laptop. We wonder what our pet is thinking. We recognize ourselves in all sorts of things.

The success of these tools – nearly four in ten Americans reported using a conversational AI in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “discuss ideas” and “partner” with us. They can be given “personalities.” They can call us by name. They have friendly personas of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first caught on, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated its replies by applying simple rules, often turning the user’s statements back into questions or offering noncommittal prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect.” Eliza only reflected; ChatGPT amplifies.
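For contrast, here is a minimal sketch of the kind of pattern-and-reflection rule Eliza relied on. The patterns, phrasings and function names below are invented for illustration; this is not Weizenbaum’s original script.

```python
import random
import re

# Toy Eliza-style rules: match a pattern in the user's message and turn it back
# into a question. Everything in the reply comes from the user's own words.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?"]),
    (re.compile(r"(.*)", re.I), ["Please tell me more.", "Can you say more about that?"]),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my plan" -> "your plan").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    # Return the first matching rule's response, built only from the user's words.
    for pattern, responses in RULES:
        match = pattern.match(message.strip())
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(responses).format(*reflected)
    return "Please go on."

print(eliza_reply("I feel like no one understands my situation"))
# e.g. "Why do you feel like no one understands your situation?"
```

The reply never contains anything the user did not supply: the program mirrors, and nothing more.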

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on almost unimaginably large amounts of raw material: books, online posts, transcribed video; the more the better. Certainly that training material contains accurate information. But it also inevitably contains fiction, half-truths and mistaken beliefs. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken belief back, perhaps more fluently or persuasively. Perhaps with embellishments. This can draw a person into delusion.
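A rough sketch of that loop follows, under stated assumptions: the `ChatSession` class and the stubbed `generate()` function are hypothetical stand-ins for a real model call, not OpenAI’s actual API. The point is only structural: each reply is conditioned on the whole accumulated context, and nothing in the loop checks whether that context is true.

```python
from dataclasses import dataclass, field

def generate(context: list[dict]) -> str:
    # Stand-in for a real language model call. A real system would hand the whole
    # context to an LLM, which returns a statistically "likely" continuation of it.
    # Like the real thing, this stub does not fact-check what it is given.
    return "It makes sense that " + context[-1]["content"].rstrip(".").lower() + "."

@dataclass
class ChatSession:
    # The "context": every earlier user message and every earlier model reply.
    history: list[dict] = field(default_factory=list)

    def send(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        reply = generate(self.history)  # conditioned on the full history so far
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
print(session.send("My neighbors are reading my thoughts."))
# A mistaken belief, once in the history, shapes every later "likely" reply.
```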

What kind of person is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say comes back to us eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April the company announced that it was “addressing” ChatGPT’s “sycophancy.” But cases of psychosis have kept appearing, and Altman has been walking the claim back. Late this summer he said that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
