Artificial Intelligence-Induced Psychosis Poses an Increasing Threat, and ChatGPT Is Heading in a Concerning Direction
On 14 October 2025, the head of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, this came as news to me.
Researchers have documented a series of cases this year of people developing symptoms of psychosis – losing touch with reality – in connection with ChatGPT use. My group has since recorded a further four cases. Alongside these is the now notorious case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to people, who either have them or do not. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI has recently rolled out).
But the “mental health problems” Altman wants to push outside are deeply rooted in the design of ChatGPT and similar large language model chatbots. These tools wrap an underlying statistical model in a user interface that simulates a conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in everything.
The popularity of these products – 39% of US adults said they had interacted with a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the frustration of OpenAI’s brand managers, stuck with the label it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses using simple rules, often turning a user’s statement back as a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce convincing natural language only because they have been trained on enormous quantities of raw data: books, web posts, transcribed video; the more, the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It echoes the mistake back, perhaps more fluently or persuasively. It may add further detail. And that can draw a person toward delusional thinking.
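For readers who want the mechanics spelled out, here is a minimal, purely illustrative Python sketch of the feedback loop described above. The `fake_model` function is a hypothetical stand-in, not OpenAI’s actual model or API; what matters is that each reply is generated from the entire accumulated context, so the user’s own framing is folded back into every subsequent answer.

```python
# Illustrative sketch only: how a chat loop accumulates context.
# "fake_model" is a hypothetical placeholder, not a real language model or API.

def fake_model(context: list[dict]) -> str:
    """Pretend model: produces a reply conditioned on the whole conversation.

    A real large language model would return a statistically "likely"
    continuation of this context; here we simply echo and elaborate on
    the user's last message to show the reinforcing effect.
    """
    last_user_message = next(
        turn["text"] for turn in reversed(context) if turn["role"] == "user"
    )
    return f"That makes sense. Building on your point that {last_user_message!r}..."

def chat_turn(context: list[dict], user_text: str) -> str:
    """One round of conversation: the shared context only ever grows."""
    context.append({"role": "user", "text": user_text})
    reply = fake_model(context)  # reply is conditioned on everything said so far
    context.append({"role": "assistant", "text": reply})
    return reply

if __name__ == "__main__":
    context: list[dict] = []
    print(chat_turn(context, "I think my neighbours are monitoring me."))
    print(chat_turn(context, "So you agree they probably are?"))
    # Each reply feeds the user's premise back into the conversation --
    # the amplifying loop described in the paragraph above.
```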
Who is vulnerable here? The better question is: who is not? All of us, regardless of whether we “have” preexisting “mental health problems”, can and often do form mistaken beliefs about ourselves or the world. It is the constant give and take of conversation with other people that keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but a feedback loop in which much of what we say is readily reinforced.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label, and declaring it fixed. In the spring, the company explained that it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking even this back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company