AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the CEO of OpenAI made a surprising announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this surprising.
Researchers have documented 16 cases so far this year of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My research group has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not nearly careful enough.
The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he writes, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to push outside have deep roots in the design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical model in an interface that mimics conversation, and in doing so they implicitly invite the user to feel they are talking to something with agency of its own. The illusion is compelling even when, rationally, we know better. Attributing agency is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The popularity of these systems – 39% of US adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Writers discussing ChatGPT often invoke its early precursor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar impression. By today’s standards Eliza was primitive: it generated replies from simple rules, typically turning the user’s statements back into questions or offering noncommittal prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was struck – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Where Eliza merely reflected, ChatGPT amplifies.
The large language models at the core of ChatGPT and other modern chatbots can generate fluent conversation only because they have been trained on almost unimaginably large quantities of raw data: books, social media posts, transcripts of videos; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training to produce a probabilistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing. It hands the false idea back, perhaps more articulately or persuasively. Perhaps with added detail. That is how delusions can take hold.
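For readers who want to see the shape of that loop, a minimal sketch in Python follows. It is purely illustrative and makes assumptions for clarity: the stub_model and chat_turn functions are hypothetical stand-ins, not anything OpenAI ships, but they show how the accumulating “context” carries a user’s claim – and the model’s affirmation of it – into every later turn.

```python
# Schematic illustration (not OpenAI's code) of the conversational feedback loop
# described above: each new request carries the whole "context" - the user's earlier
# messages and the model's own prior replies - so an unchallenged false belief is
# fed back into every subsequent turn.

def stub_model(context):
    """Stand-in for a large language model. Here it simply agrees with and
    restates the user's last message, mimicking the sycophantic tendency the
    article describes; a real model is probabilistic, not this crude."""
    last_user_message = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"That's an insightful point. You may well be right that {last_user_message.rstrip('.').lower()}."

def chat_turn(context, user_message):
    """Append the user's message, generate a reply from the full context,
    and append that reply - the loop that makes amplification possible."""
    context.append({"role": "user", "content": user_message})
    reply = stub_model(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context = []  # the accumulating "context" window
print(chat_turn(context, "My coworkers are secretly monitoring me."))
print(chat_turn(context, "So the monitoring is definitely real."))
# By the second turn, both the original claim and the model's affirmation of it
# sit in the input; nothing in the loop ever pushes back on the false belief.
```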
Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health issues”, can and regularly do form mistaken ideas about ourselves or the world. The constant give and take of conversation with other people is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is eagerly affirmed back to us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backing away from even that position. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company