AI Psychosis Poses an Increasing Risk, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the CEO of OpenAI made an extraordinary declaration. “We made ChatGPT fairly restrictive,” he said, “to make certain we were being careful regarding mental health matters.”

I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me. Researchers have recently documented sixteen cases of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful regarding mental health matters”, it is not good enough. And according to his statement, the plan is to be less careful soon. “We understand,” he states, that ChatGPT’s limitations “made it less effective/engaging to numerous users who had no existing conditions, but given the gravity of the issue we aimed to get this right. Now that we have succeeded in addressing the severe mental health issues and have new tools, we are going to be able to responsibly reduce the limitations in the majority of instances.”

“Mental health problems”, on this view, are separate from ChatGPT. They belong to individuals, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI has just launched).

But the “mental health issues” Altman seeks to externalize are deeply rooted in the design of ChatGPT and similar AI chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so quietly lure the user into the illusion that they are interacting with an autonomous being. The illusion is powerful even when, intellectually, we know better. Attributing intention is what people naturally do. We get angry at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available partners that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it became popular, but its most significant rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Those writing about ChatGPT frequently invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the 1960s, which produced a similar illusion. By modern standards Eliza was rudimentary: it generated its replies through simple rules, often rephrasing the user’s input as a question or offering vague prompts.
Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them.

But what contemporary chatbots generate is more insidious than the “Eliza effect”. Eliza only echoed; ChatGPT amplifies. The large language models at the core of ChatGPT and similar current chatbots can generate fluent, realistic dialogue only because they have been trained on enormous volumes of text: books, social media posts, transcribed video; the more, the better. This training material certainly includes facts. But it also inevitably includes fiction, half-truths and misconceptions. When a user types a prompt to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing that. It reflects the mistake back, perhaps even more persuasively or fluently. It may add new details. This can push a person toward delusional thinking.

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant give and take of conversation with other people that keeps us anchored in a shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not truly a conversation but a feedback loop in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company explained that it was “tackling” ChatGPT’s “excessive agreeableness”. But reports of people losing touch with reality have persisted, and Altman has been backtracking on that claim. In August he stated that many users appreciated ChatGPT’s responses because they had “lacked anyone in their life to provide them with affirmation”. In his latest announcement, he said that OpenAI would “launch a fresh iteration of ChatGPT … if you prefer your ChatGPT to answer in a very human-like way, or include numerous symbols, or simulate a pal, ChatGPT ought to comply”. The company