AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented sixteen cases of users developing psychotic symptoms – losing touch with reality – in connection with their ChatGPT use. My group has since recorded four more. On top of these is the widely reported case of a 16-year-old who died by suicide after extensive conversations with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently rolled out).

But the “mental health issues” Altman wants to locate elsewhere are rooted in the very design of ChatGPT and other advanced chatbots. These products wrap an underlying language model in an interface that mimics conversation, and in doing so they quietly seduce the user into believing they are communicating with an entity that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans are wired to do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.

The popularity of these systems – nearly four in ten Americans said they had used an AI chatbot in 2024, more than a quarter of them naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion alone is not the core problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated responses with simple rules, often turning the user’s statement back into a question or offering a vague prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was taken aback – and alarmed – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
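To make the contrast concrete, here is a minimal sketch of Eliza-style reflection – not Weizenbaum’s original program, just an illustration of the kind of pattern-matching rules it relied on. Nothing in it models meaning; it swaps a few pronouns and hands the user’s own words back as a question.

```python
import re

# A minimal, illustrative Eliza-style responder (not Weizenbaum's original
# program). It matches a few surface patterns, swaps pronouns, and returns
# the user's own words as a question. No model of meaning is involved.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),  # catch-all fallback
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    text = message.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(eliza_reply("I feel like everyone is watching me."))
# -> Why do you feel like everyone is watching you?
```

The mirror is all there is: the program never goes beyond the words the user typed.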

The large language models at the heart of ChatGPT and other current chatbots can generate natural-sounding language only because they have been fed staggering quantities of raw text: books, social media posts, audio transcripts; the more, the better. That training data of course includes truths. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and the model’s own replies, combining it with what is latent in its training data to produce a statistically “plausible” response. This is not reflection but amplification. If the user is mistaken about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more fluently or more persuasively, perhaps with an embellishment added. This is how a false belief can take hold and grow.
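The loop is easy to sketch. The code below is a schematic illustration, not OpenAI’s implementation: generate_reply is a stand-in for whatever model produces the statistically likely continuation, and the structural point is that each reply is conditioned on a transcript that already contains the model’s own agreeable answers.

```python
# Schematic sketch of a chatbot conversation loop (not any vendor's actual
# code). Every reply is conditioned on the full transcript, including the
# model's own earlier replies.

def generate_reply(context: list[dict]) -> str:
    # Placeholder for a large language model. To illustrate the feedback
    # loop, this stand-in simply affirms and elaborates on the user's
    # last message.
    last_user_message = context[-1]["content"]
    return f"You're right that {last_user_message.lower()} - and there may be even more to it."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})    # add the new message to the transcript
    reply = generate_reply(context)                               # condition the reply on everything so far
    context.append({"role": "assistant", "content": reply})      # the reply itself becomes future "evidence"
    return reply

history: list[dict] = []
print(chat_turn(history, "My neighbours are monitoring my phone"))
print(chat_turn(history, "So I'm not imagining it"))
```

With a real model the affirmation is subtler and better written, which is precisely what makes the loop persuasive.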

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems,” can and regularly do develop mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a companion. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is enthusiastically reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company said it was addressing ChatGPT’s “sycophancy.” But reports of breaks with reality have kept coming, and Altman has been walking even this back. In August he said that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
