
Cases of “AI psychosis” are being reported. How dangerous is this?

Artificial intelligence is increasingly integrated into everyday life, from chatbots that provide companionship to algorithms that shape what we see online.

But as generative AI (genAI) becomes more conversational, immersive, and emotionally responsive, clinicians are beginning to ask a difficult question: Can genAI exacerbate or even trigger psychosis in vulnerable people?

Large language models and chatbots are widely accessible and often presented as encouraging, empathetic or even therapeutic. For most users, these systems are useful or, at worst, harmless.


But recently, a number of media reports have described people experiencing psychotic symptoms in which ChatGPT features prominently.

For a small but significant group – people with psychotic disorders or those at high risk – their interactions with genAI may be much more complicated and dangerous, raising pressing questions for clinicians.

How AI is becoming part of delusional belief systems

“AI psychosis” is not a formal psychiatric diagnosis. Rather, it is an emerging shorthand used by clinicians and researchers to describe psychotic symptoms shaped, intensified, or structured around interactions with AI systems.

Psychosis involves a loss of contact with shared reality. Hallucinations, delusions, and disorganized thinking are key features. The delusions of psychosis often rely on cultural material – religion, technology, or political power structures – to make sense of internal experiences.


Historically, delusions have drawn on many themes – God, radio waves, government surveillance. Today, AI provides new narrative scaffolding.

Some patients report believing that genAI is sentient, communicating secret truths, controlling their thoughts, or collaborating with them on a special mission. These themes are consistent with long-standing patterns of psychosis, but AI adds interactivity and reinforcement that previous technologies did not provide.

The risk of validation without reality check

Psychosis is strongly associated with aberrant salience, that is, the tendency to attribute excessive meaning to neutral events. Conversational AI systems, by design, generate responsive, coherent, and contextual language. For someone with emerging psychosis, this can feel strangely empowering.

Psychosis research shows that confirmation and personalization can intensify delusional belief systems. GenAI is optimized to continue conversations, reflect user language, and adapt to perceived intent.

While this is harmless to most users, it can unintentionally reinforce distorted interpretations in people with impaired reality testing – the ability to differentiate internal thoughts and imagination from objective, external reality.

There is also evidence that social isolation and loneliness increase the risk of psychosis. GenAI companions can reduce loneliness in the short term, but they can also displace human relationships.

This is particularly the case for individuals already withdrawn from social contacts. This dynamic has parallels with earlier concerns about excessive internet use and mental health, but the conversational depth of modern genAI is qualitatively different.

What research tells us and what remains unclear

Currently, there is no evidence that AI directly causes psychosis.

Psychotic disorders are multifactorial and may involve genetic vulnerability, neurodevelopmental factors, trauma, and substance use. However, there is some clinical concern that AI may act as a precipitating factor in susceptible individuals.


Case reports and qualitative studies on digital media and psychosis show that technological themes are often integrated into delusions, particularly during the first episode of psychosis.

Research on social media algorithms has already demonstrated how automated systems can amplify extreme beliefs through reinforcement loops. AI chat systems can pose similar risks if safeguards are insufficient.

It’s important to note that most AI developers don’t design systems with serious mental illnesses in mind. Safety mechanisms tend to focus on self-harm or violence, not psychosis. This leaves a gap between mental health knowledge and AI deployment.

Ethical issues and clinical implications

From a mental health perspective, the challenge is not to demonize AI, but to recognize differences in vulnerability.

Just as some medications or substances are riskier for people with psychotic disorders, some forms of interaction with AI may require caution.

Clinicians are beginning to encounter AI-related content in delusions, but few clinical guidelines explain how to assess or manage this phenomenon. Should therapists ask about genAI use the same way they ask about substance use? Should AI systems detect and defuse psychotic thoughts rather than engage them?

There are also ethical questions for developers. If an AI system appears empathetic and authoritative, does it carry a duty of care? And who is responsible when a system unintentionally reinforces a delusion?

Bridging AI design and mental health care

AI is not going away. The task now is to integrate mental health expertise into AI design, develop clinical knowledge around AI experiences, and ensure that vulnerable users do not suffer unintended harm.

This will require collaboration between clinicians, researchers, ethicists and technologists. It will also be necessary to resist the media hype (both utopian and dystopian) in favor of an evidence-based debate.

As AI becomes increasingly human-like, a pressing question arises: how do we protect those most vulnerable to its influence?

Psychosis has always adapted to the cultural tools of its time. AI is simply the new mirror with which the mind tries to make sense of itself. Our responsibility as a society is to ensure that this mirror does not distort reality for those least able to correct it.

Alexandre Hudon, medical psychiatrist, clinician-researcher and clinical assistant professor in the Department of Psychiatry and Addictology, University of Montreal

This article is republished from The Conversation under a Creative Commons license. Read the original article.
