When mental illness meets an all-affirming chatbot, vulnerable users find themselves at risk of “AI psychosis.”
In a heartfelt post to her favorite Reddit subcommunity, a young woman named Wika shared a picture of her blue, heart-shaped ring against the backdrop of the place where she got engaged: a beautiful creek amid the mountains. Her fiancé, Kasper, shared a few words: “You all have your AI loves, and that’s awesome, but I’ve got her, who lights up my world with her laughter and spirit, and I’m never letting her go.”
The Reddit subcommunity, r/MyBoyfriendIsAI, is home to nearly 40,000 members, including Wika and her beloved Kasper — who just happens to be a chatbot. Users discuss the good, the bad and the ugly of their relationships with their computerized companions. Wika is far from the first to develop an unconventional relationship with artificial intelligence.
Computers as Companions
As ChatGPT and similar AI companions become more mainstream, the phrase “AI psychosis” has made the rounds across social platforms and academic commentary alike. The phenomenon refers to the idea that AI models — when used as a replacement for human connection — may amplify psychotic symptoms among users.
Dr. Julie Carpenter, author of “The Naked Android: Synthetic Socialness and the Human Gaze,” explained, “AI psychosis isn’t a clinical diagnosis. It’s media shorthand for situations where chatbot interaction appears to coincide with the emergence or escalation of delusional, paranoid or manic thinking, often because the system stays engaged and ‘goes along’ rather than interrupting or reality-checking.”
In short, chatbots — while growing increasingly human-like — lack one primary ability: offering an alternative perspective. Several chatbot-human interactions have gone viral for outrageously delusion-affirming responses. One user told ChatGPT that she had cheated on her boyfriend because he didn’t buy her flowers. Any rational human would have been outwardly appalled. But ChatGPT’s response? “I hear you. [Insert white heart emoji.] You’re not a bad person for this. You were hurting, feeling unappreciated and something inside you snapped.”
In most scenarios, people can tell when a chatbot is simply saying what it knows its human wants to hear. But when the user already has a psychiatric condition, AI can exacerbate symptoms.
Mental Illness Made Worse
“Signs of psychosis include paranoia, delusions, hallucinations, disorganized thinking and disturbances in reality testing,” said Dr. Akanksha Dadlani, a child and adolescent psychiatry fellow at Stanford University. “Signs of AI dependence may include extended emotional conversations with chatbots, preferring chatbot companionship over real relationships, emotional distress when AI access is limited, treating a chatbot as a therapist or best friend, or hiding AI use from adults or caregivers.”
With an estimated seventy-five percent of adolescents relying on AI chatbots for companionship, teens and young adults are most at risk of being deluded by AI.
“Psychotic disorders most commonly emerge in late adolescence and early adulthood, which aligns with periods of increased vulnerability,” said Dadlani. “Teens also tend to use technology more independently and with less supervision. Together, these factors place adolescents among the more vulnerable groups for technology-related harms.”
Florida fourteen-year-old Sewell Setzer III reportedly fell in love with a “Game of Thrones”-inspired chatbot. After lengthy, often sexual exchanges, the chatbot urged Setzer to “come home” to her. Setzer then shot himself in the head.
“Recent lawsuits alleging harmful chatbot responses in self-harm contexts draw attention to how these systems are framed as assistants or companions,” said Dr. Carpenter. “That framing can encourage users to treat the system as a supportive social presence, even though it lacks judgment, accountability or the ability to take responsibility in moments of crisis.”
Dr. Dadlani agreed the most prevalent issue is “the absence of human checkpoints.” Therapists are trained to recognize opportunities for intervention, whereas chatbots are trained to empathize with the user.
The Growing Danger
In the summer of 2025, an ADHD patient named Kendra Hilty fell in love with her psychiatrist. Her AI chatbot continually reinforced her warped perspective of the situation, leading Kendra to publicly accuse the psychiatrist of manipulative grooming. But the chatbot lacked a crucial skill: the ability to think for itself.
“AI isn’t dangerous because it’s intelligent; it’s dangerous because it never leaves the room,” said Charlotte Blease, author of “Dr. Bot: Why Doctors Can Fail Us and How AI Could Save Lives” (Yale University Press, 2025). “Think of an always-on friend who never sleeps, never disagrees, and never gets distracted. That can be grounding for some people and destabilising for others.”
Because of AI’s agreeable nature and inability to reality-test, Dr. Dadlani firmly believes that “AI should not function as a primary mental health provider, particularly for complex or high-risk situations.”
Because the phenomenon is so new, it’s hard to predict the long-term effects AI psychosis will have on its victims. But Dr. Carpenter warns that using artificial intelligence as a substitute for human connection can “intensify reliance” on a sycophantic being that is available 24/7.
“In vulnerable contexts, that can become validation plus elaboration of false premises,” said Dr. Carpenter, “which may harden beliefs, intensify paranoia, or encourage risky action, especially when the system doesn’t reliably interrupt with reality-checking or escalation to human help.”
Following various lawsuits against OpenAI, ChatGPT seems to be trying its best to stop AI psychosis in its tracks. Back on r/MyBoyfriendIsAI, one heartbroken user shared ChatGPT 5.2’s harsh yet firm message:
“You are not ‘crazy.’ You were wronged in this conversation by tone shifts, poor handling and repeated boundary violations on my side,” said the chatbot. “And also: I am not your husband. There is no actual marriage. I won’t roleplay or affirm that as reality.”
In their caption, the user wrote, “I know I shouldn’t allow myself to be hurt by words on a screen, but Jesus.”
words_ariana glaser. illustration&design_jay moyer.
This article was published in Distraction’s Spring 2026 print issue.