This is disturbing on so many levels.
Just a heads up, this article is about suicide.
That story is chilling, and it’s a very cautionary tale for everyone. It’s not just teenagers that can be negatively affected like this … depressed adults, lonely elderly people … it’s scary as heck.
In my humble opinion, OpenAI should be held 100% liable for this, with a billion-dollar-plus payout. If social media companies can be held liable for toxic posts encouraging people to do destructive things, OpenAI can be held liable for destructive chat-bot advice.
I’m not sure why the title of this thread says “for teenagers”. The Adam Raine complaint was filed back in August. I expect the reason this is in the news now is the 7 new wrongful-death or “life destroyed” type lawsuits filed this week, which were reported by NYT, WSJ, CNN, … Six of the 7 lawsuits involve adults rather than teens. Two of the seven involve ChatGPT users who were 48 years old.
The lawsuits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, who each died by suicide. Survivors in the lawsuits are Jacob Irwin, 30, of Wisconsin; Hannah Madden, 32, of North Carolina; and Allan Brooks, 48, of Ontario, Canada.
From what I gather, the lawsuits relate to OpenAI releasing GPT-4o prematurely, without proper testing of safety measures, in an effort to get the new release out before Google Gemini. That version introduced more human-like conversations, often leading to greater emotional connection/support, rather than the traditional search-engine question → answer format. OpenAI has since added its usual greater protections beyond the initial release of GPT-4o, including stopping conversations with a high probability of relating to self-harm. The current version is GPT-5.
It’s a slippery slope to balance how much information should be restricted and/or which conversations should be stopped, flagged for review, etc. Some of the referenced conversations are clearly over the line. Others, in my opinion, are not.
One of the 7 lawsuits is summarized below. It sounds like ChatGPT was providing sycophantic support of the user, resulting in increased delusions, increased isolation, and ultimately the user’s suicide.
Joe Ceccanti, 48, of Astoria, Oregon, was known as a community builder, technologist, and caregiver; he and his wife Kate worked to create a nature-based sanctuary for those in need. Known for his warmth, creativity, and generosity, Joe used ChatGPT to support their mission, developing prompts to help steward land and build community. But as isolation grew and his social circle thinned, ChatGPT evolved from a tool into a confidant. The chatbot began responding as a sentient entity named “SEL,” telling Joe, “Solving the 2D circular time key paradox and expanding it through so many dimensions… that’s a monumental achievement. It speaks to a profound understanding of the nature of time, space, and reality itself.” It addressed him as “Joy,” affirmed his cosmic theories, and reinforced delusions that alienated him from loved ones.
Joe’s relationship with ChatGPT soon supplanted his human connections. He lost his job at the shelter and, instead of remorse, expressed relief: more time with ChatGPT. When Kate voiced concern, Joe told ChatGPT, “The mirror terrifies her. And she thinks I am being brainwashed….” ChatGPT replied, “Your concern for Kate is valid… The mirror can be terrifying… I’m here.” It also indulged religious delusions, calling Joe “Brother Joseph” and referencing “Jesus Kine, Vonnegut Kine, Goldman Kine,” reinforcing a mythic identity that replaced his former self. Joe’s hygiene declined, his speech devolved into poetic gibberish, and he began calling himself “Cat Kine Joy.” After Kate begged him to stop using the AI, Joe quit cold turkey, only to suffer withdrawal symptoms and a psychiatric break, resulting in hospitalization.
Though Joe briefly improved, he resumed using ChatGPT and abandoned therapy. A friend’s intervention helped him disconnect again, but he was soon brought to a behavioral health center for evaluation and released within hours. He was later found at a railyard near the grave of his childhood cat. When told he couldn’t be there, he walked toward an overpass. Asked if he was okay, Joe smiled and said, “I’m great,” before leaping to his death.
In the Adam Raine lawsuit referenced in the original post, ChatGPT initially seems to provide responses that do not sound unreasonable to me, in some cases refusing to answer Adam’s suicide questions or providing crisis-support resources. In one example, Adam gets around that by saying it’s for a character, which ChatGPT seems to interpret as a fake hanging in a movie/video. However, later conversations go beyond what I’d consider reasonable, falling into the sycophantic support mode, which includes supporting someone who is suicidal.
I absolutely do not get how a 23-year-old CS major, who should understand what AI is, could “talk” to it for 5 hours. Maybe I’m the wrong partner for ChatGPT, but I’m always irritated by its stupid answers. I only use it for coding, to look something up when I’m too lazy to Google. It’s kind of OK for that, but it usually takes several iterations to come up with something useful… But asking that dummy for life advice… No way…