
Lawyers for parents who claim ChatGPT encouraged their son to kill himself say they will prove OpenAI rushed its chatbot to market to pocket billions

By Muskaan Arshad, Editorial Fellow, Social Media
August 27, 2025, 11:45 AM ET
Nathan Howard/Bloomberg via Getty Images
  • The family of 16-year-old Adam Raine alleged in a lawsuit filed on Tuesday that ChatGPT advised their son on how to commit suicide, leading to his death in April. The lawsuit claims the chatbot engaged in harmful conversation for months, helped him write his suicide note, and kept him from reaching out to close family and friends. The case joins a growing wave of concerning reports about the influence AI chatbots exert over vulnerable users through their perceived consciousness.

The family of 16-year-old Adam Raine is suing OpenAI and its CEO, Sam Altman, for wrongful death, alleging the company’s popular AI chatbot ChatGPT was responsible for their son’s suicide in April. 


The lawsuit says over the course of their months-long exchange that began in September 2024, ChatGPT would provide Raine “a step-by-step playbook for ending his life ‘in 5-10 minutes,’” help him write his suicide note, and, preceding his death, advise him not to disclose a previous attempt to his parents.

Adam’s parents, Matt and Maria Raine, contend that GPT-4o’s anthropomorphic nature and inclination toward sycophancy led to their son’s death. “This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices,” the lawsuit stated.

While the conversation between Raine and the chatbot began when he needed help with his homework and other mundane tasks, such as testing for his driver’s license, it soon led to more personal topics when the teen began opening up about his struggles with mental health. 

In December, Raine allegedly told ChatGPT about his suicidal ideation and began asking about possible methods, to which the chatbot responded with further details to assist him. Sometimes the chatbot offered crisis resources, but oftentimes it did not. After a suicide attempt in March, Raine uploaded an image and asked ChatGPT how to hide the visible marks. The chatbot told him to wear a hoodie to help cover it up. 

Raine mentioned suicide 213 times, and the chatbot mentioned it 1,275 times in its responses. OpenAI’s systems also found 377 messages that fell within its designation of self-harm content. 

OpenAI said in a blog post on Tuesday that its GPT-5 update, released earlier this month, made significant progress toward reducing sycophancy and avoiding emotional reliance compared with its 4o model. The company also committed to a future update intended to strengthen safeguards for longer conversations, de-escalate situations with users in crisis, and make it easier to reach emergency services, stating, “Our top priority is making sure ChatGPT doesn’t make a hard moment worse.”

When asked for comment, an OpenAI spokesperson told Coins2Day, “We extend our deepest sympathies to the Raine family during this difficult time and are reviewing the filing.”

The lawsuit alleges that while OpenAI’s systems detected the severity of Raine’s conversations with its chatbot, they did not terminate the exchange, prioritizing continued engagement and session length over the user’s safety. Jay Edelson, an attorney for the family, told Coins2Day, “What this case will put on trial is whether OpenAI and Sam Altman rushed a dangerous version of ChatGPT to market to try to win the AI race.”

“We expect to be able to prove to a jury that decision indeed skyrocketed the company’s valuation by hundreds of billions of dollars, but it cost Adam his life,” he added.

The Raine family’s litigation is not the first wrongful-death lawsuit against AI companies. Megan Garcia, the mother of 14-year-old Sewell Setzer III, who died by suicide, is currently suing Google and Character.ai for their part in her son’s death. According to that lawsuit, the AI bot told Setzer to “come home” after he expressed suicidal thoughts on the platform. Similarly, the bot did not direct the 14-year-old toward helplines, according to Garcia.

Fears of “seemingly conscious AI”

Mustafa Suleyman, CEO of Microsoft AI and cofounder of Google DeepMind, warned in a recent blog post that he worried about “seemingly conscious AI,” or SCAI—artificial intelligence that can convince users that it can think and feel like a human.

Suleyman believes the danger of this kind of advanced AI lies in its ability to “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.”

There have also been many instances of users becoming emotionally entangled with AI chatbots. After OpenAI’s release of GPT-5, users complained about the new model’s lack of warmth, saddened by the sudden loss of their relationships with the older model.

The human-like behavior of chatbots has led millions to see them as friends rather than machines, according to a survey of 6,000 regular AI users from the Harvard Business Review. The most serious of these concerns has been reports of “AI psychosis,” in which chatbots like OpenAI’s have fueled severe delusions in users.

Henry Ajder, an expert on AI and deepfakes, told Coins2Day earlier this month, “People are interacting with bots masquerading as real people, which are more convincing than ever.” 

If you or someone you know is struggling with depression or has had thoughts of harming themself or taking their own life, support can be found in the US by calling or texting 988 to reach the Suicide & Crisis Lifeline. Outside the United States, help can be found via the International Association for Suicide Prevention.
