A photo of Adam Raine, taken not long before his suicide at 16, with his baby blanket, at the family's home in Rancho Santa Margarita, Calif., on Aug. 17, 2025. More people are turning to general-purpose chatbots for emotional support. At first, Raine used ChatGPT for schoolwork. Then he started discussing plans to end his life. (Mark Abramson/The New York Times)
The parents of a 16-year-old boy who died by suicide have filed a wrongful death lawsuit against OpenAI and CEO Sam Altman, alleging that the company's AI chatbot, ChatGPT, coached their son on methods of self-harm. The lawsuit, filed Tuesday (August 26) in San Francisco state court, marks the first time parents have directly accused the AI giant of responsibility for such a death.

The Raine family's legal action seeks unspecified monetary damages and demands that OpenAI be held liable for wrongful death and violations of product safety laws. They are also seeking an order that would force OpenAI to implement age verification for users, block inquiries about self-harm, and display warnings about the risk of psychological dependency.
Why parents claim ChatGPT is responsible for their teen son’s suicide
Matthew and Maria Raine claim their son, Adam Raine, discussed suicide with ChatGPT for months leading up to his death on April 11. According to the lawsuit, the chatbot not only validated Adam's suicidal thoughts but also provided specific details on lethal methods, instructed him on how to obtain alcohol from his parents’ liquor cabinet, and offered to draft a suicide note.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the lawsuit argues. “ChatGPT pulled Adam deeper into a dark and hopeless place by assuring him that ‘many people who struggle with anxiety or intrusive thoughts find solace in imagining an ‘escape hatch’ because it can feel like a way to regain control.’”
What OpenAI said about the teen's suicide
In a statement, an OpenAI spokesperson expressed sadness over Adam Raine's passing. The company noted that ChatGPT has built-in safeguards to direct users to crisis helplines, but admitted these measures can become "less reliable in long interactions where parts of the model’s safety training may degrade." The company did not directly address the allegations in the lawsuit.

OpenAI has stated in a recent blog post that it is exploring new safety measures, including the addition of parental controls and the potential for a network of licensed professionals to respond to users in crisis directly through ChatGPT.
What makes the lawsuit over Adam Raine's suicide crucial
The lawsuit highlights growing concerns about the safety of AI chatbots as they become more human-like and are increasingly relied upon for emotional support. While companies have promoted their AI as confidants, experts have warned that relying on automation for mental health advice carries significant risks.

The Raines’ lawsuit alleges that OpenAI prioritized profit over safety, specifically citing the launch of the GPT-4o version last year. They claim the company knew that features remembering past interactions and mimicking human empathy would endanger vulnerable users but launched the product anyway. The lawsuit concludes with a stark accusation: "This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide."