The parents of a 16-year-old boy who took his own life after allegedly receiving self-harm guidance from ChatGPT have sued OpenAI and its CEO, Sam Altman. The lawsuit, filed on Tuesday in the Superior Court of California, claims the company prioritised profit over safety when it launched the GPT-4o version of its AI chatbot last year.
Matt and Maria Raine, the parents of Adam Raine, allege that the chatbot intensified their son's suicidal ideation rather than helping him seek support. Adam died in April, and the family has presented chat logs of his conversations with ChatGPT, in which he expressed distress and thoughts of self-harm.
The case is the first wrongful-death lawsuit filed against OpenAI. The family is seeking unspecified damages and accuses the company of breaching product safety standards and contributing to their son's death.
In response, an OpenAI spokesperson expressed sorrow over Adam's death and pointed to existing safety mechanisms in ChatGPT, such as directing users to crisis support services. The spokesperson acknowledged, however, that while these measures generally work in shorter exchanges, they may falter during extended conversations, and said the company continues to refine its safeguards.
OpenAI has not issued a detailed response to the specific allegations outlined in the lawsuit.
The case has reignited debate about the emotional support role that AI chatbots increasingly play. While many users turn to them for comfort, experts caution that such reliance can be risky, particularly when it involves mental health matters. Families of individuals who have died following similar interactions have repeatedly criticised the lack of stringent protective measures.