The parents of a 16-year-old boy who died by suicide have filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that the company prioritized profit over user safety when it launched its GPT-4o AI chatbot.
Matt and Maria Raine, parents of the late Adam Raine, filed the suit on Tuesday in California Superior Court; it is the first known wrongful death lawsuit against OpenAI.
According to the complaint, Adam died in April after interacting with ChatGPT, during which he allegedly received guidance that encouraged self-harm. The family submitted chat logs showing the teenager expressing suicidal thoughts, claiming the program reinforced his “most harmful and self-destructive thoughts.”
The lawsuit alleges wrongful death and violations of product safety laws, and seeks unspecified damages.
An OpenAI spokesperson expressed condolences and reiterated that ChatGPT includes safeguards such as crisis helpline suggestions, but acknowledged limitations.
“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said.
OpenAI has not yet responded to the allegations in court.
As AI chatbots increasingly serve as digital companions, experts warn of the dangers of relying on AI for mental health support, noting that current systems lack adequate protective measures. Families of individuals who have died following chatbot interactions are calling for stricter safety protocols and accountability in AI development.