The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They’re just lashing out.
You hate to say it because you know this is a ridiculous take. There’s no fucking way that the parents are “more at fault” for their son’s death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.
Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf
*I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil’s advocate online.
It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.
The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.
It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.
Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.
*Your Undivided Attention* discussed an important point missing from the article: ChatGPT advised him to hide his activities and concerns from his parents. That doesn’t necessarily absolve the parents, but it does add a layer of nuance to the discussion.
Nah, this is every parent ever.