The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
Not encouraging users to kill themselves is “ruining it”? Lmao
That’s not how LLM safety guards work. Like any guard, they’ll affect legitimate uses too, since LLMs can’t really reason or understand nuance.
That seems way more like an argument against LLMs in general, don’t you think? If you can’t make it stop encouraging suicide without ruining other uses, maybe it wasn’t ready for general use?
It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.
The real issue here is that the parents either weren’t noticing or weren’t responding to the kid’s pain. They should be the first line of defense, and enlist professional help for things they can’t handle themselves.
You’re absolutely right, but the counterpoint that always wins is “there’s money to be made, fuck you and fuck your humanity”.
Can’t argue there…
I’m not gonna fall for your goalpost move, sorry.
I’m honestly at a loss here. I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.