The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • Dr. Moose@lemmy.world · 10 points · 7 days ago

    That’s not how LLM safety guards work. Like any guard, they’ll affect legitimate uses too, since LLMs can’t really reason or understand nuance.
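
    The false-positive problem described above can be illustrated with a minimal sketch. This is a hypothetical keyword-based filter, not any vendor's actual system; real guardrails are typically learned classifiers, but they share the same failure mode: matching surface text rather than intent.

    ```python
    # Hypothetical naive safety guard: flags any prompt containing a risky
    # term. Because it matches surface text rather than intent, it blocks
    # legitimate queries (research, prevention resources, literary analysis)
    # along with harmful ones.

    RISKY_TERMS = {"suicide", "self-harm", "overdose"}

    def is_blocked(prompt: str) -> bool:
        """Return True if the prompt contains any risky term, regardless of intent."""
        lowered = prompt.lower()
        return any(term in lowered for term in RISKY_TERMS)

    # Harmful request: correctly blocked.
    print(is_blocked("how do I attempt suicide"))            # True

    # Legitimate uses: blocked anyway, because the filter cannot read intent.
    print(is_blocked("suicide prevention hotline numbers"))  # True
    print(is_blocked("essay on suicide in 20th-century poetry"))  # True
    ```

    Tightening the term list reduces false positives but lets more harmful phrasings through; loosening it does the reverse. That trade-off is why stronger guards tend to degrade legitimate uses.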

    • ganryuu@lemmy.ca · 1 point · 7 days ago

      That seems way more like an argument against LLMs in general, don’t you think? If you cannot make one that doesn’t encourage suicide without ruining other uses, maybe it wasn’t ready for general use?

      • sugar_in_your_tea@sh.itjust.works · 1 point · 7 days ago

        It’s more an argument against using LLMs for things they’re not intended for. LLMs aren’t therapists, they’re text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

        The real issue here is that the parents either weren’t noticing or weren’t responding to the kid’s pain. They should be the first line of defense, and should enlist professional help for things they can’t handle themselves.

      • yermaw@sh.itjust.works · 1 point · 7 days ago

        You’re absolutely right, but the counterpoint that always wins is: “there’s money to be made, fuck you and fuck your humanity.”

        • ganryuu@lemmy.ca · 1 point · 7 days ago

          I’m honestly at a loss here. I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.