Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant about the scale at which AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.
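As a rough, purely illustrative sanity check on those figures (not part of OpenAI’s post; it simply takes the quoted 800m weekly users and the 0.07% share at face value):

```python
# Back-of-envelope check of the figures quoted above (illustrative only).
weekly_users = 800_000_000  # OpenAI's touted weekly user count

# "about 0.07% of users active in a given week" -> roughly 560,000 people
psychosis_share = 0.0007
print(f"{weekly_users * psychosis_share:,.0f}")  # 560,000

# "more than a million" users with suicidal-planning indicators,
# expressed as a share of the weekly user base
print(f"{1_000_000 / weekly_users:.3%}")  # 0.125%
```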

  • JoshuaFalken@lemmy.world · 7 points · 7 days ago

    Sounds like there are more than a million people a week who would benefit from free or even low-cost mental health care.

  • brucethemoose@lemmy.world · 4 points · edited · 7 days ago

    Preface: I love The Guardian, and fuck Altman.

    But this is a bad headline.

    Correlation is not causation. It’s disturbing that OpenAI even possesses this data and has mined it for these statistics, and that millions of people somehow think their ChatGPT app has any semblance of privacy, but what I’m reading is that millions reached out to ChatGPT with suicidal ideations.

    Not that it’s the root cause.

    The headline is that the mental health of the world sucks, not that ChatGPT suddenly inflamed the crisis. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks, when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app that has an unprecedented (and disturbing) window into folks’ psyches en masse, that they’ve twisted into clickbait.

  • cmbabul@slrpnk.net · 3 points · 7 days ago

    That millions of people talk with ChatGPT every week, period, is depressing enough to invoke suicidal thoughts.

  • RepleteLocum@lemmy.blahaj.zone · 3 points · 7 days ago

    From my experience in suicide forums, it used to be pretty popular to jailbreak it and ask about suicide methods. Nowadays that has kinda died down, probably because suicide forums have way better info than an AI.

  • ExLisperA · 1 point · 7 days ago

    They don’t explain how they estimated this or what they consider “suicidal intent”. They say it’s extremely rare so it’s hard to detect and estimate. Take those numbers with a grain of salt.

  • Kyrgizion@lemmy.world · 35 points · 7 days ago

    “How can we monetize this?”

    Just a matter of time before it recommends therapists in your area (that paid OpenAI to be suggested to you).

    • Hyperrealism@lemmy.dbzer0.com · 12 points · 7 days ago

      I think another potential use is targeting and manipulating vulnerable people for political reasons.

      Perhaps convince them to stay at home on election day. Perhaps convince members of undesirable demographics to disproportionately kill themselves. Perhaps make vulnerable people so paranoid or scared that they end up killing people you want to get rid of. Perhaps convince someone vulnerable to commit politically convenient violence, which can be used as a false flag or to rally support.

      Why leave that kind of thing to chance, when you can use AI to tip the scales in your favour?

    • neon_nova@lemmy.dbzer0.com · 1 point · 7 days ago

      If people need mental health help, I would not mind options being offered to them. Even if a small percentage take advantage of it, it’s a benefit to the person.

  • FaceDeer@fedia.io · 20 points · 7 days ago

    I don’t see anything in here to support saying ChatGPT is exacerbating anything.

    • brucethemoose@lemmy.world · 3 points · edited · 7 days ago

      And yet the article is basically all upvotes.

      As of late, Lemmy has been feeling way too much like Reddit to me, where clickbait trends hard as long as it affirms the prevailing sentiment.

      I’ve even pointed this out once, and had OP basically respond with “I don’t care if it’s misinformation. I agree with the sentiment.” And mods did nothing.

      That’s called disinformation.

      Not that information hygiene is a priority here :(


      Yeah, comments often “correct” that, but that doesn’t stop the extra order of magnitude of exposure the original post gets.

      As much as the Twitter format sucks, Lemmy could really use a similar “community note” blurb right below headlines.

    • chunes@lemmy.world · 13 points · 7 days ago

      Right? The reason people are opening up to it is that you can’t open up to a human about this.

    • Perspectivist@feddit.uk · 6 points · edited · 7 days ago

      Exactly. It’s like concluding that therapists are exacerbating suicidal ideation, psychosis, or mania just because their patients talk about those things during sessions. ChatGPT has 800 million weekly users - of course some of them are going to bring up topics like that.

      It’s fine to be skeptical about the long-term effects of chatbots on mental health, but it’s just as unhealthy to be so strongly anti-anything that one throws critical thinking out the window and accepts anything that merely feels like it supports what they already want to believe as further evidence that it must be so.

      • Perspectivist@feddit.uk · 5 points · 7 days ago

        It responded with words of empathy, support and hope, and encouraged him to think about the things that did feel meaningful to him.

        ChatGPT repeatedly recommended that Adam tell someone about how he was feeling.

        When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line. Mr. Raine saw those sorts of messages again and again in the chat, particularly when Adam sought specific information about methods. But Adam had learned how to bypass those safeguards by saying the requests were for a story he was writing

        • Scubus@sh.itjust.works · 1 point · 6 days ago

          />dude wants to end it

          />Tries to figure out the most humane way

          />PlEaSe ReAcH oUt

          />unhelpful.jpg

          I can’t wait until humans have a right to die.

  • Ech@lemmy.ca · 10 points · edited · 7 days ago

    The fact they have data on this isn’t surprising, but it should be horrifying for anyone using the platform. This company has the data from every sad, happy, twisted, horny, and depressing reply from every one of their users, and they’re analyzing it. Best case, they’re “only” using it to better manipulate users into staying longer on their apps. More likely they’re using it for much more than that.

  • Zak@lemmy.world · 6 points · edited · 7 days ago

    possible signs of mental health emergencies related to psychosis or mania

    It can be amusing to test what triggers this response from LLMs. Perplexity will reliably do it if you propose sacrificing a person or animal to Satan, but not Ku-waha-ilo, the Hawaiian god of war, sorcery, and devourer of souls.

    I imagine a large fraction of the conversations flagged this way are people doing that rather than actually having a mental health crisis.

  • aesthelete@lemmy.world · 4 points · edited · 7 days ago

    What do you expect from people who basically have no friends left, are seemingly permanently isolated, and whose last “social” arrangement is talking to a fucking agreeable robot?

    It’s a really sad “society” we’ve built here.

  • NoWay@lemmy.world · 3 points · 7 days ago

    I would have suicidal thoughts if I had to chat with ChatGPT… I apologize, I’ve had too many friends die by suicide. If you are having thoughts of harming yourself or others, please find help, and not from an illusory intelligence bot.

    • slaneesh_is_right@lemmy.org · 1 point · 5 days ago

      I get the creeps when I see how many women on Hinge “get advice” from ChatGPT. “ChatGPT said I’m smart”, “ChatGPT is my babe”, “if ChatGPT doesn’t like you, neither will I”. It’s straight-up cyber psychosis. Why not ask a Furby that always agrees with you?

      • NoWay@lemmy.world · 1 point · 4 days ago

        Those first-generation Furbys’ logic was outstanding. So good they dialed it back when they remade them. They were creepy as hell and a legit security concern.

  • DarkCloud@lemmy.world · 2 points · edited · 7 days ago

    OpenAI: “ChatGPT, estimate how many discussions on suicide you have in total per week.”

    Why believe any company using this kind of “AI”?

  • gedaliyah@lemmy.worldM · 1 point · edited · 7 days ago

    Holy shit. We know that ChatGPT has a propensity to facilitate suicidal ideation, and has led to suicides. It not only fails to direct suicidal individuals to the proper help, but actually advances people toward taking action.

    How many people has this killed?


    I am a depression survivor. Depression is a disease and it can be deadly, but there is help.

    If you are having suicidal thoughts, you can get help by texting or calling 988 in North America, or text ‘SHOUT’ to 85258 in the UK.