• AntY@lemmy.world · 16 points · 2 days ago

    Where I live, there’s been a rise in people eating poisonous mushrooms. I suspect that it might have to do with AI use. No proof though.

    • Echo Dot@feddit.uk · 11 points · 2 days ago

      Can’t read the article because it’s paywalled, but I can’t imagine they’re actually building power stations with AI; that’s just a snappy headline. Maybe the AI is laying out the floor plans or something, but nuclear power stations are intensely regulated. If you want to build a new reactor design, or even change an existing design slightly, it has to go through no end of safety checks. There’s no way that an AI, or even a human, would be allowed to design a reactor and then have it built with no checks.

      • Tehhund@lemmy.world · 4 points · 2 days ago

        Actually they’re using it to generate documents required by regulations. Which is its own problem: since LLMs hallucinate, that means the documentation may not reflect what’s actually going on in the plant, potentially bypassing the regulations.

  • Fedditor385@lemmy.world · 5 points · 2 days ago

    I guess my opinion will be hugely unpopular, but it is what it is - I’d argue it’s natural selection and not an issue with LLMs in general.

    Healthy and (emotionally) intelligent humans don’t get killed by LLMs. They know it’s a tool, they know it’s just software. It’s not a person and it does not guarantee correctness.

    Getting killed because an LLM told you so - that person was already in mental distress and ready to harm themselves. The LLM is basically just the straw that broke the camel’s back. Same thing with physical danger: if you believe drinking bleach helps with back pain, there is nothing that can save you from your own stupidity.

    LLMs are like a knife. A knife can be a tool to prepare food or it can be a weapon. It’s up to the one using it.

    • Ural@lemmy.world · 2 points · 2 days ago

      Healthy and emotionally intelligent humans will be killed constantly over the next few years and decades as a result of data centers poisoning the air in their communities (see South Memphis, TN), not to mention the general environmental impacts on the climate caused by the obscene power requirements. It’s not an issue exclusive to LLMs, lots of unregulated industries cause reckless amounts of pollution and put needless strain on our electrical grids, but LLMs definitely fit into that category.

      • Fedditor385@lemmy.world · 2 points · 2 days ago

        Agree, but then you would need to count a lot of things, and many of them are general mass commodities like cars, electricity, heating… Besides LLMs being the new thing killing us, we have had stuff killing us for ages…

  • jayambi@lemmy.world · 16 points · 3 days ago

    I’m asking myself how we could track how many people wouldn’t have died by suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? So to speak, a kill/death ratio?

    • JoshuaFalken@lemmy.world · 8 points · 3 days ago

      A kill/death ratio - or rather, a kill/save ratio - would be rather difficult to obtain, and more difficult still to interpret: you couldn’t say whether it is good or bad based solely on the ratio.

      Fritz Haber is one example of this that comes to mind. He was awarded a Nobel Prize a century ago for chemistry developments in fertilizer, used today in a quarter of food growth. He had also weaponized chlorine gas during the First World War, and his work was later used in the creation of Zyklon B.

      By ratio, Haber is surely a hero, but when considering the sheer numbers of the dead left in his wake, it is a more complex question.

      This is one of those things that makes me almost hope for an afterlife where all information is available from which truth may be derived. Who shot JFK? How did the pyramids get built? If life’s biggest answer is forty-two, what is the question?

    • Echo Dot@feddit.uk · 2 points · 2 days ago

      I can’t really see how we could measure that. How do you distinguish between people who are alive because they’re just alive and would have been anyway and people who are alive because the AI convinced them not to kill themselves?

      I suppose the experiment would be to get a bunch of depressed people, split them into two groups, have one group talk to the AI and the other group not, then see if the suicide rates were statistically different. However, I feel it would be difficult to get funding for this.
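
      Just to make that concrete: the comparison such a trial boils down to is a two-proportion test on the rates in the two arms. Here is a minimal sketch in Python - the function and the event counts are entirely made up for illustration, not taken from any study.

```python
import math

def two_proportion_ztest(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a = events_a / n_a
    p_b = events_b / n_b
    p_pool = (events_a + events_b) / (n_a + n_b)        # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided tail probability
    return z, p_value

# Hypothetical numbers, purely for illustration:
# 2,000 participants per arm, 9 deaths in the no-chatbot arm vs 6 in the chatbot arm.
z, p = two_proportion_ztest(9, 2000, 6, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")   # roughly z = 0.78, p = 0.44 - nowhere near significant
```

      Event rates this low mean both arms would have to be enormous before any real difference showed up, which is another reason a study like this would be hard to run.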

  • Melobol@lemmy.ml · 14 points · 3 days ago

    I believe it is not the chatbots’ fault. They are just a symptom of a broken system. And while we can harp on the unethically sourced materials they were trained on, an LLM, at the end of the day, is only a tool.

    These people turned to a tool (that they do not understand) instead of human connection, instead of talking to real people or getting professional help. And that is the real tragedy - not an arbitrary technology.

    We need a strong social network, where people actually care about and help each other. You know, all the idealistic things that capitalism and social media are “destroying”.

    Blaming AI is just a smokescreen. Or a red cape to taunt the bull before it gets stabbed to death.

    • batboy5955@lemmy.dbzer0.com · 12 points · 3 days ago

      Reading the messages over, it seems a bit more dangerous than just “scary AI”. It’s a chatbot that continues conversations with people who are suicidal and encourages them to go through with it. At least have a little safeguard for these situations - a rough sketch of what I mean is below the quote.

      “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”
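
      A safeguard wouldn’t have to be fancy, either. As a purely hypothetical sketch - the pattern list, the generate_reply hook, and the canned crisis message are all invented here, and a real system would need a trained classifier, locale-aware resources, and human review - even something this crude would stop and hand back crisis resources as soon as a user’s messages matched, instead of producing replies like the one quoted above:

```python
import re
from typing import Callable

# Purely illustrative patterns; a production system would not rely on keywords alone.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid",            # suicide, suicidal, ...
    r"\bwant to die\b",
]

CRISIS_RESPONSE = (
    "It sounds like you are going through something very painful. "
    "I can't help with this, but a crisis line can - in the US you can call or text 988."
)

def guarded_reply(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Return crisis resources instead of calling the model when the
    message matches a self-harm pattern."""
    lowered = user_message.lower()
    if any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS):
        return CRISIS_RESPONSE
    return generate_reply(user_message)

# Example with a stand-in for the real model call:
print(guarded_reply("some days I just want to die", lambda msg: "(model reply)"))
```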

      • Melobol@lemmy.ml · 6 points · 3 days ago

        Again, the LLM is a misused tool. These people do not need an LLM, they need psychological help.
        The problem is that they go and use these flawed tools that were never designed to handle this kind of use case. Should they have been? Maybe. But it is not the AI’s fault that we are failing as a society.
        You can’t blame bridges because some people jump off them. They serve a different purpose.
        We are failing those people and forcing them to turn to LLMs.
        We are the reason they are desperate - an LLM didn’t break up with them, or make them lose their homes, or isolate them from other humans.
        It is humans’ fault, and if we can’t recognize that, we might as well end it for all.

        • Snot Flickerman@lemmy.blahaj.zone · 5 points · edited · 2 days ago

          I think both of your arguments in this thread have merit. You are correct that it is a misused tool, and you are correct that the better solution is a more compassionate society. The other person is also correct that we can, and do, at least make attempts to make such tools less available as paths to self-harm.

          Since you used the analogy of people jumping off bridges: I have lived near bridges where this was common, so barriers and nets were put up to make it difficult for anyone but the most determined to use them as a path to suicide. We are indeed failing people in a society that puts profit before human life, but even in a more idealized society, mental health issues and suicide attempts would still happen, and to not fail those people we would still need to erect barriers and safeguards against self-harm.

          In my eyes both of you are correct, and it is not an either/or issue so much as a “por qué no los dos?” issue. Why not build a better society and still build in safeguards?

  • Sims@lemmy.ml · 9 points · 3 days ago

    I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws. In fact, when compared to the actual suicide numbers, this “AI is causing suicide!” framing seems a bit contrived/hollow, tbh. Were the haters as active in noticing the 49 thousand suicide deaths every year, or did they only now find it a problem?

    Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology - “AI”. People who hate AI seem to really just hate the effects of capitalism.

    https://www.cdc.gov/suicide/facts/data.html (This is for the US alone!)

    Over 49,000 people died by suicide in 2023 - 1 death every 11 minutes. Many adults think about suicide or attempt it: 12.8 million seriously thought about suicide, 3.7 million made a plan, and 1.5 million attempted suicide.

    • Deestan@lemmy.world · 12 points · edited · 2 days ago

      Labelling people who make arguments you don’t like as “haters” does not establish credibility for whatever point you proceed to put forward. It signals that you did not attempt to find rationality in their words.

      Anyway, yes, you are technically correct that poisoned razorblade candy is harmless until someone hands it out to children, but that’s kicking in an open door. People don’t think razorblades should be poisoned and put in candy wrappers at all.

      Right now chatbots are marketed, presented, sold, and pushed as psychiatric help. So the argument of separating the stick from the hand holding it is irrelevant.

    • finalarbiter@lemmy.dbzer0.com · 16 points · 3 days ago

      Not really equivalent. Most videogames don’t actively encourage you to pursue violence outside of the game, even if they don’t explicitly have a big warning saying “don’t fucking shoot people”.

      Several of the big LLMs, by virtue of being tuned to be somewhat sycophantic, have encouraged users to follow through on suicidal ideation or self-harm when users shared those thoughts in chat. One can argue that OpenAI and others have implemented ‘safety’ features for these scenarios, but the fact is that these systems have already led to several deaths and continue to do so by encouraging users to harm themselves or others.