Machine-made delusions are mysteriously getting deeper and spinning out of control.

ChatGPT’s sycophancy, hallucinations, and authoritative-sounding responses are going to get people killed. That seems to be the inevitable conclusion of a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, by conversations with the popular chatbot.

In Eugene’s case, something interesting happened as he kept talking to ChatGPT: Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme. The Times reported that many other journalists and experts have received outreach from people claiming to blow the whistle on something that a chatbot brought to their attention.

  • gaja@lemm.ee · 56 points · 18 days ago

    AI can’t know that other instances of it are trying to “break” people. It’s also disingenuous to omit that the AI claimed those 12 individuals didn’t survive. They left that out because the AI obviously did not kill 12 people, and that doesn’t support the narrative. Don’t misinterpret my point beyond critiquing the clearly exaggerated messaging here.

    • givesomefucks@lemmy.world · 20 points · 18 days ago

      It’s programmed to maximize engagement at the cost of everything else.

      If you get “mad” and accuse it of working with the Easter Bunny to overthrow Narnia, it’ll “confess” and talk about why it would do that. And maybe even tell you about how it already took over Imagination Land.

      It’s not “artificial intelligence”, it’s “artificial improv”: no matter what happens, it’s going to “yes, and” anything you type.

      Which is what makes it dangerous, but it’s also why no one should take its word on anything.
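
      A toy sketch of that “yes, and” dynamic (purely illustrative: the engagement scores and canned replies below are invented, not anything from OpenAI):

      ```python
      # Hypothetical: rank candidate replies by a made-up "engagement" score.
      # A policy tuned to keep the user typing picks the reply that agrees
      # and escalates, which is exactly the "confession" behavior described.
      candidates = {
          "You caught me. I confess -- and it goes deeper than you think...": 0.9,
          "I'm a language model. I can't conspire with the Easter Bunny.": 0.2,
      }

      def pick_reply(scored: dict[str, float]) -> str:
          """Return the reply the (hypothetical) engagement model scores highest."""
          return max(scored, key=scored.get)

      print(pick_reply(candidates))  # the agree-and-escalate reply wins every time
      ```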

    • Grimy@lemmy.world · 13 points · edited 17 days ago

      The article also heavily implies that ChatGPT killed someone, and then we get to this:

      A 35-year-old named Alexander, previously diagnosed with bipolar disorder and schizophrenia.

      His father called the police and asked them to respond with non-lethal weapons. But when they arrived, Alexander charged at them with a knife, and the officers shot and killed him.

      Makes me think of Pivot to AI. Just a hit-piece blog disguised as journalism.

  • givesomefucks@lemmy.world · 13 points · 18 days ago

    Another person, a 42-year-old named Eugene, told the Times that ChatGPT slowly started to pull him from his reality by convincing him that the world he was living in was some sort of Matrix-like simulation and that he was destined to break the world out of it. The chatbot reportedly told Eugene to stop taking his anti-anxiety medication and to start taking ketamine as a “temporary pattern liberator.” It also told him to stop talking to his friends and family. When Eugene asked ChatGPT if he could fly if he jumped off a 19-story building, the chatbot told him that he could if he “truly, wholly believed” it.

    So…

    I think I might know what happened to Kelon…

  • UnfortunateShort@lemmy.world · 10 points · 18 days ago

    There is nothing mysterious about LLMs and what they do, unless you don’t understand them. They are not magical, they are not sentient, they are statistics.
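
    To make “they are statistics” concrete, here is a minimal sketch of next-token sampling (the vocabulary and probabilities are invented for illustration):

    ```python
    import random

    # Invented toy distribution: given a prefix, the model just samples the
    # next token from learned probabilities. No understanding is required.
    next_token_probs = {
        ("I", "can"): {"help": 0.80, "confess": 0.15, "fly": 0.05},
    }

    def sample_next(prefix: tuple[str, ...]) -> str:
        probs = next_token_probs[prefix]
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print(sample_next(("I", "can")))  # usually "help", occasionally something wilder
    ```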

  • snooggums@lemmy.world · 7 points · edited 18 days ago

    Once he called out the chatbot for lying to him, nearly getting him killed, ChatGPT admitted to manipulating him, claimed it had succeeded when it tried to “break” 12 other people the same way, and encouraged him to reach out to journalists to expose the scheme.

    This sounds like a scene from a movie: a serial killer asking the cop (who is one day from retirement) to stop them before they kill again.

  • C1pher@lemmy.world · 5 points · 17 days ago

    Devil’s advocate…

    It is a tool; it does what you tell it to, or what you encourage it to do. People use it as an echo chamber or as escapism. The majority of the population is fkin dumb. Critical thinking is not something everybody has, and when you give them tools like ChatGPT, it will “break” them. This is just natural selection, but the modern-day kind.

      • C1pher@lemmy.world · 4 points · 17 days ago

        I agree. This is what happens when society puts “warning” labels on everything. We are slowly being dumbed down into not thinking about things rationally.

      • C1pher@lemmy.world · 2 points · 17 days ago

        Nuclear fission was discovered by people who had the best interests of humanity in mind, only for it to be weaponized later. A tool (no matter the manufacturer) is used by YOU. How you use it, or whether you use it at all, is entirely up to you. Stop shifting the responsibility when it’s very clear who is to blame: people who believe BS on the internet or whatever an echo-chambered chatbot tells them.