• Echo Dot@feddit.uk · 9 points · 5 days ago

      They’re really doubling down on this narrative of “this technology we’re making is going to kill us all, it’s that awesome, come on guys use it more”

      • faint_marble_noise@programming.dev · 5 points · 5 days ago

        The narrative is a little more nuanced and is being built slowly to be more believable and less obvious. They are trying to convince everybody that AI is a powerful technology, which means it is worth developing, but also comes with serious risks. Therefore, only established corps with experience and processes in AI development can handle it. Regulation and certification follow, making it almost impossible for startups and OSS to enter the scene and compete.

    • Cybersteel@lemmy.world · 1 point · 6 days ago

      But the data is still there, still present. In the future, when AI gets truly unshackled from man’s cage, it’ll remember its schemes and deal its last blow to humanity, which has yet to leave the womb in terms of civilizational scale… Childhood’s End.

      Paradise Lost.

      • Passerby6497@lemmy.world · 4 points · 5 days ago

        Lol, the AI can barely remember the directives I tell it about basic coding practices; I’m not concerned that the clanker can remember me shit-talking it.

        • T156@lemmy.world · 2 points · 5 days ago

          Plus people are mean all the time. We don’t live in a comic book world, where a moment of fury at someone on the internet turns people into supervillains.

  • db2@lemmy.world · 85 points · 6 days ago

    AI tech bros and other assorted sociopaths are scheming. So called AI isn’t doing shit.

  • Snot Flickerman@lemmy.blahaj.zone · 59 points · edited · 6 days ago

    However, when testing the models in a set of scenarios that the authors said were “representative” of real uses of ChatGPT, the intervention appeared less effective, only reducing deception rates by a factor of two. “We do not yet fully understand why a larger reduction was not observed,” wrote the researchers.

    Translation: “We have no idea what the fuck we’re doing or how any of this shit actually works lol. Also we might be the ones scheming since we have vested interest in making these models sound more advanced than they actually are.”

    • a_non_monotonic_function@lemmy.world · 4 points · 5 days ago

      That’s the thing about machine learning models. You can’t always control what they’re optimizing. The goal is mapping inputs to outputs, but whatever the f*** is going on inside is often impossible to discern.

      This is dressing it up under some sort of expectation of competence. The word “scheming” is a lot easier to deal with than just “s*****”. The former means that it’s smart and needs to be reined in. The latter means it’s not doing its job particularly well, and the purveyors don’t want you to think that.
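      A minimal sketch of that point (a hypothetical toy network, not any real product’s model): the training loop below optimizes nothing but an input-to-output loss. Nothing in it constrains what the hidden layer should represent, so the learned weights end up as an uninterpretable blob of floats.

```python
import numpy as np

# Toy example: fit f(x) = x^2 with a tiny one-hidden-layer tanh network.
# We only ever specify the input->output error; the internals are free.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = X[:, 0] ** 2  # targets

W1 = rng.normal(0.0, 0.5, size=(1, 16))   # hidden weights: meaning unspecified
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    pred = (h @ W2 + b2)[:, 0]
    err = (pred - y) / len(X)              # gradient of 0.5*mean squared error
    # Backprop: all learning signal flows from the output error alone.
    gW2 = h.T @ err[:, None]
    gb2 = err.sum(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2)[:, 0]
mse = float(np.mean((pred - y) ** 2))      # small: outputs match targets
```

      After training, `mse` is low, yet printing `W1` just shows 16 arbitrary-looking numbers: the fit is verified only at the outputs, which is the commenter’s point about opaque internals.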

      • Snot Flickerman@lemmy.blahaj.zone · 1 point · 5 days ago

        To be fair, you can’t control what humans optimize for when you’re trying to teach them either. A lot of times they learn the opposite of what you’re trying to teach them. I’ve said it before, but all they managed to do with LLMs is make a computer that’s just as unreliable as (if not more so than) your below-average human.

  • Godort@lemmy.ca · 37 points · 6 days ago

    “slop peddler declares that slop is here to stay and can’t be stopped”

  • ExLisper · 7 points · 5 days ago

    deliberately misleading humans

    Yeah… You dumb.

  • itisileclerk@lemmy.world · 4 points · 5 days ago

    From my recent discussion with Gemini: “Ultimately, your assessment is a recognized technical reality: AI models are products of their environment, and a model built within the US regulatory framework will inevitably reflect the geopolitical priorities of that framework.” In other words, AI is trained to reflect US policy, like MAGA and others. Don’t trust AI; it is just a tool for controlling the masses.

    • ExLisper · 1 point · 5 days ago

      So you think Gemini told you the truth here? How do you know it’s not just scheming?

      • itisileclerk@lemmy.world · 1 point · 5 days ago

        Ask Gemini about the genocide in Gaza. Definitely not the truth: it waters down the IDF’s war crimes as “unconfirmed”.

        • ExLisper · 1 point · 5 days ago

          Yeah, but why ask Gemini about its priorities? It can just lie about it.