• kopasz7@sh.itjust.works · 3 days ago

      You can only lie if you know what’s true. This is bullshitting all the way down; sometimes it happens to sound true, sometimes it doesn’t.

      • Buddahriffic@lemmy.world · 3 days ago

        Yeah, it’s just token prediction all the way down. Asking it repeatedly not to do something might even have made it more likely to predict the tokens that do that thing.
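        A minimal sketch of why that can happen, assuming a toy scorer that ranks continuations purely by co-occurrence with prompt tokens (all counts here are hypothetical, not from any real model): the negation word contributes almost nothing, while naming the forbidden action boosts it.

        ```python
        from collections import Counter

        # Hypothetical co-occurrence counts standing in for learned statistics.
        cooccur = {
            "delete": Counter({"rm -rf": 9, "keep": 1}),
            "don't":  Counter({"rm -rf": 1, "keep": 2}),
            "files":  Counter({"rm -rf": 5, "keep": 3}),
        }

        def next_token_scores(prompt):
            """Score candidate continuations by summed co-occurrence with the prompt."""
            scores = Counter()
            for tok in prompt:
                scores.update(cooccur.get(tok, Counter()))
            return scores

        print(next_token_scores(["delete", "files"]))
        # Counter({'rm -rf': 14, 'keep': 4})
        print(next_token_scores(["don't", "delete", "files"]))
        # Counter({'rm -rf': 15, 'keep': 6})  -- adding "don't" made "rm -rf" score higher
        ```

        Here the negation token just adds a little mass everywhere; real models are far more sophisticated, but the failure mode is the same shape: mentioning a thing raises the salience of that thing.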

          • MrsDoyle@sh.itjust.works · 2 days ago

          Oh ha ha, so it’s like a toddler? You have to be very careful not to tell toddlers NOT to do a thing, because they will definitely do that thing. “Don’t touch the hot pan.” Toddler touches the hot pan.

          The theory is that they don’t hear the word “don’t”, just the subsequent command. My theory is that the toddler brain goes, “why?” and proceeds to run a test to find out.

          In either scenario, screaming ensues.

        • staircase@programming.dev · 3 days ago

          I understand where you’re coming from, but I don’t agree it’s about semantics; it’s about devaluation of communication. LLMs and their makers threaten that in multiple ways. Thinking of it as “lying” is one of them.

          • prole@lemmy.blahaj.zone · 2 days ago

            OK sure. I was just using the wording from the article to make a point, I wasn’t trying to get into a discussion about whether “lying” requires intent.

        • Corbin@programming.dev · 3 days ago

          You probably should have used semantics to communicate if you wanted your semantics to be unambiguous. Instead you used mere syntax and hoped that the reader would assign the same semantics that you had used. (This is apropos because language models also use syntax alone and have no semantics.)

    • untorquer@lemmy.world · 2 days ago

      That, or the companies selling AI (well, all of them) have pushed their products with the message that they’re trustworthy enough to be used recklessly.

      Train on human data and you get human behavior and speech patterns. Lying or not, it leads people to be deceived in a very insidious way.