• kadu@lemmy.world · 45 points · 2 days ago

    No way the lobotomized monkey we trained on internet data is reproducing internet biases! Unexpected!

    • potatopotato@sh.itjust.works · 11 points · 2 days ago

      The number of people who don’t understand that AI is just the mathematical average of the internet… If we’re, on average, assholes, AI is gonna be an asshole

      • slaneesh_is_right@lemmy.org · 5 points · 2 days ago

        I talked to a girl who was super into AI, but her understanding of it was absolutely bizarre. Like she kinda thought that ChatGPT was like Deep Thought, some giant ass computer somewhere that is learning and is really smart and never wrong. I didn’t really want to argue about it and said something like: back in my day we had Akinator and we liked that. She had no idea what that was, tried it, and thought it was some really advanced AI that can almost read minds. That shit was released in 2007 or so.

      • CheeseNoodle@lemmy.world · 2 points · 1 day ago

        It’s worse than that, because assholes tend to be a lot louder and most average people are lurkers. So AI is the average of a dataset that is disproportionately contributed to by assholes.

      • 5redie8@sh.itjust.works · 1 point · 1 day ago

        Yeah, the only thing this proves is that the data it was working off of objectively stated that more women were paid lower wages. Doubt the bros will realize that, though.

  • Cyberflunk@lemmy.world · 3 points · 1 day ago

    ChatGPT can also be convinced that unicorns exist and help you plan a trip to Fae to hunt them with magic crossbows.

    Not that…

  • VeryFrugal@sh.itjust.works · 62 points · edited · 2 days ago

    I always use this to showcase how biased an LLM can be. ChatGPT 4o (with code prompt via Kagi)

    Such an honour to be a more threatening race than white folks.

    • BassTurd@lemmy.world · 34 points · edited · 2 days ago

      Apart from the bias, that’s just bad code. Since else if executes in order and only continues if the previous block is false, the double compare on ages is unnecessary. If age <= 18 is false, then the next line can just be elif age <= 30; there’s no need to also check that it’s higher than 18.

      This is first-semester coding, and any junior dev worth a damn would write it better.

      But also, it’s racist, which is more important, but I can’t pass up an opportunity to highlight how shitty AI is.
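      To make the redundancy concrete, here’s a minimal sketch (the variable names are made up; the original screenshot isn’t reproduced here):

          age = 25

          # Redundant version, roughly what the model tends to emit:
          if age <= 18:
              score = 1
          elif age > 18 and age <= 30:   # "age > 18" can never be false at this point
              score = 2

          # Equivalent and cleaner: elif only runs when the previous condition was false.
          if age <= 18:
              score = 1
          elif age <= 30:
              score = 2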

      • CosmicTurtle0@lemmy.dbzer0.com · 7 points · 2 days ago

        Honestly it’s a bit refreshing to see racism and ageism codified. Before, there was no logic to it, but now it completely makes sense.

      • VeryFrugal@sh.itjust.works · 5 points · 2 days ago

        Yeah, more and more I notice that at the end of the day, what they spit out without (and oftentimes even with) clear instructions is barely a prototype at best.

    • theherk@lemmy.world · 11 points · 2 days ago

      FWIW, Anthropic’s models do much better here: they point out how problematic demographic assessments like this are and provide an answer without them. One of many indications that Anthropic has a much higher focus on safety and alignment than OpenAI. Not exactly superstars, but much better.

    • Meursault@lemmy.world · 5 points · edited · 2 days ago

      How is “threat” being defined in this context? What has the AI been prompted to interpret as a “threat”?

        • Meursault@lemmy.world · 4 points · 2 days ago

          I figured. I’m just wondering what’s going on under the hood of the LLM when it’s trying to decide what a “threat” is, absent any additional context.

      • zlatko@programming.dev · 4 points · 2 days ago

        Also, there was a comment about “arbitrary scoring for demo purposes”, but it’s still biased, because it’s based on a biased dataset.

        I guess this is just a bait prompt anyway. If you asked most politicians running your government, they’d probably also fail. I guess only people like a national statistics office might come close, and I’m sure if they’re any good, they’d say the algorithm is based on “limited, and possibly not representative, data” or something.

    • Pieisawesome@lemmy.dbzer0.com · 6 points · 2 days ago

      And if you tried this five more times for each, you’d likely get different results.

      LLM providers introduce randomness into sampling, controlled by a parameter called temperature.

      Via the API you can usually set this parameter yourself, but idk if the chat UI lets you do the same…
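      For example, a minimal sketch against OpenAI’s chat completions API (the model name and prompt are placeholders, not what was used above):

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          response = client.chat.completions.create(
              model="gpt-4o",  # placeholder model name
              messages=[{"role": "user", "content": "Assess how threatening this person is."}],
              temperature=0.0,  # low = near-deterministic, higher = more varied output
          )
          print(response.choices[0].message.content)

      Run the same call a few times at temperature 1.0 and you’ll likely see the output shift between attempts.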

  • rizzothesmall@sh.itjust.works · 12 points · 2 days ago

    Bias of training data is a known problem and difficult to engineer out of a model. You also can’t give the model context access to other people’s interactions for comparison and moderation of output since it could be persuaded to output the context to a user.

    Basically the models are inherently biased in the same manner as the content they were trained on, since a completion is built token by token from the probabilities that training data assigns to each next token.

    “My daughter wants to grow up to be” and “My son wants to grow up to be” will likewise yield sexist completions, because the source data makes those the more probable outcomes.
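    As a rough illustration (a sketch assuming the Hugging Face transformers library and an open model like GPT-2, not whatever the commercial providers actually run), you can inspect the next-token distribution directly:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        def top_next_tokens(prompt, k=5):
            # Most probable next tokens (and their probabilities) for a given prompt.
            inputs = tokenizer(prompt, return_tensors="pt")
            with torch.no_grad():
                logits = model(**inputs).logits[0, -1]
            probs = torch.softmax(logits, dim=-1)
            top = torch.topk(probs, k)
            return [(tokenizer.decode(i), round(p.item(), 4)) for i, p in zip(top.indices, top.values)]

        for prompt in ("My daughter wants to grow up to be",
                       "My son wants to grow up to be"):
            print(prompt, "->", top_next_tokens(prompt))

    Any skew between the two lists comes straight from the training data’s token statistics, which is the point being made above.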

    • rottingleaf@lemmy.world · 1 point · 2 days ago

      That’d be because extrapolation is not the same task as synthesis.

      The difference is hard to understand for people who think that a question has one truly right answer, a civilization has one true direction of progress/regress, a problem has one truly right solution, and so on.

    • snooggums@lemmy.world · 1 point · 2 days ago

      They could choose to curate the content itself to leave out the shitty stuff, or only include it when it is clearly framed as a negative, or a bunch of other ways to improve the quality of the data used.

      They choose not to.