• Emerald@lemmy.world · 4 days ago

    Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?

  • Darkard@lemmy.world · 5 days ago

    All these chatbots are a massive amalgamation of the internet, which, as we all know, is full of absolute dog shit information given as fact, as well as humorously incorrect information given in jest.

    To use one to give advice on something as important as drug abuse recovery is simply insanity.

  • TimewornTraveler@lemm.ee · 4 days ago

    So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it one way or the other. Gotta stay skeptical and all that.

    • Joeffect@lemmy.world · 4 days ago

      It’s not AI… it’s your predictive text on steroids. So yeah, believe it. If you understand it’s not doing anything more than that, you can understand why and how it makes stuff up.
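      A toy sketch of that idea (nothing like a real transformer, just the concept, with a made-up mini-corpus): record which word tends to follow which in some text, then generate by repeatedly picking a word that has followed the current one. The output comes out fluent-sounding, but nothing in the loop ever checks whether it’s true.

      ```python
      import random
      from collections import defaultdict

      # Toy "predictive text": count which word follows which in a scrap of text,
      # then generate by repeatedly sampling a word that has followed the current one.
      # Real LLMs do this over tokens with a neural network, but the loop is the same idea.
      corpus = ("recovery is hard . meth is dangerous . "
                "a little treat helps recovery . meth is a little treat .").split()

      next_words = defaultdict(list)
      for word, following in zip(corpus, corpus[1:]):
          next_words[word].append(following)

      def generate(start, length=8):
          out = [start]
          for _ in range(length):
              candidates = next_words.get(out[-1])
              if not candidates:
                  break
              out.append(random.choice(candidates))  # fluent, not fact-checked
          return " ".join(out)

      print(generate("meth"))  # might print: "meth is a little treat helps recovery ."
      ```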

  • Gorilladrums@lemmy.world · 5 days ago

    LLM chatbots were never designed to give life advice. People have this false perception that these tools are some kind of magical crystal ball with all the right answers to everything, and they simply don’t have them.

    These models cannot think or reason. The best they can do is predict what you want based on the data they’ve been trained on and the parameters they’ve been given. You can think of their output as “targeted randomness” (rough sketch below), which is why their results sound close or convincing but are never quite right.

    That’s because these models were never designed to be used like this. They were meant to be tools that aid creativity. They can help someone brainstorm ideas for projects, kill time as entertainment, explain simple concepts, or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
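    A rough sketch of what “targeted randomness” looks like (the scores here are invented for illustration, not taken from any real model): the model assigns a score to every candidate next token, and the reply is a weighted random draw from those scores. Most draws look sensible, but nothing guarantees the sensible one.

    ```python
    import math
    import random

    # Hypothetical scores ("logits") a model might assign to candidate next words;
    # the numbers are made up purely to illustrate the sampling step.
    logits = {"rest": 2.0, "walk": 1.5, "snack": 1.2, "meth": 0.4}

    def sample_next(logits, temperature=1.0):
        # Softmax turns scores into probabilities; the next word is a weighted random draw.
        words = list(logits)
        weights = [math.exp(score / temperature) for score in logits.values()]
        return random.choices(words, weights=weights)[0]

    # The draw is "targeted" by the scores but still random: the unlikely option
    # still comes up now and then, and nothing checks whether it is good advice.
    print([sample_next(logits) for _ in range(10)])
    ```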

  • pixxelkick@lemmy.world · 5 days ago

    Any time an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.

    If I try, not even that hard, I can get GPT to state that Hitler was a cool guy who was doing the right thing.

    ChatGPT isn’t anything specific other than a token predictor; you can literally make it say anything you want if you know how. It’s not hard.

    So if you write an article about how “GPT said this” or “GPT said that,” you’d better include the full context, or I’ll assume you’re 100% bullshit.

  • ivanafterall ☑️@lemmy.world · 5 days ago

    The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?