• lad@programming.dev · 5 days ago

    That might be okay if what said GPT produces were reliable and reproducible, not to mention backed by valid reasoning. It’s just not there, far from it.

    • gens@programming.dev · 5 days ago

      It’s not just far from it. LLMs inherently make stuff up (a.k.a. hallucinate). There is no cure for that.

      There are some (non-LLM, but still neural-network) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have some chance of being wrong.

      • Tja@programming.dev · 5 days ago

        Not only there’s a cure, it’s already available: most models right now provide sources for their claims. Of course this requires the user the gargantuan effort of clicking on a link, so most don’t and complain instead.