• Xander707@lemmy.world · 1 month ago

    This is an asinine position to take because AI will never, ever make these decisions in a vacuum, and it’s really important in this new age of AI that people fully understand that.

    It could be the case that an accurate, informed AI would do a much better job of diagnosing patients and recommending the best surgeries. However, if there’s a profit incentive and a business involved, you can be sure that AI will be mangled through the appropriate IT, lobbying, and congressional avenues to make sure it modifies its decision-making in the interests of the for-profit parties.

    • Corkyskog@sh.itjust.works · 1 month ago

      They will just add a simple flow chart after the AI step: if the AI denies the thing, accept the decision; if the AI approves the thing, send it to a human to deny.
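
      A minimal sketch of that flow chart in Python, with hypothetical names, just to make the one-way ratchet explicit:

      ```python
      # Hypothetical sketch of the satirical "review" flow described above:
      # the AI's output only ever stands when it denies the claim.

      def route_claim(ai_approved: bool) -> str:
          """Route a claim so a favorable AI decision never survives on its own."""
          if not ai_approved:
              # AI denied: accept the decision as-is, no human in the loop.
              return "denied (AI decision accepted)"
          # AI approved: escalate to a human reviewer, who can still deny.
          return "escalated to human reviewer (may still be denied)"

      print(route_claim(ai_approved=False))  # denied (AI decision accepted)
      print(route_claim(ai_approved=True))   # escalated to human reviewer (may still be denied)
      ```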

  • Whats_your_reasoning@lemmy.world · 1 month ago (edited)

    The pilot program, which starts on Jan. 1 and will run through Dec. 31, is being implemented in six states — New Jersey, Ohio, Oklahoma, Texas, Arizona and Washington.

    Saved a click. The headline highlights New Jersey because the site is nj.com, but more states than just NJ will be subject to this crap.

  • NorthoftheBorder@lemmy.ca · 1 month ago

    I read one of his books and it was full of ‘facts’ with zero citations. Literally zero. Closer to charlatan than scientist.

  • lennybird@lemmy.world · 1 month ago

    Remember IBM’s Dr. Watson? I do think an AI double-checking patient charts and advising audits in a hospital or physician’s office could be hugely beneficial. Medical errors account for many outright deaths, let alone other fuckups.

    I know this isn’t what Oz is proposing, which sounds very dumb.

    • FatCrab@slrpnk.net · 1 month ago

      Computer-assisted diagnosis is already a ubiquitous thing in medicine; it just doesn’t have the LLM hype bubble behind it, even though it very much incorporates AI solutions. Nevertheless, effectively no implementation actually diagnoses; they all make suggestions to medical practitioners. The biggest hurdle to uptake is usually showing users, clearly and quickly, the underlying cause for a suggestion (transparency and interpretability are a longstanding field of research here).
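
      A minimal sketch of that suggest-don’t-diagnose pattern. The single rule, the names, and the confidence value here are all hypothetical (the 3.5–5.0 mmol/L potassium reference range is a common one, but real systems encode far more than this):

      ```python
      from dataclasses import dataclass

      @dataclass
      class Suggestion:
          """A decision-support output: never a diagnosis, always with its rationale."""
          finding: str       # what the system noticed
          rationale: str     # the underlying cause, surfaced to the practitioner
          confidence: float  # how strongly the rule fired, 0.0-1.0

      def check_chart(potassium_mmol_l: float) -> list[Suggestion]:
          """Hypothetical rule: flag serum potassium above the ~5.0 mmol/L reference limit."""
          suggestions = []
          if potassium_mmol_l > 5.0:
              suggestions.append(Suggestion(
                  finding="Possible hyperkalemia",
                  rationale=f"Serum potassium {potassium_mmol_l} mmol/L is above the 5.0 mmol/L reference limit",
                  confidence=0.8,
              ))
          return suggestions

      # The practitioner sees the finding *and* why it fired, and makes the call.
      for s in check_chart(potassium_mmol_l=6.1):
          print(f"{s.finding}: {s.rationale} (confidence {s.confidence})")
      ```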

      • lennybird@lemmy.world · 1 month ago

        Do you know of any specific software that double-checks charting by physicians and nurses, and orders for labs or procedures relative to patient symptoms or lab values, etc., and returns some sort of probabilistic analysis of their ailments, or identifies potential errors in medical decision-making? Genuine question, because at least in my experience in the industry I haven’t, but I also haven’t worked with Epic software specifically.

        • FatCrab@slrpnk.net · 1 month ago

          I used to work for Philips, and that is exactly a lot of what the patient care informatics businesses (and the other informatics businesses, really) were working on for quite a while. The biggest holdup when I was there was usually a combination of two things: the regulatory process (very important) and mercurial business leadership (Philips has one of the worst and most dysfunctional management cultures, from the C-suite all the way down, that I’ve ever seen).

          • lennybird@lemmy.world · 1 month ago

            That’s really interesting, thanks. I’m curious how long ago this was, as neither I nor my partner (who works on the clinical side of healthcare) has seen anything deployed, at least at the facilities we’ve been at.

    • CharlesDarwin@lemmy.world · 1 month ago

      I thought there were quite a few problems with Watson, but, TBF, I did not follow it closely.

      However, I do like the idea of using LLM(s) as another pair of eyes in the system, if you will. But only as another tool, not a crutch, and certainly not making any final calls. LLMs should be treated exactly like you’d treat a spelling checker or a grammar checker - if it’s pointing something out, take a closer look, perhaps. But to completely cede your understanding of something (say, spelling or grammar, or in this case, medicine that people take years to get certified in) to a tool is rather foolish.
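
      A minimal sketch of that grammar-checker framing, where ask_llm is a placeholder for whatever model call you’d actually use, and the tool’s only power is to raise flags for a human:

      ```python
      # Hypothetical sketch: the model can only raise flags; a human reviews
      # every one and makes the final call, like acting on a spell checker's
      # squiggly underline rather than letting it auto-correct.

      def ask_llm(chart_text: str) -> list[str]:
          """Placeholder for a real model call; returns possible concerns."""
          return ["Order on chart may conflict with a documented allergy"]

      def review_chart(chart_text: str) -> list[str]:
          # The tool's entire authority: produce flags for a closer look.
          # Nothing here edits the chart or finalizes a decision.
          return [f"FLAG (needs human review): {f}" for f in ask_llm(chart_text)]

      for flag in review_chart("...patient chart text..."):
          print(flag)
      ```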

      • lennybird@lemmy.world · 1 month ago

        I couldn’t have said it better myself and completely agree. Use it as an assistant, just not as the main driver or final decision-maker.