• lennybird@lemmy.world
    2 months ago

    Remember IBM’s Watson? I do think an AI double-checking and advising audits of patient charts in a hospital or physician’s office could be hugely beneficial. Medical errors account for many outright deaths, let alone other fuckups.

    I know this isn’t what Oz is proposing, which sounds very dumb.

    • FatCrab@slrpnk.net
      2 months ago

      Computer-assisted diagnosis is already a ubiquitous thing in medicine; it just doesn’t have the LLM hype bubble behind it, even though it very much incorporates AI solutions. Nevertheless, effectively all implementations never diagnose, and instead make suggestions to medical practitioners. The biggest hurdle to uptake is usually clearly and quickly giving users the underlying cause for the suggestion (transparency and interpretability are a longstanding field of research here).

      • lennybird@lemmy.world
        2 months ago

        Do you know of a specific software that double-checks charting by physicians and nurses, as well as orders for labs and procedures relative to patient symptoms or lab values, etc., and returns some sort of probabilistic analysis of their ailments, or identifies potential medical decision-making errors? Genuine question, because in my experience in the industry I haven’t, but I also haven’t worked with Epic software specifically.

        • FatCrab@slrpnk.net
          2 months ago

          I used to work for Philips, and that is exactly the sort of thing the patient care informatics businesses (and the other informatics businesses, really) were working on for quite a while. The biggest hold-up when I was there was usually a combination of two things: the regulatory process (very important) and mercurial business leadership (Philips has one of the worst and most dysfunctional management cultures, from the C-suite all the way down, that I’ve ever seen).

          • lennybird@lemmy.world
            2 months ago

            That’s really interesting, thanks. I’m curious how long ago this was, as neither I nor my partner (who works on the clinical side of healthcare) has seen anything deployed, at least at the facilities we’ve been at.

    • CharlesDarwin@lemmy.world
      2 months ago

      I thought there were quite a few problems with Watson, but, TBF, I did not follow it closely.

      However, I do like the idea of using LLMs as another pair of eyes in the system, if you will. But only as another tool, not a crutch, and certainly not making any final calls. LLMs should be treated exactly like you’d treat a spelling checker or a grammar checker: if it’s pointing something out, perhaps take a closer look. But to completely cede your understanding of something (say, spelling or grammar, or in this case, medicine that people take years to get certified in) to a tool is rather foolish.

      • lennybird@lemmy.world
        2 months ago

        I couldn’t have said it better myself and completely agree. Use it as an assistant, just not as the main driver or final decision-maker.