• Pennomi@lemmy.world · 60 points · 9 months ago

      I suspect it’s more like “use the tool correctly or it will give bad results”.

      Like, LLMs are a marvel of engineering! But they’re also completely unreliable for use cases where you need consistent, logical results. So maybe we shouldn’t use them in places where we need consistent, logical results. That makes them unsafe for most business use.

      • outhouseperilous@lemmy.dbzer0.com (banned from community) · 10 points · 9 months ago

        There are like twelve genuine use cases, and because of the cult of the LLM bro, nine of those are negated by weird blind faith. Two more are crimes against humanity.

      • StarvingMartist@sh.itjust.works (OP) · 16 points · 9 months ago

        I’m not sure of his intent, but I follow this guy on Bluesky. He’s super pro open source and worked on a bunch of Google projects back in the day, so I think if anything he might just be making fun of vibe coders.

    • haungack@lemmy.dbzer0.com · 1 point · 8 months ago

      Likewise, instruct the AI to first break the word down into letters, one per line, and it gets the count right more often. I think that’s the point the post is trying to make.

      The letter-counting issue is actually a fundamental problem of whole-word or subword tokenization that’s had an obvious solution since ~2016, and I don’t get why commercial AI vendors won’t implement it. Probably because it’s a lot of training-code complexity (but not much compute) for solving a very small problem.
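      A toy illustration of why tokenization hides letters and why the spell-it-out trick helps (the subword split below is hypothetical, not any particular model’s tokenizer):

```python
# Hypothetical subword split of "strawberry" (real tokenizers vary).
tokens = ["straw", "berry"]

# The model receives IDs for these chunks, never individual characters,
# so "how many r's?" asks about information it cannot directly see.

# Spelling the word out one letter per line forces each letter into its
# own token; at the character level the count is trivial:
letters = list("".join(tokens))
print(letters.count("r"))  # counting characters in code gives 3
```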

  • maria [she/her]@lemmy.blahaj.zone · 22 points · edited · 9 months ago

    i kno peeps will get mad but imma comment anyway!!! u cnt stop me!

    i think dis is kindsa real-… *vine boom* 💥🤨🤨🤨

    dis funi got so many - like - sub-layers 🧅

    observe — — — the layers — — — (if u care)
    • obvious reference to peeps dismissin llms for not bein able to answer spelling questions
    • funi fake-thing, cuz programmin is actulli preddi useful - one has to kno whad its gud for tho
    • secret funi: python is suuuuupr good at countin lettrs "strawberry".count("r") while current llms r not (largely due to their tokenization step, making them literally unable to count the letters, but they can still count occurrences of words)
    • the funi could also be seen in a way, where the poster got into coding via llms, then realized thad this “coding” is actually not as easy as he thought…
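    For what it’s worth, the one-liner from the list above really is that simple, and the word-vs-letter counting contrast looks like this:

```python
text = "a strawberry is a strawberry"
print(text.split().count("strawberry"))  # counting word occurrences: 2
print("strawberry".count("r"))           # counting letters: 3
```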

    anyway - i believe thad using a rule-based query-interpretation system (like siri or googles query-specific UI) with llms as a fallback gives much improved human-input-handlin-systems.
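    A minimal sketch of that rules-first, LLM-fallback idea, assuming a regex front end with a stubbed LLM call (all names here are made up for illustration):

```python
import re

# Deterministic rules are tried first; anything unmatched falls through
# to the LLM (stubbed here as a plain function).
RULES = [
    (re.compile(r"how many (\w)'?s? in (\w+)", re.IGNORECASE),
     lambda m: str(m.group(2).lower().count(m.group(1).lower()))),
]

def handle(query, llm_fallback=lambda q: "<llm answer>"):
    for pattern, action in RULES:
        m = pattern.search(query)
        if m:
            return action(m)       # rule-based path: exact and cheap
    return llm_fallback(query)     # fallback path: flexible but fuzzy

print(handle("How many r's in strawberry?"))  # handled by the rule
print(handle("Tell me a joke"))               # handed to the fallback
```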

    besides thad - i dun see much use quite yet

    (i hope shareholders-chan is fine with thad >~< )