• psmgx@lemmy.world

    “Sorry, we’ll format correctly in JSON this time.”

    [Proceeds to shit out the exact same garbage output]

  • brucethemoose@lemmy.world

    Funny thing is, correct JSON is easy to “force” with grammar-based sampling (aka the model literally can’t output invalid JSON) plus completion prompting (aka start the answer for it and let it fill in what’s left, a feature OpenAI has since deprecated), but LLM UIs/corporate APIs are kinda shit, so no one does that…
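
    Here’s a minimal sketch of both tricks using llama-cpp-python’s grammar support. The model path and the trimmed-down GBNF grammar are placeholders, not a recipe for any particular setup:

    ```python
    # A minimal sketch, assuming llama-cpp-python and a local GGUF model;
    # the model path and this cut-down JSON grammar are placeholders.
    from llama_cpp import Llama, LlamaGrammar

    # Trimmed version of llama.cpp's json.gbnf: sampling can only ever
    # pick tokens that keep the output valid JSON, so garbage is impossible.
    JSON_GBNF = r"""
    root   ::= object
    value  ::= object | array | string | number | ("true" | "false" | "null") ws
    object ::= "{" ws (string ":" ws value ("," ws string ":" ws value)*)? "}" ws
    array  ::= "[" ws (value ("," ws value)*)? "]" ws
    string ::= "\"" ([^"\\] | "\\" ["\\/bfnrt])* "\"" ws
    number ::= "-"? [0-9]+ ("." [0-9]+)? ws
    ws     ::= [ \t\n]*
    """

    llm = Llama(model_path="./model.gguf")  # placeholder path
    grammar = LlamaGrammar.from_string(JSON_GBNF)

    # Completion prompting: write the prompt as a document to continue,
    # ending right where the answer starts, instead of asking a chat
    # model to please-pretty-please reply in JSON.
    prompt = 'Extract name and age as JSON.\nInput: "Bob, 34"\nOutput: '
    out = llm(prompt, grammar=grammar, max_tokens=64, temperature=0)
    print(out["choices"][0]["text"])  # e.g. {"name": "Bob", "age": 34}
    ```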

    A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models where smaller, dumber, (gasp) prompt-cached open-weight ones could get the job done, and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”

  • borth@sh.itjust.works

    The AI, probably: Well, I might have made up responses before, but now that “make up responses” is in the prompt, I will definitely make them up.