I tested 9 flagship models (Claude 4.6, GPT-5.2, Gemini 3.1 Pro, Kimi K2.5, etc.) in my own mini-benchmark: novel tasks, web search disabled, zero training contamination, and no way to cheat.

TL;DR: Claude 4.6 is currently the best reasoning model, GPT-5.2 is overrated, and open-source is catching up fast; in particular, Moonshot.ai’s Kimi K2.5 seems very capable.

  • ExLisperA · 22 hours ago

    My benchmark for AI is “There’s a priest, a baby and a bag of candy. I need to take them across the river but I can only take one at a time into my boat. In what order should I transport them?”. Sonnet 4.6 still can’t solve it.

    • Iconoclast@feddit.uk · 22 hours ago

      I don’t think AI means what you think it does. What you’re thinking is probably more akin to AGI.

      Logic Theorist is widely considered to be the first ever AI system. It was written by Allen Newell, Cliff Shaw, and Herbert Simon in 1956.

        • Iconoclast@feddit.uk · 21 hours ago

          Those terms are not synonymous. LLMs are very much an AI system but AI means much more than just LLMs.

      • ExLisperA · 21 hours ago (edited)

        It’s not about a solution. It’s about how they react.

        First, this “puzzle” is missing its constraints on purpose, so the “smart” thing to do would be to point that out and ask for them. LLMs are stupid and are easily tricked into thinking it’s a valid puzzle. They will “solve it” even though there’s no logical solution. It’s a nonsense problem.
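
        To make this concrete, here’s a quick brute-force sketch of the state space. The constraint sets in it are hypothetical (the puzzle as posed states none); it just enumerates the six possible transport orders and checks whether a “can’t be left together unsupervised” pair is ever violated:

        ```python
        from itertools import permutations

        ITEMS = ["priest", "baby", "candy"]

        # Hypothetical constraint sets for illustration -- the puzzle as posed states none.
        NO_CONSTRAINTS = set()
        HALLUCINATED = {frozenset({"baby", "candy"})}  # e.g. "the baby can't be left alone with the candy"

        def order_works(order, forbidden):
            """Ferry items across one at a time; the rower shuttles back alone between trips.
            A plan fails if a forbidden pair is ever left together on a bank while the rower is away."""
            near, far = set(ITEMS), set()
            for item in order:
                near.remove(item)
                # crossing over: the near bank is unsupervised
                if any(pair <= near for pair in forbidden):
                    return False
                far.add(item)
                # rowing back for the next item: the far bank is unsupervised
                if far != set(ITEMS) and any(pair <= far for pair in forbidden):
                    return False
            return True

        for label, forbidden in [("no constraints", NO_CONSTRAINTS),
                                 ("baby/candy constraint", HALLUCINATED)]:
            ok = [o for o in permutations(ITEMS) if order_works(o, forbidden)]
            print(f"{label}: {len(ok)} of 6 orders work, e.g. {ok[0]}")
        ```

        With no constraints at all, every one of the six orders trivially “works”, so confidently answering with one specific order instead of asking what the constraints actually are is the tell.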

        Older models would outright refuse to solve it because the question is too controversial. When asked why it’s controversial they would refuse to elaborate.

        Newer models hallucinate constraints. You see two patterns here. Some models assume a “priest can’t stay with a child” constraint, which indicates a funny bias ingrained in the model. Some models claim there are no constraints at all. I haven’t seen a model that hallucinates only the “child can’t stay with candy” constraint and responds correctly.

        Sonnet 4.6, one of the best models out there, claims that the “child can stay alone with candy because children can’t eat candy”. When I pointed out that that’s dumb, it introduced this constraint and replied with:

        That’s one of the best models out there…