The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that remain difficult for AIs such as LLMs, “reasoning” models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. ARC-AGI-3 represents hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
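
To make this concrete, here is a minimal sketch of the interaction loop such an agent faces. The environment interface below (reset/step/score) is illustrative shorthand, not the actual arcprize.org agents API:

```python
import random

ACTIONS = ["up", "down", "left", "right", "click"]  # small discrete action set

def play(env, max_turns=1000):
    """Explore one environment given no rules, goals, or instructions."""
    frame = env.reset()                 # raw grid observation, nothing else
    for _ in range(max_turns):
        action = choose_action(frame)   # the agent's own policy
        frame, level_complete = env.step(action)
        if level_complete:              # even "winning" must be discovered
            print("Level solved; a harder level begins.")
    return env.score()

def choose_action(frame):
    # A capable agent would explore systematically and carry forward what
    # it learns; uniform random play is the naive baseline.
    return random.choice(ACTIONS)
```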

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard (chart: logarithmic cost on the horizontal axis; note that the vertical scale runs from 0% to 3%. If human scores were included, they would be at 100%, at the cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needed to pass a minimum “easy for humans” threshold. Each environment was attempted by 10 people, and only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
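
Expressed as code, the inclusion rule is a simple filter (a sketch of the rule above; the function name is ours):

```python
def passes_human_threshold(first_attempt_solves: list[bool]) -> bool:
    """One entry per participant (10 total): True only if that person
    completed every level on their very first exposure."""
    return sum(first_attempt_solves) >= 2

# An environment fully solved by at least 2 of 10 first-time players is eligible.
assert passes_human_threshold([True, True] + [False] * 8)
```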

    • Janx@piefed.social · 20 points · 12 days ago

      Grok isn’t designed to solve problems. It’s designed to create sexually explicit images of children for Republicans…

    • M137@lemmy.world · 5 points · 12 days ago

      Well, yeah, it’s very good at making weird porn clips though. If anyone wants some very odd entertainment, go to /gif/ on 4chan and look at the recurring “/gg/ grok gens” threads. There’s everything from actually impressive and hot videos to the weirdest and most fucked up shit ever; it’s weirdly fun. Never seen anything really bad there, like CP etc., so I can comfortably recommend it for the lols.

  • RustyShackleford@piefed.social · 72 points · 13 days ago

    As a psychiatrist, I have a theory about what’s missing in AI. First, it lacks childhood dependency and attachments. Second, it struggles to overcome repeated pain and suffering. Third, it lacks regular eating and restroom breaks. Fourth, it struggles to accept loss in everyday situations. Finally, it lacks the concept of our inevitable death. Without these nagging memories and concepts, machines will simply revert to the simpler concepts we use them for in our recent times, such as stealing cryptocurrency. After all, we live in a world run by capitalism, so it’s only logical. ¯\_(ツ)_/¯

    • CosmicTurtle0 [he/him]@lemmy.dbzer0.com · 116 points · 13 days ago

      As a technologist, I have to remind everyone that AI is not intelligence. It’s a word prediction/statistical machine. It’s guessing at a surprisingly good rate what words follow the words before it.

      It’s math. All the way down.

      We as humans have simply taken these words and have said that it is “intelligence”.
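
      In toy form, that prediction loop looks something like this (a sketch, with a stand-in for the trained network):

      ```python
      import math, random

      VOCAB = ["the", "cat", "sat", "on", "mat", "."]

      def softmax(logits):
          exps = [math.exp(x) for x in logits]
          return [e / sum(exps) for e in exps]

      def fake_model(context):
          # Stand-in for a trained network: one score per vocabulary word.
          return [0.1, 1.5, 0.7, 0.4, 1.1, 0.2]

      def next_word(context):
          probs = softmax(fake_model(context))   # distribution over VOCAB
          return random.choices(VOCAB, weights=probs)[0]

      sentence = ["the"]
      for _ in range(5):
          sentence.append(next_word(sentence))   # guess, append, repeat
      print(" ".join(sentence))
      ```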

      • unpossum@sh.itjust.works · 60 points · 13 days ago

        As another technologist, I have to remind everyone that unless you subscribe to some rather fringe theories, humans are also based on standard physics.

        Which is math. All the way down.

        • HereIAm@lemmy.world · 28 points · 13 days ago

          I agree, the maths argument is not a good one. While a neural network is perhaps closer to what a brain is than just a CPU (or a clock, as the brain was compared to in the olden days), it would be a very big mistake to equate the two.

        • NewOldGuard@lemmy.ml · 25 points · 13 days ago

          As a mathematician, I should note that the mathematics of physics aren’t laws of the universe; they are models of the laws of the universe. They’re useful for understanding and predicting, but are purely descriptive, not prescriptive. And as they say, all models are wrong, but some are useful.

          • Aceticon@lemmy.dbzer0.com · 14 points · 13 days ago

            As a random person on the Internet I don’t actually have anything to add but felt it would be nice to jump in.

          • SorteKanin@feddit.dk · 5 points · 13 days ago

            That’s true, but that doesn’t contradict the above comment. Unless you believe in something like a spirit or soul, you must concede that human intelligence ultimately arises from physical matter (whatever your model of physics is). From what we know of science right now, there are no direct reasons for thinking that true intelligence or even consciousness is limited to biological organisms based on carbon and could not arise in silicon.

            • NewOldGuard@lemmy.ml · 3 points · 10 days ago · edited

              My point was more that the argument “humans can be modeled with math and physics, and LLMs are also based on math, therefore LLMs are or could become intelligent, conscious things” is nonsense. These are statistical prediction algorithms; they work nothing like a nervous system or a conscious living being. They can be impressive in narrow use cases, like all ML, but they cannot actually learn or perform novel tasks. I don’t think this rules out the possibility of creating some sort of true artificial intelligence, but the current approaches are structurally unable to ever get there, and the conversation above makes really weak points to the contrary. But this was too many words, so I figured my other approach was better for brevity lol

              Edit: “AI” slop bros stay mad lmao

              • SorteKanin@feddit.dk · 4 points · 12 days ago

                I generally agree, but I kind of wonder whether something like an advanced LLM has a place as a component of an artificial “brain”. We have a language-focused area in our brain, but we have lots of other components of the brain that do all kinds of other things too. Perhaps we’re “just” missing those other things.

        • silly_goose@lemmy.today · 3 points · 12 days ago

          As a philosopher, I have to remind you that humans invented math and physics to model reality.

          Humans are not based on physics or math. That would be like saying the earth is based on a globe.

      • Iconoclast@feddit.uk · 13 points · 13 days ago

        A few of the countless dictionary definitions of intelligence:

        • The ability to acquire, understand, and use knowledge.
        • The ability to learn or understand or to deal with new or trying situations
        • The ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests)
        • The act of understanding
        • The ability to learn, understand, and make judgments or have opinions that are based on reason
        • It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

        There isn’t even consensus on what intelligence actually means, yet here you are declaring “AI is not intelligence”, whatever that even means.

        Artificial Intelligence is a term in computer science that describes a system able to perform tasks that would normally require human intelligence. An Atari chess engine is an intelligent system. It’s narrowly intelligent, as opposed to humans, who are generally intelligent, but it’s intelligent nevertheless.

        • partofthevoice@lemmy.zip · 4 points · 13 days ago

          You’re more precisely right, but also the aforementioned person is not wrong. Intelligence is a broad term as we’re discovering. Truth is, we don’t have the language to effectively communicate about AGI in the ways we’d like to. We don’t know if consciousness is a prerequisite to truly generalizable intelligence, we don’t even know what consciousness is, we don’t know what dimensions truly matter here. Is intelligence a dimension of consciousness, meaning you can have some intelligence without being conscious? What’s the limit, why? … We need some discovery around the taxonomy/topology of consciousness.

      • Silver Needle@lemmy.ca · 12 points · 13 days ago

        As someone who knows a thing or two about biology I think LLMs strip away >90% of what makes animals think.

      • RustyShackleford@piefed.social · 7 points · 13 days ago

        I was arguing against it being an intelligence because it lacked the suffering and past experiences that define intelligence. Without pain and suffering, what are we? Not for it being intelligent.

        • SorteKanin@feddit.dk · 5 points · 13 days ago

          I think you’re conflating intelligence and consciousness. Pain and suffering require consciousness, but intelligence does not imply pain or suffering or happiness. LLMs are already “intelligent” to a certain degree in some aspects, though not generally intelligent like humans. But there is no reason to believe that you couldn’t have a generally intelligent artificial agent that lacks consciousness and thus can feel no pain or suffering.

    • sp3ctr4l@lemmy.dbzer0.com · 13 points · 13 days ago · edited

      Here is a way of describing what I see as ‘the problem’:

      An LLM cannot forget things in its base training data set.

      Its permanent memory… is totally permanent.

      And this memory has a bunch of wrong ideas, a bunch of nonsensical associations, a bunch of false facts, a bunch of meaningless gibberish.

      It has no way of evaluating its own knowledge set for consistency, coherence, and stability.

      It literally cannot learn and grow, because it cannot realize why it made mistakes, and it cannot permanently discard or amend concepts that are incoherent, or faulty ways of reasoning about (associating) things.

      Seriously, ask an LLM a trick question, then tell it it was wrong, explain the correct answer, then ask it to determine why it was wrong.

      Then give it another similar category of trick question, but that is specifically different, repeat.

      The closer you try to get it toward reworking a flawed fundamental axiom it holds, the closer it gets to responding in totally paradoxical, illogical gibberish, or getting stuck in some kind of repetitive loop.

      … Learning is as much building new ideas and experiences, as it is reevaluating your old ideas and experiences, and discarding concepts that are wrong or insufficient.

      Biological brains have neuroplasticity.

      So far, silicon ones do not.
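
      In PyTorch terms the point looks roughly like this (a sketch assuming a standard inference setup):

      ```python
      import torch

      def generate(model, tokens):
          model.eval()                    # inference mode, not training
          for p in model.parameters():
              p.requires_grad_(False)     # the base "memory" is read-only
          with torch.no_grad():           # no gradients, so no learning
              return model(tokens)        # corrections live only in `tokens`
                                          # and vanish when the context ends
      ```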

    • partofthevoice@lemmy.zip · 7 points · 13 days ago

      it lacks childhood dependency and attachments.

      Isn’t general intelligence, or more broadly “consciousness,” a prerequisite to that? How would you make an unconscious machine more conscious merely by making mock scenarios that conscious beings necessarily experience?

      it struggles to overcome repeated pain and suffering

      That’s getting into phenomenology — why is pain an experience of suffering at all? How would you give it pain and suffering without having already made it AGI? We’re still missing the <current-form> -> AGI step.

      it lacks regular eating and restroom breaks

      The necessity of which is emergent from our culture and biology, as conscious social beings. We’re still missing a vital step.

      it struggles to accept loss in everyday situations

      What is “loss” and “everyday situations” if not just a way we choose to see the world, again as conscious beings.

      it lacks the concept of our inevitable death

      How do you give it a “concept” at all?

      these nagging memories and concepts

      The AI in its current form has the “memory” in some form, but perhaps not the “nagging.” What should do the “nagging” and what should be the target of the “nagging?” How do you conceptually separate the “memory” and the “nagging” from the “being” that you’re trying to create? Is it all part of the same being, or does it initialize the being?

      We’re a long way away from AGI, IMO. The exciting thing to me, though, is I don’t think it’s possible to develop AGI without first understanding what makes N(atural)GI. Depending how far away AGI is, we could be on the cusp of some deeply psychologically revealing shit.

      • sp3ctr4l@lemmy.dbzer0.com · 2 points · 13 days ago

        Completely agree with all of this.

        Especially the last part.

        We don’t even understand our brains, our own minds, we still can’t fully agree on what consciousness or sentience… even… are.

        We’re certainly making progress on those fronts… but we are a very, very far distance from the finish line.

        That finish line would be like… we solved Psychology, we solved Neuroscience, we have a Grand Unified Theory of Mind, etc.

    • MagicShel@lemmy.zip · 6 points · 13 days ago

      The major thing AI lacks is continuous parallel “prompting” through a variety of channels including sensory, biofeedback, and introspection / meta-thought about internal state and thinking.

      AI currently transforms a given input into an output. However, it cannot accept new input in the middle of producing an output. It can’t evaluate the quality of its own reasoning except through trial and error.

      If you had 1000 AIs operating in tandem and fed a continuous stream of prompts in the form of pictures, text, meta-inspection, and perhaps a simulation of biomechanical feedback with the right configuration, I think it might be possible to create a system that is a hell of an approximation of sentience. But it would be slow and I’m not sure the result would be any better than a human — you’d introduce a lot of friction to the “thought” process. And I have to assume the energy cost would be pretty enormous.

      In the end it would be a cool experiment to be part of, but I doubt that version would be worth the investment.
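
      A purely hypothetical sketch of that continuous, multi-channel prompting, with no real model attached:

      ```python
      import asyncio

      async def channel(name, queue, period):
          while True:                            # each channel never stops
              await asyncio.sleep(period)
              await queue.put((name, f"{name} update"))

      async def mind(queue):
          while True:
              source, signal = await queue.get() # input arrives mid-"thought"
              print(f"integrating {source}: {signal}")

      async def main():
          q = asyncio.Queue()
          feeds = [channel("vision", q, 0.1),
                   channel("biofeedback", q, 0.5),
                   channel("introspection", q, 1.0)]
          await asyncio.gather(mind(q), *feeds)  # runs until interrupted

      # asyncio.run(main())
      ```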

    • ExFed@programming.dev · 4 points · 13 days ago

      It could also be that it lacks the machinery to feel any emotions at all. You don’t (normally) have to train people to be afraid of bears or heights or loneliness or boredom. You also don’t (normally) have to train people to have empathy or compassion.

      I argue that our obsession with AI is, itself, a misalignment with our environment; it disproportionately tickles psychological reward centers which evolved under unrecognizably different circumstances.

      • Havoc8154@mander.xyz · 3 points · 13 days ago

        I guess you don’t have children.

        You absolutely do have to train them to be afraid of bears, heights, and every fucking thing you can imagine. You absolutely do have to teach them empathy and compassion. There may be some nugget of instinct, but without reinforcement it might as well not exist.

        • ExFed@programming.dev · 1 point · 13 days ago

          Hah, okay, you got me there. From my understanding, though, that’s mostly because kids are still figuring out what’s “normal”, so their fear instinct isn’t nearly as strong. I guess I should’ve stuck to the more instinctive sources of fear…

          Regardless, that’s not really my point. My point is an LLM doesn’t rely on machinery in the same way that a human brain does. That doesn’t make AI “worse” or “better” overall, but it does make it an awful replacement for other humans.

    • yyprum@lemmy.dbzer0.com · 1 point · 12 days ago

      As a random internet user, I want to ask: are we sure humans are that intelligent to begin with? All those steps you give are not needed for intelligence.

      We keep moving the goal post for what intelligence is, and last I saw we have started to divide intelligence into different categories.

      LLMs are just “imitate human responses as closely as possible”, for good and for bad. And now we are trying to fix that to be as right as possible, when the flaw is that we as humans are so often wrong.

  • Great Blue Heron@lemmy.ca · 49 points · 13 days ago

    It’s fun to point at the crappy performance of current technology. But all I can think about is the amount of power and hardware the AI bros are going to burn through trying to improve their results.

    • partofthevoice@lemmy.zip · 21 points · 13 days ago

      Funnier yet will be if they continue to just train the model on that particular kind of test, invalidating its results in the process.

        • PhoenixDog@lemmy.world · 8 points · 12 days ago

          Someone else in the comments said it perfectly. AI is just data regurgitation. It’s like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn’t know anything. I just read a thing and said it out loud.

          • mechoman444@lemmy.world · 8 points · 12 days ago

            No. You’re not just wrong, you’re aggressively uninformed.

            Repeating the same tired “AI is just regurgitating data” line makes it clear you don’t understand what you’re criticizing. Calling large language models “AI” the way you are doing just exposes that you do not know what you are talking about. It is like a creationist smugly saying “orangutang” instead of “orangutan” and thinking they sound informed. You are not demonstrating insight. You are advertising ignorance.

            What you’re describing, reading a paragraph off Wikipedia, is literal retrieval. That is not how modern language models operate. They are not databases with a search bar attached. They are probabilistic systems trained to model patterns, structure, and relationships across massive datasets. When they generate a response, they are not pulling a stored paragraph. They are constructing output token by token based on learned representations.

            If it were just regurgitation, you would constantly see verbatim copies of training data. You do not. What you see instead is synthesis. Concepts are recombined, abstracted, and adapted to context. The system can explain the same idea multiple ways, shift tone, handle novel prompts, and connect ideas that were never explicitly paired in the source material. That is fundamentally different from reading something out loud.

            Your analogy fails because it assumes nothing is being transformed. In reality, transformation is the entire mechanism. Information is compressed into weights and then expanded into new outputs.

            Is it human intelligence? No. Is it perfect? No. But reducing it to “just reading Wikipedia out loud” is not skepticism. It is a basic failure to understand how the technology works.

            If you are going to criticize something, at least learn what it is first.

            • lordbritishbusiness@lemmy.world · 5 points · 12 days ago

              Counterpoint: Why should they learn about it?

              It is a good thing to reduce ignorance, but there is more to learn in the world than there is time to learn or space in the brain. People must specialise.

              You must accept that not everyone will understand everything, and this is okay.

              The nature of a Large Language Model is very specialist knowledge; “data regurgitation” is apt from a distance, especially when most publicly available models are primarily used for search.

              Criticism must be accepted, even from those who do not understand, so long as it’s in good faith. It is, after all, an opportunity to reduce ignorance in someone with the time and interest to learn.

              Don’t rudely lord your intelligence over someone else; it might not end well, and it undermines the delivery of your entire argument.

              • mechoman444@lemmy.world · 3 points · 11 days ago

                The reason he should learn about it is because he’s talking about it as though he’s informed and he is not.

                I don’t have to be an LLM programmer working at OpenAI to have a working knowledge of how these machines function. It’s literally just a Google search away.

                He made an unreasonably ignorant comment and I called him out. He should feel ashamed, and I have absolutely no reason to soften what I’m saying under the guise of being nice.

            • PhoenixDog@lemmy.world · 5 points · 12 days ago

              This might be the most comprehensive comment I’ve ever read of someone telling the world how utterly stupid they are. It’s incredibly impressive how articulately you described your absolute lack of critical thinking.

              It’s almost like intentionally shooting yourself in the nuts, and openly releasing the video of it saying you promote gun safety.

              • mechoman444@lemmy.world · 2 points · 11 days ago

                Calling an LLM a Wikipedia regurgitator is factually and objectively incorrect.

                Is there anything that you can say to refute the facts that I presented in my above comment?

                (I rolled my eyes so hard at your comment that I pulled my back out)

            • hitmyspot@aussie.zone · 1 point · 12 days ago

              You’re discounting the fact that a human reading Wikipedia will attribute intonation and tone to the text to give further context and meaning. I think the analogy is good. It’s not precise, but it gets at the same thing.

              I do think AI has a useful purpose and is here to stay. I don’t think it’s groundbreaking like the AI companies want us to think. The bubble will burst and then we’ll see where the cards lie.

              OpenAI has lost their lead and I expect they will start to struggle with further funding. There are quite a few warning signs. The price of oil is likely to increase power prices generally and cause construction delays and cost rises. Both will hamper their plans. They still don’t have a viable model for profit.

              • mechoman444@lemmy.world · 1 point · 11 days ago

                The analogy is terrible and is not at all, once again, what LLMs do.

                This is an objective fact, and I have provided evidence to support it.

                How are you saying the analogy is good?

                • hitmyspot@aussie.zone · 4 points · 11 days ago

                  An analogy does not need to be precise; it expresses a comparison for easier understanding. It is not what LLMs do, but what you’ve expressed is simplified also. So by your own standard, it is not useful for the discussion.

                  So maybe get your head out of your ass and try to understand what people are trying to express instead of correcting them when they are not incorrect.

                  If precision was of that much importance to you, you would have a different opinion of LLMs.

  • fox2263@lemmy.world · 24 points · 12 days ago

    I can’t see AI actually being intelligent until it no longer needs to be sent a built-up prompt of guides and skills, plus the entire chat history, on every submission.

    It’s no different from Alexa 15 years ago with skills. Just a better protocol and interface and ability to parse the current user prompt.

    In my opinion of course.
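
    For anyone who hasn’t seen it, each chat turn looks roughly like this under the hood (a sketch; llm_complete stands in for whatever completion endpoint is used):

    ```python
    SYSTEM = "You are a helpful assistant. Available skills: ..."  # guides/skills

    history = []

    def chat(user_msg):
        history.append({"role": "user", "content": user_msg})
        # The entire transcript is re-sent on every single call;
        # the model itself holds no state between requests.
        prompt = [{"role": "system", "content": SYSTEM}] + history
        reply = llm_complete(prompt)   # hypothetical completion endpoint
        history.append({"role": "assistant", "content": reply})
        return reply
    ```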

    • NotMyOldRedditName@lemmy.world · 6 points · 12 days ago

      Ya, I agree. The whole infrastructure of how these work is flawed for a true AI/AGI.

      It might be able to do a lot of cool things, but it’s fundamentally flawed at its core.

      Someone will need to figure out something completely different for a true AI.

      • NotMyOldRedditName@lemmy.world · 1 point · 12 days ago

        Oh also, I remember Elon once talked about how upcoming cars would get bored when they weren’t doing anything with all that compute while parked, so they could use that compute and pay people for it.

        Paying for the compute isn’t a terrible idea in the future, but becoming bored? LOL. Fucking crazy talk.

        Like even if it was a true AI that could be bored. You’re now going to enslave it to do what you want on its free time?

        • lordbritishbusiness@lemmy.world · 1 point · 12 days ago

          Yeah, if it’s got the capacity to be bored it’s not going to stick around waiting for you. Pets act out when bored, as will AI, better to let the ghost in the machine go have fun in an arcade or something.

          Current models can pretend to be bored when directed to, but they’re only facsimiles of thought at the moment, and the current approach probably won’t change that.

    • PhoenixDog@lemmy.world · 4 points · 12 days ago

      Right? I have a Google Home Mini in our kitchen and if we ask it a question it just pulls a source from a website and tells us. That’s it. Nothing intelligent about it.

      AI now is no different. It’s just pulling more complex wording from more (albeit illegally obtained) sources to give a better (albeit sometimes incorrect) description of the question asked.

      AI is just as stupid as Alexa is/was 15 years ago. It just has more information to pull from and still fucks it up.

  • ExLisper · 23 points · 13 days ago

    Can’t wait for this to be the new captcha.

  • Bubbaonthebeach@lemmy.ca · 18 points · 12 days ago

    I tend to be anti-AI because it doesn’t seem to me to be anything other than a super fast regurgitator of data. If a database can be searched for an answer, AI can do that faster than a human. However, it doesn’t seem to be able to take some portion of that database, understand it, and then use that information to solve a novel problem.

    • cmhe@lemmy.world · 17 points · 12 days ago

      Well… It cannot even search databases without errors.

      LLMs just produce plausible replies in natural language very quickly, and this is useful in certain situations. Sometimes it helps humans get started on a task, but as it is now, it cannot replace them, as much as the capital class wants it to and sinks our money into it.

      • fruitycoder@sh.itjust.works · 3 points · 12 days ago

        The better setups generate “semantic embeddings” that try to map how stored data relate to each other (by mapping how each item relates to the rest within the model’s own weights and biases). That, plus knowledge-graph lookups, in which the links between different articles of data are evaluated in the same way.

        The very expensive LLM portion really does just give rough approximations of the retrieved information in natural language in that setup.
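
        A minimal sketch of that embedding lookup, with embed standing in for a real embedding model:

        ```python
        import math

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)

        def retrieve(query, documents, embed, k=3):
            q = embed(query)
            ranked = sorted(documents, key=lambda d: cosine(q, embed(d)),
                            reverse=True)
            return ranked[:k]   # the LLM then phrases these hits as an answer
        ```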

      • jj4211@lemmy.world · 2 points · 12 days ago

        Yes, the key thing is that it might have extracted useful info from otherwise confusing data, it might have mixed up info from the data incorrectly, or it might have just made it up.

        So it can be useful if you can then validate the info it provides by more traditional means, but it’s dubious as a first pass, and sometimes surprisingly bad in a scenario you thought it would work well at.

  • UnrepentantAlgebra@lemmy.world · 17 points · 13 days ago

    If human scores were included, they would be at 100%, at the cost of approximately $250

    Wait, why did it cost real humans $250 to pass the test?

    • KairuByte@lemmy.dbzer0.com · 20 points · 13 days ago

      I assume it’s an hourly wage or something. Just because humans can work for free if they choose, doesn’t mean they have no cost associated with them. Just like a company could choose to give away unlimited tokens, those tokens still have a standard cost.

    • aesopjah@sh.itjust.works · 12 points · 13 days ago

      It’s also an odd metric since only 20-60% of the humans completed it. Very “60% of the time, it works every time” energy.

      Ideally they’d run the bots multiple times through (with no context or training of previous run), but I guess that is cost prohibitive?

      • monotremata@lemmy.ca · 11 points · 13 days ago

        Yeah, this is what I was going to call out. Calling it “100% solvable by humans” and saying “if human scores were included, they would be at 100%” when 20-60% of humans solved each task seems kinda misleading. The AI scores are so low that I don’t think this kind of hyperbole is necessary; I assume there are some humans that scored 100%, but I would find it a lot more useful if they said something like “the worst-performing human in our sample was able to solve 45% of the tasks” or whatever. Given that the AIs are still scoring below 1%, that’s still pretty dark.

      • Aceticon@lemmy.dbzer0.com · 2 points · 13 days ago

        If there had been a “Buy 10, Get 1 free” they could’ve used 11 humans instead of 10 for the same $250.

    • brianpeiris@lemmy.ca (OP) · 5 points · 13 days ago · edited

      This is my rough upper-bound estimate based on the Technical Report. Human participants were paid to complete and evaluate the tasks at an average fixed fee of $128, plus $5 per solved task. So if a panel of humans were tasked with solving the 25 tasks in the public test set, it would be an average of $250 per person. Although, looking at it again, the costs listed for the LLMs are per task, so it would actually be more like $10 per human per task. In any case, it’s one or two orders of magnitude less than the LLMs.

      Participants received a fixed participation fee of $115–$140 for completing the session, along with a $5 performance-based incentive for each environment successfully solved

      https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf
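
      The arithmetic behind the estimate, using the report’s numbers:

      ```python
      fixed_fee = 128            # average of the $115-$140 participation fee
      bonus = 5                  # per environment solved
      tasks = 25                 # public test set

      per_person = fixed_fee + bonus * tasks  # 253, i.e. roughly $250
      per_task = per_person / tasks           # ~10.1, i.e. roughly $10/task
      print(per_person, round(per_task, 2))
      ```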

    • ExLisper · 4 points · 13 days ago

      Because I ain’t doing this shit for free.

  • arcine@jlai.lu · 12 points · 13 days ago

    Try spelling things phonetically (example: “faux net tick alley”); that’s one of my benchmarks, and AI fails it almost every time.

    If the input is at all long, or purposefully includes a lot of words about a specific theme unrelated to the coded message, it’s impossible.

    • percent@infosec.pub · 3 points · 12 days ago

      Oh that’s an interesting challenge.

      I hear some LLMs now have solutions for the classic “how many Rs in ‘strawberry’” problem (related to the tokenization process), but I have no idea how they might solve the phonetic thing. I’m sure some smart people will eventually find a way, though.
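
      You can see the underlying issue directly with a BPE tokenizer (a sketch; needs `pip install tiktoken`, and the exact splits vary by vocabulary):

      ```python
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE vocabulary

      for text in ["phonetically", "faux net tick alley", "strawberry"]:
          pieces = [enc.decode([t]) for t in enc.encode(text)]
          print(f"{text!r} -> {pieces}")
      # The model sees subword chunks, not letters or sounds, so a phonetic
      # respelling shares almost nothing with the original word.
      ```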

    • bss03@infosec.pub · 2 points · 12 days ago

      Wait, I thought phonetically (example: papa hotel oscar november echo tango india charlie alfa lima lima yankee) meant using a phonetic alphabet, not using word(s) with the same Soundex encoding.

        • bss03@infosec.pub · 2 points · 12 days ago

          Yeah, there was some phonics in my primary school education, and I continue to approach new words in that way sometimes. But, they said Phonetically.

          • gozz@lemmy.world · 3 points · 12 days ago

            Phonetics is the study of speech sounds. The phonetic alphabet is called that because each word in it was chosen to start with the corresponding phoneme, and because the set of words is, between them, phonetically unambiguous. Phonics is a way of teaching reading and writing based on the phonetics of words and how they relate to their written form.

  • sunbeam60@feddit.uk · 10 points · 13 days ago

    I can thoroughly recommend “A Brief History of Intelligence” (by Max Bennett), which explains how intelligence has advanced in steps through evolution, what those steps were, etc.

    Spatial intelligence requires spatial understanding and it’s not something that can be solved through a large language model, IMHO.

    I’m excited to see how these are solved. And I’m terrified to see how these will be solved.

      • brianpeiris@lemmy.ca (OP) · 10 points · 13 days ago

        It’s true that frontier models got better at the previous challenges, but it’s worth noting that they’re still not quite at human level even with those simpler tasks.

        Also, each generation of the challenge tries to close loopholes that newer models would exploit, like brute-forcing the training with tons of synthesized tasks and solutions, over-fitting to these particular kinds of tasks, and issues with the similarities between the tasks in the challenge.

        A common strategy in past challenges was to generate thousands of similar tasks, and you can imagine the big AI companies were able to do that at massive scale for their frontier models.

        • brianpeiris@lemmy.ca (OP) · 13 points · 13 days ago

          The goal of the ARC organization is to continually measure progress towards AGI, not come up with some predictive threshold for when AGI is achieved.

          As long as they can continue to measure a gap between “easy for humans” and “hard for AI”, they will continue releasing new iterations of this ARC-AGI challenge series. Currently they do that about once a year.

          More detail about the mission here: https://arcprize.org/arc-agi

  • tatterdemalion@programming.dev · 9 points · 13 days ago · edited

    LLMs might suck at this game but I’m pretty sure Deepmind’s deep reinforcement learning AI could solve these easily.

    EDIT: I know you guys hate AI around here, but you need to at least be aware of what the technology is capable of.

    From 11 years ago:

    https://youtu.be/V1eYniJ0Rnk
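
    For reference, the core update behind those Atari results is Q-learning; DQN replaces the table below with a neural network (a sketch of the idea, not DeepMind’s code):

    ```python
    import random
    from collections import defaultdict

    Q = defaultdict(float)             # Q[(state, action)] -> expected return
    ALPHA, GAMMA, EPS = 0.1, 0.99, 0.1

    def act(state, actions):
        if random.random() < EPS:
            return random.choice(actions)                  # explore
        return max(actions, key=lambda a: Q[(state, a)])   # exploit

    def update(state, action, reward, next_state, actions):
        target = reward + GAMMA * max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])

    # Note: this requires millions of frames of trial and error per game,
    # with a fixed ruleset and a known reward signal.
    ```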

      • tatterdemalion@programming.dev · 6 points · 13 days ago

        Wdym? It’s existed for at least a decade. Plenty of papers about it. It mastered Atari and Mario. It became the best Go player.

        • bss03@infosec.pub · 3 points · 13 days ago

          Yeah, for a fixed ruleset that can be provided up front the Alpha-Zero approach seems to work great.

          These tasks strike me as a bit different. I’m sure the ruleset is fixed somewhere, but it’s not disclosed to the participants. In the task I walked myself through, there was a new wrinkle in each part – a new interactable, a (more) hidden goal, or an information limit. And, of course, part of the task is “discovering” all that from the bitmap frame(s) provided.

          I’m unconvinced of the hype around “AI”, but this does seem like a legitimate research target that might stymie the Alpha{Go,Zero,Fold} series at least a bit.

            • 33550336@lemmy.world · 1 point · 12 days ago

              Yes – this game has a fixed, relatively small set of rules, so the RL could learn to play by playing millions of games at random while following the rules of the game. Contrast this with the daunting (infinite) number of situations one may encounter in daily life.

      • Iconoclast@feddit.uk · 4 points · 13 days ago · edited

        If only…

        How Alpha Fold Solved the Protein Folding Problem and Changed Science Forever

        Edit:

        In 2020, Demis Hassabis and John Jumper presented an AI model called AlphaFold2. With its help, they have been able to predict the structure of virtually all the 200 million proteins that researchers have identified. Since their breakthrough, AlphaFold2 has been used by more than two million people from 190 countries. Among a myriad of scientific applications, researchers can now better understand antibiotic resistance and create images of enzymes that can decompose plastic.

        Source

    • yogurt@lemmy.world · 3 points · 13 days ago

      No, because it’s designed around all the things AI can’t do. Breakout is a quick, repetitive loop of pass/fail linear progression. AI melts down when it has to backtrack, keep track of multiple pieces of context, and figure out how to do something but not do it yet.

    • bss03@infosec.pub · 2 points · 13 days ago

      The founder of ARC worked at Google until 2024 and has written 2.5+ books on deep learning. So, I expect some of these benchmarks are based on limitations seen in DeepMind’s systems.

      That said, it would be interesting to see how well Deepmind does at these tasks. My understanding is that the private tasks would still be dynamic enough to require “on the job training” so an Alpha-Go / Alpha-Zero / Alpha-Fold approach is unlikely to do well on ARC-AGI-3.

      Still, I think commentary around models (including, but not limited to something from Deepmind) attempting these tasks would be much more interesting than most of the discourse around generative AI, whether text, image, video, or code generation.

    • kshade@lemmy.world · 1 point · 12 days ago

      I guess the idea is that yes, machine learning algorithms could be used to solve these, but that’s essentially brute forcing. You can make a simple algorithm learn how to complete Super Mario Bros or how to make a virtual robot walk, it just takes millions of iterations. The promised actual artificial intelligence wouldn’t need that.