• Technoworcester@lemm.ee · 90 points · 8 days ago

    'is weirder than you thought'

    I am about as likely to click a link with that line as I am one that says ‘this one weird trick’ or ‘side hustle’.

    I would really like it if headlines treated us like adults and got rid of clickbaity lines.

    • BackgrndNoize@lemmy.world · 25 points · 8 days ago

      But then you wouldn’t need to click on their ad-infested shite website, where 1-2 paragraphs’ worth of actual information is stretched into a giant essay so they can show you more ads the longer you scroll.

  • dkc@lemmy.world · 31 points · 8 days ago

    The research paper looks well written, but I couldn’t find any information on whether it is going to be peer reviewed and published in a reputable journal. I have little faith in private businesses that profit from AI providing an unbiased view of how AI works. The first question I’d like answered is whether Anthropic’s marketing department reviewed the paper, and whether they offered any corrections or feedback. We’ve all heard the stories about the tobacco industry paying for papers that touted the benefits of smoking and refuted health concerns.

    • StructuredPair@lemmy.world · 8 points · 8 days ago

      A lot of AI research isn’t published in journals; it’s either posted to a corporate website or put up on arXiv. There are some AI journals, but the AI community doesn’t particularly value them (and threw a bit of a fit when they came out). In my opinion this article is mostly marketing and doesn’t show anything that should surprise anyone familiar with how neural networks generally work.

    • cm0002@lemmy.world (OP) · 4 points · 9 days ago

      I think this comm is better suited for news articles talking about it, though I did post that link to !ai_@lemmy.world, which I think is a more suitable comm for those who want to go more in-depth on it.

  • cholesterol@lemmy.world · 24 points · 8 days ago

    you can’t trust its explanations as to what it has just done.

    I might have just made a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.

    Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.

    • cley_faye@lemmy.world · 3 points · 8 days ago

      Anyone who has used them for even a limited amount of time will tell you that the thing can give you a correct, detailed explanation of how to do something, and then provide a broken result. And vice versa. Digging into it by asking more questions has zero chance of being useful.

    • cm0002@lemmy.world (OP) · 18 points · 9 days ago

      That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO

      • Carrolade@lemmy.world · 13 points · 9 days ago

        Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.

        Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

        • Womble@lemmy.world · 4 points · 9 days ago

          Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

          Interesting that…

          Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

          • Carrolade@lemmy.world · 4 points · 9 days ago

            Yeah I caught that too, I’d be curious to know more about what specifically they meant by that.

            Being able to link all of the words that have a similar meaning (say, nearby, close, adjacent, proximal, side-by-side, etc.) and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not; one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract; it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.

            Ultimately, a person learns meaning and then applies language to it. When I’m a baby I see my mother, and know my mother is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?
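
            Just to make the word-adjacency idea above concrete, here is a minimal Python sketch using invented co-occurrence counts (no real corpus or model involved): it rates “nearby” and “adjacent” as similar purely because they appear next to the same words.

            ```python
            import math

            # Toy co-occurrence counts, invented for the example.
            cooccurrence = {
                "nearby":   {"house": 8, "store": 6, "walk": 5, "galaxy": 0},
                "adjacent": {"house": 7, "store": 5, "walk": 4, "galaxy": 1},
                "distant":  {"house": 1, "store": 1, "walk": 1, "galaxy": 9},
            }

            def cosine(a, b):
                """Cosine similarity between two sparse count vectors."""
                keys = set(a) | set(b)
                dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
                norm_a = math.sqrt(sum(v * v for v in a.values()))
                norm_b = math.sqrt(sum(v * v for v in b.values()))
                return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

            # "nearby" and "adjacent" come out similar with no notion of what spatial
            # distance actually is -- just a sum of correlations.
            print(cosine(cooccurrence["nearby"], cooccurrence["adjacent"]))  # high
            print(cosine(cooccurrence["nearby"], cooccurrence["distant"]))   # low
            ```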

            • Womble@lemmy.world · 4 points · 9 days ago

              I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it would make sense that they start to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves before having heard about them.

              • Carrolade@lemmy.world · 2 points · 9 days ago

                Exactly. It’s sort of like a massively scaled-up version of the blind men and the elephant.

            • aesthelete@lemmy.world · 2 points · 8 days ago

              Can an LLM do something similar despite having never seen anything that isn’t a word or number?

              No.

          • MTK@lemmy.world · 3 points · 9 days ago

            Yeah, but I think this is still the same, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limit and they start producing mixed-language responses).

            But it still has limitations because of the structure of language. This is actually something humans have as well: the limiting of abstract thought by internal-monologue thinking.

            • Womble@lemmy.world · 2 points · 9 days ago

              Probably, given that LLMs only exist in the domain of language. Still, it’s interesting that they seem to have a “conceptual” system that is shared between languages.

          • TimewornTraveler@lemm.ee · 1 point · 7 days ago

            wow, an AI researcher overhyping his own product. he’s just waxing poetic.

            we don’t even have a good sense of what thought IS. please tell Claude to call the philosophers, because apparently he’s figured out consciousness.

      • Shanmugha@lemmy.world · 5 points · 9 days ago

        It doesn’t. Who the hell cares if someone allowed it to break “predict the whole text” into “predict part by part, and with rhyme, start at the end”? Sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task of writing an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
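
        For what it’s worth, a naive “pick the rhyme first, then fill in the rest of the line” generator really is only a few lines. This is just a sketch with made-up word lists, not anything from the paper:

        ```python
        import random

        # Tiny rhyme groups and line openers, invented purely for illustration.
        RHYMES = {"-ight": ["night", "light", "sight", "flight"],
                  "-ee": ["sea", "free", "tree", "decree"]}
        OPENERS = ["we wandered through the", "I dreamed about the", "she sang beneath the"]

        def couplet(rhyme_key="-ight"):
            """Choose the rhyming end words first, then fill in the rest of each line."""
            end_a, end_b = random.sample(RHYMES[rhyme_key], 2)
            return (f"{random.choice(OPENERS)} {end_a}\n"
                    f"{random.choice(OPENERS)} {end_b}")

        print(couplet())
        ```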

          • Shanmugha@lemmy.world · 2 points · 8 days ago

            Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?

            • aesthelete@lemmy.world · 1 point · 8 days ago

              This dumbass is convinced that humans are chatbots likely because chatbots are his only friends.

              • Shanmugha@lemmy.world · 2 points · 8 days ago

                Sounds scary. I read a story the other day about a dude who actually set himself up a Discord server full of chatbots, and that was his main place of “communicating” and “socializing”.

                • aesthelete@lemmy.world · 2 points · 8 days ago (edited)

                  This anecdote has the makings of a “men will literally x instead of going to therapy” joke.

                  On a more serious note, though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing people and devaluing humanity as a whole.

                  But that’s unlikely to happen. It’s the same type of people who thought the mind was a machine in the first industrial revolution, and then a CPU in the third… now they think it’s an LLM.

                  LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into places where they are obviously counterproductive.

      • LarmyOfLone@lemm.ee · 2 points · 9 days ago

        I mean it implies that they CAN start with the conclusion or the “thought” and then generate the text to verbalize that.

        It’s shocking what lengths humans will go to in order to explain how their wetware neural network is fundamentally different and how it’s impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than machine intelligence. Like Obi-Wan said, “The ability to speak does not make one intelligent”, haha.

  • perestroika@lemm.ee · 8 points · 8 days ago (edited)

    Wow, interesting. :)

    Not unexpectedly, the LLM failed to explain its own thought process correctly.

    • shneancy@lemmy.world · 4 points · 7 days ago

      tbf, how do you know what to say and when? or what 2+2 is?

      you learnt it? well so did AI

      i’m not an AI nut or anything, but we can barely comprehend our own internal processes, it’d be concerning if a thing humanity created was better at it than us lol

      • El Barto@lemmy.world · 1 point · 7 days ago

        You’re comparing two different things.

        Of course I can reflect on how I came up with a math result.

        “Wait, how did you come up with 4 when I asked you 2+2?”

        You can confidently say: “well, my teacher said it once and I’m just parroting it.” Or “I pictured two fingers in my mind, then pictured two more fingers and then I counted them.” Or “I actually thought that I’d say some random number, came up with 4 because it’s my favorite digit, said it and it was pure coincidence that it was correct!”

        Whereas it doesn’t seem like Claude can do this.

        Of course, you could ask me “what’s the physical/chemical process your neurons follow for you to form those four fingers you picture in your mind?” And I would tell you I don’t know. But again, that’s a different thing.

        • shneancy@lemmy.world · 2 points · 7 days ago

          yeah i was referring more to the chemical reactions. the 2+2 example is not the best one, but language itself is a great case study. once you get fluent enough at any language everything just flows, you have a thought and then you compose words to describe it, and the reverse is true, you hear something and your brain just understands. How do we do any of that? no idea

          • El Barto@lemmy.world · 2 points · 6 days ago

            Understood. And yeah, language is definitely an interesting topic. “Why do you say ‘So be it’ instead of ‘So is it’?” Most people will say “I don’t know… all I know is that it sounds correct.” Someone will say “it’s because it’s a preterite preposition past imperfect incantation tense used with a composition participle around-the-clock flush adverb, so clearly you must use the subjunctive in this case.” But that’s only after years of studying it.

  • I Cast Fist@programming.dev · 7 points · 7 days ago

    Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.
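
    To make the addition description above concrete, here is a rough plain-Python rendering of the two parallel paths (rough magnitude plus exact last digit). It is my own sketch of the idea, not Anthropic’s actual mechanism:

    ```python
    def add_like_described(a: int, b: int) -> int:
        """Combine a rough-magnitude estimate with an exact last-digit check,
        loosely mimicking the two paths the article describes."""
        rough = round(a, -1) + round(b, -1)   # e.g. 36 + 59 -> "40 + 60" -> roughly 100
        last_digit = (a % 10 + b % 10) % 10   # 6 + 9 -> the answer must end in 5
        # Walk outwards from the estimate until the last digit matches.
        for offset in range(10):
            for candidate in (rough - offset, rough + offset):
                if candidate % 10 == last_digit:
                    return candidate
        return rough

    print(add_like_described(36, 59))  # 95
    ```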

  • BrianTheeBiscuiteer@lemmy.world · 6 points · 9 days ago (edited)

    The other day I asked an LLM to create a partial number chart to help my son learn which numbers are next to each other. If I gave it very detailed instructions, it failed miserably every time. And sometimes, even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.

    I’d be very interested to learn its “thought process” in each of those scenarios.
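
    A guess at the kind of “partial number chart” described above, generated deterministically in Python; the grid layout and the blanked cells are assumptions about what the chart was meant to look like:

    ```python
    import random

    def partial_chart(rows=5, cols=10, blanks=15, seed=0):
        """Print a 1..rows*cols grid with some cells blanked out,
        so the missing neighbours can be filled in by hand."""
        random.seed(seed)
        numbers = list(range(1, rows * cols + 1))
        hidden = set(random.sample(numbers, blanks))
        for r in range(rows):
            row = numbers[r * cols:(r + 1) * cols]
            print(" ".join("__" if n in hidden else f"{n:2d}" for n in row))

    partial_chart()
    ```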

  • Pennomi@lemmy.world · 2 points · 9 days ago

    This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

    • LarmyOfLone@lemm.ee · 2 points · 9 days ago

      Better yet, teach the AI to write code that replaces specific optimized AI networks. Then automatically profile, optimize, and unit test!