• cm0002@lemmy.worldOP · 18 points · 22 days ago

      That bit about how it turns out they aren’t actually just predicting the next word is crazy, and it kinda blows the whole “it’s just a fancy text auto-complete” argument out of the water, IMO.

      • Carrolade@lemmy.world · 13 points · 22 days ago

        Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.
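
        To make it concrete, here’s a minimal sketch of both modes with toy n-gram counts. The corpus is made up and a real LLM is vastly more sophisticated, but in both cases the missing word is predicted purely from the other parts of the passage:

        ```python
        # Toy illustration (hypothetical corpus, not a real language model):
        # both "next word" and "word in the middle" prediction condition
        # only on the surrounding text.
        from collections import Counter

        corpus = "the cat sat on the mat the dog sat on the rug".split()

        # Next-word prediction: most frequent continuation of the previous word.
        bigrams = Counter(zip(corpus, corpus[1:]))

        def predict_next(prev):
            options = {b: n for (a, b), n in bigrams.items() if a == prev}
            return max(options, key=options.get)

        # Middle-word prediction: most frequent filler between two neighbours.
        trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))

        def predict_middle(prev, nxt):
            options = {m: n for (a, m, b), n in trigrams.items()
                       if a == prev and b == nxt}
            return max(options, key=options.get)

        print(predict_next("the"))           # -> "cat"
        print(predict_middle("the", "sat"))  # -> "cat"
        ```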

        Compare that to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used, except to make sure I’m following the rules of grammar.

        • Womble@lemmy.world · 4 points · 22 days ago

          > Compare that to a human, who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used, except to make sure I’m following the rules of grammar.

          Interesting that…

          > Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

          • Carrolade@lemmy.world · 4 points · 22 days ago

            Yeah, I caught that too. I’d be curious to know more about what specifically they meant by that.

            Being able to link all of the words that have a similar meaning (say: nearby, close, adjacent, proximal, side-by-side, etc.) and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not: one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. That would not be abstract; it would be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
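
            Here’s a rough sketch of that adjacency approach. The toy sentences and window size are made up for illustration, and real models learn far richer statistics, but it shows similarity emerging from co-occurrence alone:

            ```python
            # Toy illustration of pure word-adjacency (co-occurrence) similarity.
            # The sentences are hypothetical; no concept of space is involved.
            from collections import Counter
            import math

            sentences = [
                "the shop is nearby the station",
                "the shop is close to the station",
                "the park is adjacent to the river",
                "the park is nearby the river",
            ]

            def context_vector(word, window=2):
                # count which words appear within `window` positions of `word`
                counts = Counter()
                for s in sentences:
                    toks = s.split()
                    for i, t in enumerate(toks):
                        if t != word:
                            continue
                        for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                            if j != i:
                                counts[toks[j]] += 1
                return counts

            def cosine(u, v):
                dot = sum(u[k] * v[k] for k in u)  # Counter returns 0 for missing keys
                norm = lambda w: math.sqrt(sum(c * c for c in w.values()))
                return dot / (norm(u) * norm(v))

            # "nearby" and "close" never co-occur, but they share contexts, so
            # their vectors come out similar -- correlation, not comprehension.
            print(cosine(context_vector("nearby"), context_vector("close")))
            ```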

            Ultimately, a person learns meaning and then applies language to it. As a baby I see my mother and know she is something that exists; then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?

            • Womble@lemmy.world · 4 points · 22 days ago

              I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it makes sense that they first understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves before ever having heard of them.

              • Carrolade@lemmy.world · 2 points · 22 days ago

                Exactly. It’s sort of like a massively scaled-up version of the blind men and the elephant.

            • aesthelete@lemmy.world · 2 points · 21 days ago

              > Can an LLM do something similar despite having never seen anything that isn’t a word or number?

              No.

          • MTK@lemmy.world · 3 points · 22 days ago

            Yeah, but I think this is still the same thing, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limits and they start producing mixed-language responses).

            But it still has limitations because of the structure of language. Humans actually have this as well: thinking in an internal monologue can limit abstract thought.

            • Womble@lemmy.world · 2 points · 22 days ago

              Probably, given that LLMs only exist in the domain of language. Still, it’s interesting that they seem to have a “conceptual” system that is shared between languages.

          • TimewornTraveler@lemm.ee · 1 point · 20 days ago

            Wow, an AI researcher overhyping his own product. He’s just waxing poetic.

            We don’t even have a good sense of what thought IS. Please tell Claude to call the philosophers, because apparently he’s figured out consciousness.

      • Shanmugha@lemmy.world · 5 points · 22 days ago

        It doesn’t. Who the hell cares if someone allowed it to break “predict the whole text” into “predict part by part”, and then “with rhyme, we start at the end”? That sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task of writing an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.
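
        Something like this minimal sketch, say (the word list and the crude two-letter rhyme test are hypothetical stand-ins, not anyone’s actual implementation): pick the rhyming end words first, then fill each line in backwards from the end:

        ```python
        # Toy "start at the end" poetry sketch: choose rhyming final words
        # first, then build each line toward its fixed ending.
        import random

        VOCAB = ["bright", "night", "light", "sea", "free", "tree"]
        OPENERS = ["I saw the", "beneath the", "we chased the", "under one"]

        def rhymes(a, b):
            # crude stand-in for a real rhyme dictionary: shared 2-letter ending
            return a != b and a[-2:] == b[-2:]

        def rhyming_pair():
            pairs = [(a, b) for a in VOCAB for b in VOCAB if rhymes(a, b)]
            return random.choice(pairs)

        def line_ending_with(last_word):
            # the end is fixed first; the rest of the line is filled in before it
            return f"{random.choice(OPENERS)} {last_word}"

        end_a, end_b = rhyming_pair()
        print(line_ending_with(end_a))
        print(line_ending_with(end_b))
        ```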

          • Shanmugha@lemmy.world · 2 points · 21 days ago

            Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?

            • aesthelete@lemmy.world · 1 point · 21 days ago

              This dumbass is convinced that humans are chatbots likely because chatbots are his only friends.

              • Shanmugha@lemmy.world · 2 points · 21 days ago

                Sounds scary. I read a story the other day about a dude who actually set up a Discord server full of chatbots, and that was his main place of “communicating” and “socializing”.

                • aesthelete@lemmy.world · 2 points · edited · 21 days ago

                  This anecdote has the makings of a “men will literally x instead of going to therapy” joke.

                  On a more serious note, though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing people and devaluing humanity as a whole.

                  But that’s unlikely to happen. It’s the same type of people who thought the mind was a machine during the first industrial revolution, and then a CPU in the third… now they think it’s an LLM.

                  LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into places where they are obviously counterproductive.

                  • LarmyOfLone@lemm.ee · 1 point · 19 days ago

                    > they do it while dehumanizing people and devaluing humanity

                    You’re making wild assumptions about people who disagree with your opinions. How ironic that you accuse “them” of dehumanizing people.

                    But I do agree that this gets to the core of the matter: a piece of software being able to produce intelligent text while clearly not having general intelligence is quite the shock. Same with creativity: even though the entertainment industry has long produced equally empty content slop using human labor, it’s a painful blow to our identity as humans. I suspect this is a reaction to disillusionment and the intellectual pain that comes with it.

                    My opinion on LLMs is rather nuanced. The worst possible outcome I can foresee is the anti-AI crowd helping the oligarchs establish IP ownership of all LLM models and monopolize the tools, so that only they have access to the “means of generation” while the rest of us have to pay for the privilege of using it.

      • LarmyOfLone@lemm.ee · 2 points · 21 days ago

        I mean, it implies that they CAN start with the conclusion or the “thought” and then generate the text to verbalize it.

        It’s shocking what lengths humans will go to to explain how their wetware neural network is fundamentally different and how it’s impossible for LLMs to think or reason in any way. Honestly, LLMs teach us more about human intelligence (or the lack thereof) than about machine intelligence. Like Obi-Wan said, “The ability to speak does not make one intelligent” haha.