

The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it’s saying, it’s working - even if the content of what it says is factually inaccurate.


No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.
I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.
The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.
The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.


The vast majority of people aren’t educated on the correct terminology here. They don’t know the difference between AI, LLM, AGI, ASI, etc. That makes it near impossible to have real discussions about AI - everyone’s constantly talking past each other and using the same words to mean completely different things.
My original comment wasn’t even challenging their claim that “AI doesn’t work.” I was just pointing out that AI and LLM aren’t synonymous. It’s my one-man fight against sloppy, imprecise use of language. I’d rather engage with what people are actually saying, not with what I assume they’re saying.
When it comes to LLMs, it’s not just a “word generator.” It’s a system that generates natural-sounding language based on statistical probabilities and patterns. In other words: it talks. That’s all. Saying an LLM “doesn’t work” because it spits out inaccurate info is like saying a chess bot doesn’t work because it can’t play poker. No - that’s user error. They’re trying to use the tool for something it was never designed to do.


Even if someone’s inaccurately using “AI” as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.
One spitting out false information isn’t a sign they’re not working. That’s not what LLMs are designed for. They’re chatbots - not generally intelligent systems. They don’t think - they talk.


Being a foreigner doesn’t automatically mean something is bad - just unfamiliar.


AI is a broad category of systems, not any one thing. “AI doesn’t work” is like saying “plants taste bad”


Is the stock market crash in the room with us?


It’ll give you short responses if you ask it to.


It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.
It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.
So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.
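To make the "it just talks" point concrete, here's a deliberately tiny sketch: a bigram model that picks each next word purely from how often it followed the previous one in some training text. Real LLMs use neural networks over subword tokens and vastly more data, but the underlying principle - predict the next token from statistical patterns, with no notion of truth - is the same. The corpus and all names here are made up for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" - the only thing the model will ever know.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate text: locally coherent-looking, but the model has no idea
# whether anything it "says" is true - it's just following frequencies.
word = "the"
generated = [word]
for _ in range(5):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))
```

The generated string always looks like plausible English fragments because every transition was observed in training - which is exactly why coherence alone is no evidence of knowledge.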


Is cruise control useless because it doesn’t drive you to the grocery store? No. It’s not supposed to. It’s designed to maintain a steady speed - not to steer.
Large Language Models, as the name suggests, are designed to generate natural-sounding language - not to reason. They’re not useless - we’re just using them off-label and then complaining when they fail at something they were never built to do.


The Soviets were allied with Nazi Germany until Germany turned on them.


Those terms are not synonymous. LLMs are very much an AI system but AI means much more than just LLMs.


I don’t think AI means what you think it does. What you’re thinking is probably more akin to AGI.
Logic Theorist is widely considered the first ever AI system. It was written by Allen Newell, Herbert Simon, and Cliff Shaw in 1956.


This is one of the things LLMs are actually pretty good at. Just don’t blindly trust their output.


they imply that anti-depressants aren’t useful
The title says they’re just as effective as exercise. The only way to interpret this as saying medication isn’t useful is if you think exercise isn’t useful either.


General intelligence refers to human-level intelligence that isn’t limited to one task like playing chess or generating language. General intelligence exists - just not an artificial one.


You’re saying that it’s good at one thing and bad at others.
But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.
People give LLMs endless shit for getting things wrong, but they should actually get credit for how often they get it right too. That’s a pure side effect of their training - not something they were ever designed to do.
It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.


It’s a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn’t “lie” and it doesn’t have the capability to explain itself. It just talks.
That speech being coherent is by design; the accuracy of the content is not.
This isn’t the model failing. It’s just being used for something it was never intended for.


What do you even do on Instagram for 16 hours?
I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.
These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.
I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.