“AI” is not just LLMs or diffusion models, and that’s what I think OPs is about, like, do you also hate Stockfish? Or enemies in a videogame?
You’re correct in a technical sense but incorrect in a social sense. In 2025, “AI” in the common vernacular means LLMs. You can huff and puff about it, and about how there are plenty of non-LLM AIs out there. But you might as well complain that people mean silicon-based Turing-complete machines when they refer to a “computer,” even though technically a computer can mean many other things. You might as well be complaining about how a computer could refer to someone that does calculations by hand for a living. Or you could refer to something like Babbage’s difference engine as a computer. There are many things that can technically fall under the category of “computer.” But you know damn well what people are saying when they describe a computer. And hell, in common vernacular, a smart phone isn’t even a “computer,” even though it literally is just a computer. Words have both technical and vernacular meanings.
In 2025, in the language real people speak in the real world, “AI” is a synonym for “LLM.”
It’s a failure of our education systems that people don’t know what a computer is, something they interact with every day.
While the Sapir-Whorf hypothesis might be bunk, I’m convinced that if you go up one level in language structure there is a version of it that is true. That is, treating words as if they don’t need a consistent definition melts your brain. For the same reason that explaining a problem to someone else helps you solve it, doing the opposite and untethering your thoughts from self-consistent explanations stops you from explaining them even to yourself, and therefore harms your ability to think.
I wonder if this plays some part in how ChatGPT use apparently makes people dumber: not only because they become accustomed to not having to think, but because they become conditioned to accept text that is essentially void of consistent meaning.
That’s a great point and you are right, most people don’t know or care about the technical differences.
I hate Stockfish because it keeps beating me >:(
Also, some things are called AI that aren’t. People freak out as soon as the term is mentioned without checking whether it’s actually some sort of model or just a basic algorithm with a buzzword tossed on.
Exactly
“AI” in videogames is basically never powered by large models like LLMs or Stable Diffusion or others. The fact you compare them only demonstrates how fucking little you actually know about this topic you are BLINDLY defending.
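For what it’s worth, most enemy “AI” is just a hand-written state machine or behavior tree, no trained model anywhere. Here’s a minimal sketch of what that usually looks like (all names are hypothetical, not from any actual game engine):

    # Typical videogame "AI": a hand-coded finite state machine.
    # Purely illustrative; no learned model involved.
    import math

    class EnemyAI:
        def __init__(self, sight_range=10.0, attack_range=1.5):
            self.state = "patrol"
            self.sight_range = sight_range
            self.attack_range = attack_range

        def update(self, enemy_pos, player_pos):
            # Pick a state based on distance to the player.
            dist = math.dist(enemy_pos, player_pos)
            if dist <= self.attack_range:
                self.state = "attack"
            elif dist <= self.sight_range:
                self.state = "chase"
            else:
                self.state = "patrol"
            return self.state

    ai = EnemyAI()
    print(ai.update((0, 0), (3, 4)))  # distance 5 -> "chase"

A few if/else checks over distances and timers; calling that the same thing as an LLM is exactly the confusion being pointed out.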