You know exactly what we’re talking about when we look at this article and say “AI doesn’t work.” If you want to feign outrage, save it for the tech companies that muddy the waters.
Even if someone’s inaccurately using “AI” as a synonym for LLMs, that claim would still be false - because LLMs work. You can use one right now.
An LLM spitting out false information isn’t a sign it’s not working. That’s not what LLMs are designed for. They’re chatbots - not generally intelligent systems. They don’t think - they talk.
If you can understand that the sentence “AI doesn’t work” is about LLMs, surely you can also understand that “not working” is synonymous with returning incorrect outputs.
I have literally no idea what else you’d be arguing. Its ability to generate words? Everybody knows it can do that.
The vast majority of people aren’t educated on the correct terminology here. They don’t know the difference between AI, LLM, AGI, ASI, etc. That makes it near impossible to have real discussions about AI - everyone’s constantly talking past each other and using the same words to mean completely different things.
My original comment wasn’t even challenging their claim that “AI doesn’t work.” I was just pointing out that AI and LLM aren’t synonymous. It’s my one-man fight against sloppy, imprecise use of language. I’d rather engage with what people are actually saying, not with what I assume they’re saying.
When it comes to LLMs, it’s not just a “word generator.” It’s a system that generates natural-sounding language based on statistical probabilities and patterns. In other words: it talks. That’s all. Saying an LLM “doesn’t work” because it spits out inaccurate info is like saying a chess bot doesn’t work because it can’t play poker. No - that’s user error. They’re trying to use the tool for something it was never designed to do.
To belabor the chess analogy: I would say a chess bot didn’t work if it randomly caused pieces to appear. Or if it made exceedingly lousy moves. You’d apparently say it was working because it technically changed the board.
Literally nobody is saying the token predictor isn’t predicting tokens. It’s just predicting the wrong tokens, which normal people call “not working,” while tech evangelists prefer to call it “hallucination” or “misalignment” depending on the narrative they’re aiming for.
The goal of the token predictor is to produce coherent language - not factual information. If you can understand what it’s saying, it’s working - even if the content of what it says is factually inaccurate.
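To make the “token predictor” point concrete, here’s a toy sketch of what next-token prediction means: given the words so far, the model only knows which word is statistically likely to come next, and samples from that distribution. The probability table below is entirely made up for illustration - it is not from any real model.

```python
import random

# Invented toy probabilities: given the previous word, which word tends to
# follow. A real LLM learns something analogous from vast amounts of text.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "moon": 0.3, "capital": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "moon": {"landing": 0.6, "rose": 0.4},
}

def generate(start, steps, rng):
    """Pick each next word by weighted chance.

    Note what this optimizes for: plausible-sounding continuations,
    not true ones. Nothing here checks facts.
    """
    words = [start]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(words[-1])
        if dist is None:  # no continuation known for this word
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 2, random.Random(0)))
```

Every output this produces is grammatical-looking, and none of it is fact-checked - which is exactly the distinction between “coherent” and “accurate” being argued over here.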
Accuracy is the only thing people want, and the only thing AI companies talk about. The text has already been legible, and it’s been that way for years. I think you’re alone on your quest to lower the bar for the word “works.”