A lady I can’t stand, who is condescending to everyone like the worst version of a grade school teacher, resigned recently so she can market her GPT “models” full time.
Last week she used “Salem,” her GPT prompt BS, to read through a detailed document my department put together to see what was missing. She shared her screen, full of obvious GPT slop, and started pointing out all the things we didn’t answer. The lady on my team who put that very specific and detailed document together just started reading the answers right from the document, showing each point had been clearly and precisely answered. The lady I can’t stand stopped sharing her screen and quit talking.
The moral of the story: GPT did me a solid by convincing this person, who’s clearly never heard of the Dunning-Kruger effect, that she needs to quit her well-paying job and stop being a pain in my ass.
Thank you GPT!! Best thing it’s ever done for me.
Holy shit. I think you just found a valid use for LLMs. OpenAI valuation intensifies
Software developer, here. (No, not a “vibe coder.” I actually know how to read and write my own code and what it does.)
Just had the opportunity to test GPT 5 as a coding assistant in Copilot for VS Code, which in my opinion is the only legitimately useful purpose for LLMs. (No, not to write everything for me, just to do some of the more tedious tasks faster.) The IDE itself can help keep them in line, because it detects when they screw up. Which is all the time, due to their nature. Even recent and relatively “good” models like Sonnet need constant babysitting.
GPT 5 failed spectacularly. So badly, in fact, that I’m glad I only set it to analysis tasks and not to any write tasks. I will not be using it for anything else any time soon.
Yeah, LLMs are decent with coding tasks if you know what you’re doing and can properly guide them (and check their work!), but fuck if they don’t take a lot of effort to rein in. I will say they’re pretty damned good at debugging the shit I wrote. I’ve been working on an audit project for a few months and 4o/5 have helped me a good bit to find persistent errors in my execution logic that I just kept missing on rereads and debug runs.
But generating new code is painful. I had 5 generate a new function for me yesterday to do some issue recon and report generation, and I spent 20 minutes going back and forth with it repeatedly dropping fields in the output. Even on 5, it still struggles at times to not give you the same wrong answer more than once, or just waffles between wrong answers.
Dude, forgetting stuff has to be one of the most frustrating parts of the entire process. Like forgetting a column in a database, or an entire piece of a function you just pasted in… Or trying to change things you never asked it to touch. So freaking annoying. I had standing instructions in its memory to not leave out pieces or modify things I didn’t ask for, and I’ll put that stuff in the prompt too, and it just does not care lol.
I’ve used it a lot for coding because I’m not a real programmer (more a code hacker) and need to get things done for a website, but I know just enough to know it’s really stupid sometimes lol.
Dude, forgetting stuff has to be one of the most frustrating parts of the entire process. Like forgetting a column in a database, or an entire piece of a function you just pasted in
It was actually worse. I was pulling data out of local logs and processing events. I asked it to assess a couple of columns that I was struggling to parse properly, and it got those in, but dropped some of my existing columns. I pointed out the error, it acknowledged the issue, then spat out code that reverted to the first output!
Though, that wasn’t nearly as bad as it telling me that a variable a couple hundred lines (and multiple transformations) in wasn’t being populated by an earlier variable, so I literally went in, copied each declaration line, and sent them back like I was smacking an intern on the nose or something…
For a bot designed to read and analyze text, it is surprisingly bad at the whole ‘reading’ aspect. But maybe that’s just how human-like the intelligence is /s
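If anyone else is fighting the dropped-column thing, the only reliable fix I’ve found is to stop trusting the output and assert on it. A minimal sketch, assuming pandas; the column names and the “transformation” are made-up stand-ins for whatever the model rewrote:

```python
# Guard that a rewritten transformation never silently drops columns.
# Assumes pandas; the frames and column names are hypothetical.
import pandas as pd

def check_no_dropped_columns(before: pd.DataFrame, after: pd.DataFrame) -> None:
    missing = set(before.columns) - set(after.columns)
    if missing:
        raise AssertionError(f"transformation dropped columns: {sorted(missing)}")

raw = pd.DataFrame(columns=["timestamp", "host", "event_id", "message"])
processed = raw.drop(columns=["message"])  # simulate the model's "help"

check_no_dropped_columns(raw, processed)  # raises: dropped ['message']
```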
Or trying to change things you never asked it to touch. So freaking annoying. I had standing instructions in its memory to not leave out pieces or modify things I didn’t ask for, and I’ll put that stuff in the prompt too, and it just does not care lol
OMFG this. I’ve had decent luck recently after setting up a project and explicitly laying out a number of global directives, because yeah, it was awful trying to figure out exactly what changed when I diffed the input and output and fucking everything was red because even the goddamned comments had changed. But even just trying to make it understand basic style requirements was a solid half hour of arguing with it (only partially because I forgot the proper names of the casings) so it wouldn’t make me lint the whole goddamned script when I just told it to analyze and fix one item.
Yessir, I’ve basically run into all of that. It’s fucking infuriating. It really is like talking to a toddler at times. There seems to be a limit to the complexity of what it can process before it just starts messing everything up. Like once you hit its limit, it will not process the entire thing no matter how many times you correct it, like in your example. You fix one problem and then it just forgets a different piece. FFFFFFFFFF.
LLMs are decent with coding tasks if you know what you’re doing
Only if the thing you are trying to do is commonly used and well documented, but in that case you could just read the documentation instead and learn a thing yourself, right?
The other day I tried to get some instructions on how to do something specific in a rather obscure and opaquely documented CLI tool that I need for work. I couldn’t quite make sense of the documentation, and I found the program’s behavior a bit erratic, so that’s why I turned to AI. It cheerfully and confidently told me (I’m paraphrasing): oh, to do “this specific thing” you have to use the `--something-specific` switch, and then it gave some command line examples using that switch that looked like they made complete sense. So I thought: oh, did I overlook that switch? Could it be that easy? So I looked in the documentation and sure enough… the AI had been bullshitting me and that switch didn’t exist.
Then there was the time when I asked it to generate an ARM template (again, poorly documented bullshit) to create some service in Azure with some specific parameters. It gave me something that looked like an ARM template, but sure as hell wasn’t a valid one. This one wasn’t completely useless though; at least I was able to cross-reference it with an existing template, and with some trial and error I copied over some of the elements that I needed.
I’m no longer even confident in modern LLMs to do stuff like convert a table schema or JSON document into a POCO. I tried this the other day with a field list from a table creation script. All it had to do was reformat the fields into a dumb C# model. Inexplicably, it did fine except for omitting a random field in the middle of the list. Kinda shakes your confidence in LLMs for even the most basic programming tasks.
More and more, for tasks like that I simply will not use an LLM at all. I’ll use a nice, predictable, deterministic script. Weirdly, LLMs are pretty decent at writing those.
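To make that concrete, here’s a minimal sketch of the kind of deterministic script I mean, for the field-list-to-POCO case above. The `name SQL_TYPE` input format and the tiny type map are my own assumptions, nothing more:

```python
# Deterministic field-list -> C# POCO converter (toy sketch).
# Assumes one "name SQL_TYPE," pair per line; the type map is deliberately tiny.
SQL_TO_CSHARP = {
    "int": "int",
    "bigint": "long",
    "bit": "bool",
    "nvarchar": "string",
    "datetime2": "DateTime",
}

def to_pascal(name: str) -> str:
    return "".join(part.capitalize() for part in name.split("_"))

def fields_to_poco(class_name: str, field_list: str) -> str:
    lines = [f"public class {class_name}", "{"]
    for raw in field_list.strip().splitlines():
        name, sql_type = raw.strip().rstrip(",").split()[:2]
        cs_type = SQL_TO_CSHARP[sql_type.split("(")[0].lower()]
        lines.append(f"    public {cs_type} {to_pascal(name)} {{ get; set; }}")
    lines.append("}")
    return "\n".join(lines)

print(fields_to_poco("AuditRecord", """
    audit_id bigint,
    user_name nvarchar(64),
    created_at datetime2,
"""))
```

Every field that goes in comes out exactly once, every time. No randomly omitted column in the middle of the list.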
Just had the opportunity to test GPT 5 as a coding assistant in Copilot for VS Code, which in my opinion is the only legitimately useful purpose for LLMs.
The best use of an LLM, sadly, is to use it on social media to spread disinformation. At that point making shit up isn’t a bug but a feature.
For coding I am still not sold on it. It seems to excel at tasks that have been done millions of times, like programming assignments at school, example/tutorial code, and interview questions.
For me, while it helps in some cases, I still have to go over the code and understand it, and very often it introduces subtle bugs, or I could have written something more concise that fits my needs. In those cases all the advantages are nullified; I suspect it might actually be slowing me down.
It feels to me that LLMs are a godsend to all the coders who previously copied code from Stack Overflow. It greatly streamlined the process, and it pulls in all the code published on GitHub too.
Have you given Qwen or GLM 4.5 a shot?
Not yet. I’ll give them a shot if they promise never to say “you’re absolutely correct” or give me un-requested summaries about how awesome they are in the middle of an unfinished task.
Actually, I have to give GPT 5 credit on one thing: it’s actually sort of paying attention to the `copilot-instructions.md` file, because I put this snippet in it: “You don’t celebrate half-finished features, and your summaries of what you’ve accomplished are not only rare, they’re never more than five sentences long. You just get straight to the point.” And - surprise, surprise - it has strictly followed that instruction. Fucks up everything else, though.
ChatGPT Is Still a Bullshit Machine
Just like Sam Altman.
I will add that they aren’t even tackling basic issues like the randomness of sampling; all OpenAI models still need a highish temperature. It’s like the poster child of corporate scaling vs actually innovating.
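For anyone who hasn’t looked under the hood, here’s a minimal sketch of what temperature actually does to next-token sampling; the logits and vocabulary are made up for illustration, nothing from any real model:

```python
# Toy temperature sampling over next-token logits, using numpy.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "a", "and", "bug", "feature"]
logits = np.array([2.0, 1.5, 0.5, 0.3, 0.1])

def sample(logits: np.ndarray, temperature: float) -> str:
    scaled = logits / temperature          # T < 1 sharpens, T > 1 flattens
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

print([sample(logits, 1.0) for _ in range(5)])   # varied picks
print([sample(logits, 0.01) for _ in range(5)])  # near-greedy: "the" every time
```

Crank the temperature down and you get deterministic-ish but repetitive output; crank it up and you get the waffling. Nothing in the pipeline ever checks for correctness either way.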
That’s because they’re not trying to make AI, they are just programming LLMs to be bullshit confirmation bias machines. They don’t want to create intelligence, they want to create unregulated revenue streams.
I tried ChatGPT-5 for the first time this morning. I asked it to help me create an RSS list for some world news articles. For context I have never used RSS before. That was 90 minutes of my life I’ll never get back. Also, you can no longer choose any other models except for 5 and the 5-thinking model. Access to 4o is gone.
What did it do exactly? I haven’t kept up with AI because I’d rather not engage with it.
It was consistently wrong about how to set up a group feed on every single website it suggested. I started out trying to work on my iPad, and it kept telling me where to find things, and it was wrong, and how to do something, and it was wrong, and it kept giving me supposed desktop instructions even after I told it I was on an iPad (an iPad is somehow different, it claimed). So I went to the desktop, and it was exactly the same as on the iPad, as I knew it would be, meaning its instructions were just wrong. When I called it out, I asked it on its final recommendation to be absolutely sure that the process it was about to give me was correct and up to date, and to read the website first before giving me the information; it lied to me, said it would check first, and then didn’t do what it said it was going to do. Plus, about 70% of the RSS links it gave me were bad. It was just a really frustrating experience. Essentially, going from 4o to 5 was a step backwards in AI evolution, from what I can tell.
My kids will never be smarter than AI
Sam Altman
Always has been.
One thing I didn’t expect in AI evolution was all the “safety” features. Basically, since people are too stupid to use LLMs without falling in love or poisoning themselves, all the models have more and more guardrails, making them way less useful than normal search. I think it was fairly clear from the beginning that LLMs will always be bullshit machines, but I didn’t think they would become less and less useful bullshit machines.
I saw one article actually going all in on how incredible GPT-5 was.
The thing is, the biggest piece that really made the author excited was the “startup idea,” where it proceeded to generate a mountain of business-speak that says nothing. He proceeded to proclaim that a whole team of MBAs would take hours to produce something so magnificent. This pretty much made him lose it, and I guess that’s exactly the sort of content idiot executives slurp up.
As a friend and AI defender recently put it
This feels like an oxymoron
Nothing to see here, just another “token-based LLMs can’t count letters in words” post
That’s what I thought, but there’s slightly more to it than that.
The writer tried to trick ChatGPT 5, saying Vermont has no R in it. ChatGPT did say, “wait, it does.” But then, when pushed, it said, “oh right, there is no R in Vermont.”
I mean… the inability to know what it knows or not is a real problem for most use cases…
Yeah, the fact that you can “gaslight” a chat is just as much a symptom of the problem as the usual mistakes. It shows that it doesn’t deal in facts, but in structurally sound content, which is correlated with facts, especially when context/RAG stuffs the prompt using more traditional retrieval approaches that tend to cram in more factual material.
To all the people white-knighting for the LLM, for the thousandth time: we know it is useful, but its usefulness is only tenuously connected to the marketing reality. Getting letter counting wrong is less important than the fact that it “acts” like it can count when it can’t.
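If anyone wants to see the token-vs-letter mismatch directly, here’s a minimal sketch assuming OpenAI’s tiktoken package. The model operates on token ids, not characters, while a one-liner that actually works on characters gets it trivially right:

```python
# Show what a token-based model "sees" for a word, assuming tiktoken.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Vermont")
print([enc.decode_single_token_bytes(t) for t in tokens])  # token chunks, not letters

# Counting letters is trivial when you operate on characters:
print("Vermont".lower().count("r"))  # 1
```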
So is SamA.