To those who have played around with LLM code generation more than me, how are they at debugging?
I’m thinking of Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” If vibe coding reduces the complexity of writing code by 10x, but debugging remains just as difficult as before, then Kernighan’s Law needs to be updated to say debugging is 20x as hard as vibe coding. Vibe coders have no hope of bridging that gap.
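Just to spell out the arithmetic behind that 20x (treating “hardness” as a single effort number, which is obviously a simplification):

```latex
% W = effort to write the code by hand
% Kernighan: debugging effort = 2W
% vibe coding: writing effort = W / 10
\frac{\text{debugging}}{\text{vibe-coding}} = \frac{2W}{W / 10} = 20
```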
They’re not good at debugging. The article is pretty spot on, IMO - they’re great at doing the work, but you are still the brain. You’re still deciding what to do, and maybe 50% of the time how to do it; you’re just not executing at the lowest level anymore. Similar for debugging - it’s not an exercise at the lowest level, and it needs you to run it.
The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I’ve been experimenting with it a lot lately.
In my experience, it’s worse than useless when it comes to debugging code. The class of errors it can solve is generally simple stuff like typos and syntax errors — the sort of thing a human would solve in 30 seconds by looking at a stack trace. The much more important class of problems, errors in the business logic, it really, really sucks at solving.
For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you’re a dev who’s desperate enough to ask AI for help debugging something, you probably don’t know what’s wrong either, so it won’t be immediately clear whether the AI just gave you garbage or whether its suggestion has any real merit. So you go check and manually confirm that the LLM is full of shit, which costs you time… then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first (“Aha! I see the real cause of the issue now!”), but it will still be nonsense. You waste more time ruling out the second suggestion, then go back to the AI to scold it for being wrong again.
Rinse and repeat this cycle enough times until your manager is happy you’ve hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.
As seems to be the case in all of these situations, AI fails hard at tasks when compared to tools specifically designed for the task. I use Ruff in all my Python projects because it formats my code and finds (and often fixes) the kind of low-complexity/high-probability problems that are likely to pop up as a result of human imperfection. It does it with great accuracy, incredible speed, and very little computing power, and it only auto-applies the fixes it considers safe. I can run it as an automation step when someone proposes code changes, adding all of 3 or 4 seconds to the runtime. I can run it on my local machine to instantly resolve my ID10T errors. If AI can’t solve these problems as quickly, and if it can’t solve anything more complicated reliably, I don’t understand why it would be a tool I would use.
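For reference, that whole workflow is basically two commands; here’s the local version as a tiny script (a sketch that just shells out to Ruff and assumes it’s installed in the environment):

```python
# Minimal sketch of the Ruff step described above: format the project, then
# lint it and apply the fixes Ruff marks as safe. Assumes `ruff` is on PATH.
import subprocess

subprocess.run(["ruff", "format", "."], check=True)          # rewrite files to the configured style
subprocess.run(["ruff", "check", "--fix", "."], check=True)  # lint and auto-apply safe fixes
```

In CI you’d typically run the same commands without `--fix` so the check just fails instead of rewriting anything.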
I use it extensively daily.
It cannot step through code right now, so true debugging is not something you use it for. Most of the time the llm will take the junior engineer approach of “guess and check” unless you explicitly give it better guidance.
My process is generally to start with unit tests and type definitions, then a large multipage prompt for every segment of the app the llm will be tasked with. Then I’ll make a snapshot of the code, give the tool access to the markdown prompt, and validate its work. When there are failures and the project has extensive unit tests, it generally follows the same pattern of “I see that this failure should be added to the unit tests”, which it does, and then re-executes them during iterative development.
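Concretely, the “tests and types first” part looks something like this (a made-up example, not from a real project; the point is that the hand-written test is the spec the llm’s implementation has to satisfy):

```python
# Hypothetical "contract first" setup: a type definition and a unit test written
# by hand before the llm is asked to implement anything. The module and function
# names here (myapp.summarize, summarize_transcript) are placeholders.
from dataclasses import dataclass, field


@dataclass
class Summary:
    title: str
    bullet_points: list[str] = field(default_factory=list)


def test_summary_has_title_and_points():
    # the llm is told to make this pass; the test doubles as the spec
    from myapp.summarize import summarize_transcript  # placeholder module

    result = summarize_transcript("rambling voice memo about grocery lists")
    assert isinstance(result, Summary)
    assert result.title
    assert len(result.bullet_points) >= 1
```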
If tests are not available, or if it’s not something directly accessible to the tool, then it will generally rely on logs, either ones it generates directly or ones provided by the user.
My role these days is to provide long, well-thought-out prompts, verify the integrity of the code after every commit, and generally just kind of treat the llm as a reckless junior dev. Sometimes junior devs can surprise you. Yesterday I was very surprised by a one-shot result: I asked for a mobile React Native app for taking my rambling voice recordings and summarizing them into prompts, it was immediately remarkably successful, and now I’ve been walking around mic’d up to generate prompts.
Working just fine. It one-shot a Kodi TV channel addon for me last weekend. Used it to integrate Kofax into DocuSign. Building 2 Blazor apps, one new, one an upgrade. Used it to create a stack of Minecraft servers for the kids with a dashboard of statuses and control switches. My son is working on his own Minecraft mod with it. Use it almost daily for random file organization and management scripts. Using it to clean up my media library metadata. Anytime I have to do something to more than 5 or so files, I pull it up and ask for a script.
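Those file scripts are all the same flavor; this is a made-up example of the kind of thing I ask for (not an actual script it produced):

```python
# Toy example of a "do something to a folder of files" request: sort everything
# in a directory into subfolders named after the file extension.
from pathlib import Path

SOURCE = Path("~/Downloads").expanduser()  # placeholder path

for item in SOURCE.iterdir():
    if not item.is_file():
        continue
    ext = item.suffix.lstrip(".").lower() or "no_extension"
    dest = SOURCE / ext
    dest.mkdir(exist_ok=True)
    item.rename(dest / item.name)  # move the file into its extension folder
```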
It’s a tool like any other. There will be people who adapt and people who fail to. Just like we had with computers and the internet. It seems to be long forgotten now, but literally ALL of these anti-AI arguments were made against computers and the internet 30-50 years ago. Very similar ones were made when books and writing became commonplace as well.
“Some random people were wrong about something in the past so nobody is allowed to speculate that any technology isn’t as revolutionary as it’s hyped to be ever again” is not a useful or compelling argument.
How are they at debugging? In a silo, they’re shit.
I’ve been using one LLM to debug the other this past week for a personal project, and it can be a bit tedious sometimes, but it eventually does a decent enough job. I’m pretty much vibe coding things that are a bit out of my immediate knowledge and skill set, but I know how they’re supposed to work. For example, I’ve got some Python scripts using Rekognition to scan photos for porn or other explicit stuff before they get sent to an S3 bucket. After that happens, there’s now a dashboard that gives me results on how many images were scanned and then marked as either acceptable or flagged as inappropriate. Once someone crosses a threshold of too many inappropriate images, it’ll shadowban them from sending any more dick pics.
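The core of those scripts boils down to something like this (a rough sketch; the bucket name, threshold, and function names are placeholders, not the code the LLMs actually wrote):

```python
# Rough sketch of the moderation flow described above: run an image through
# Rekognition's moderation labels and only upload it to S3 if it comes back clean.
# Bucket name and confidence threshold are placeholders.
import boto3

rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

BUCKET = "example-photo-bucket"  # placeholder
MIN_CONFIDENCE = 80              # only count labels Rekognition is reasonably sure about


def is_acceptable(image_bytes: bytes) -> bool:
    """True if Rekognition reports no moderation labels above the threshold."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=MIN_CONFIDENCE,
    )
    return len(response["ModerationLabels"]) == 0


def scan_and_upload(path: str, key: str) -> bool:
    """Upload the photo only if it passes moderation; return whether it was accepted."""
    with open(path, "rb") as f:
        image_bytes = f.read()
    if not is_acceptable(image_bytes):
        return False  # flagged: skip the upload and let the dashboard/shadowban logic count it
    s3.put_object(Bucket=BUCKET, Key=key, Body=image_bytes)
    return True
```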
For someone that’s never taken a coding course, I’m relatively happy with the results I’m getting so far. Granted, this may be small potatoes for someone with an actual development background; but as someone that’s been working adjacent to those folks for several years, I’m happy with the output.
I’ve used AI by just pasting code in, then asking if there’s anything wrong with it. It would find things that were wrong with it, but it would also say some things were wrong when they were actually fine.
I’ve used it in an agentic AI (Cursor), and it’s not good at debugging any slightly-complex code. It would often get “stuck” on errors that were obvious to me, making wrong, sometimes nonsensical changes.
I work at a big AI company on llm-generated code for automation. I’ve had Cursor solve a bug that was occurring in prod after prompting and asking questions about its responses. It took a few rounds, but it found a really obscure interaction between the app and the host, and it thanked me for the insight. 😀 I deployed the fix and it worked.
The problem I have is that I remember it solving this bug, and I remember being impressed, but I don’t remember the bug. I took a screenshot of it, but I don’t currently have access to those. I’m disconnected from the code the llm has generated, but I’m intimately aware of how the app works and what it should do, because I had to write the requirements and design doc.