The real slowdown comes later, when you realize you don’t understand your own codebase because you relied too much on AI. Understanding it well enough takes discipline, which is in short supply in the current IT world anyway. Either you rely entirely on AI, or you monitor its every action, in which case you may be better off writing the code yourself. I don’t think this hybrid approach will pan out particularly well.
Yeah, it’s interesting how strangely development gets presented, as if programming were only about writing code. They still do that when they tout AI coding capabilities.
I’m not against AI; it’s amazing how quickly you can build something. But only something small and limited that one person can build. The whole human experience is missing: laziness, boredom, communication and the problems that come with it, everything you need to actually build a good product that’s more than a simple app.
Any new tool or technique will slow ANYONE down until you familiarize yourself with it and get used to it.
This article might as well say the sky is blue and the grass is green. It isn’t news, and it’s quite obvious it will take a few uses to get decent with it, like any other new tool, software, etc.
The article clearly mentioned they weren’t experienced with AI, that’s the new tool.
devs who were used to the tools
Not true - here’s an excerpt from the article:
including only a specialized group of people to whom these AI tools were brand new.
This is true. However, the issue is that we keep oscillating between “AI is useless and overhyped” and “it will solve all of life’s problems and you shouldn’t call it slop, out of respect.” The truth is somewhere in between, but we have to fight to find it.
When writing code, I don’t let AI do the heavy lifting. Instead, I use it to push back the fog of war on tech I’m trying to master, while keeping the dialogue to a space where I can verify what it’s giving me. A few rules I stick to (with a small example after the list):
- Never ask leading questions. Every token you add to the conversation matters, so phrase your query in a way that forces the AI to connect the dots for you
- Don’t ask for deep reasoning and inference. It’s not built for this, and it will bullshit/hallucinate if you push it to do so.
- Ask for live hyperlinks so it’s easier to fact-check.
- Ask for code samples, algorithms, or snippets to do discrete tasks that you can easily follow.
- Ask for A/B comparisons between one stack you know by heart, and the other you’re exploring.
- It will screw this up, eventually. Report hallucinations back to the conversation.
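For instance, here’s the kind of rephrasing I mean for the “no leading questions” rule, combined with the asks for live links and a discrete snippet. The FastAPI detail is just a hypothetical stand-in; use whatever tech you’re actually exploring:

```python
# Hypothetical illustration of a leading vs. a neutral query. The framework named
# here is only an example; the point is the shape of the question, not the tech.
leading = (
    "FastAPI's BackgroundTasks runs jobs in a separate process, right? "
    "How do I use it?"
)  # bakes my assumption into the prompt and invites a confident confirmation, right or wrong

neutral = (
    "When does FastAPI's BackgroundTasks work actually run relative to the "
    "request lifecycle? Show a minimal snippet that schedules one task, and "
    "include live links to the relevant docs so I can verify."
)  # forces the model to connect the dots itself, and gives me something to fact-check
```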
About 20% of the time, it’ll suggest things that are entirely plausible and probably should exist, but don’t. Some platforms and APIs really do have barn-door-sized holes in them and it’s staggering how rapidly AI reports a false positive in these spaces. It’s almost as if the whole ML training stratagem assumes a kind of uniformity across the training set, on all axes, that leads to this flavor of hallucination. In any event, it’s been helpful to know this is where it’s most likely to trip up.
Edit: an example of one such API hole is when I asked ChatGPT for information about doing specific things in Datastar. This is kind of a curveball since there’s not a huge amount online about it. It first hallucinated an attribute namespace prefix of `data-star-`, which is incorrect (it uses `data-` instead). It also dreamed up a JavaScript-callable API parked on a non-existent `Datastar` object. Both of those concepts conform strongly to the broader world of browser-extending APIs, would be incredibly useful, and are things you might expect to be there in the first place.
My problem with this, if I understand correctly, is that I can usually do all of this faster without having to lead an LLM around by the nose and try to coerce it into being helpful.
That said, search engines do suck ass these days (thanks LLMs)
That’s been my biggest problem with the current state of affairs. It’s now easier to research newer tech through an LLM than it is to play search-result whack-a-mole, on the off chance that what you need is on a forum that’s not Discord. At least an AI can mostly make sense of vendor docs and extrapolate a bit from there. That said, I don’t like it.
People will literally do anything to avoid rtfm
I like your strategy. I use a system prompt that forces it to ask a question if there are options or if it has to make assumptions. Controlling context is key. It will get lost if it has too much, so I start a new chat frequently. I also will do the same prompts on two models from different providers at the same time and cross reference the idiots to see if they are lying to me.
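If it helps, here’s roughly what the cross-referencing looks like on my end. A minimal sketch assuming the openai and anthropic Python SDKs with API keys in the environment; the model names are placeholders, swap in whatever you actually have access to:

```python
# Same prompt, same system prompt, two providers; then eyeball the answers side by side.
from openai import OpenAI
from anthropic import Anthropic

SYSTEM = (
    "If the request is ambiguous, has multiple reasonable options, or forces you "
    "to make assumptions, ask a clarifying question before answering."
)

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        system=SYSTEM,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    prompt = "Compare error handling in Rust's Result type with Go's error values."
    print("--- provider A ---\n" + ask_openai(prompt))
    print("--- provider B ---\n" + ask_anthropic(prompt))
```

Each run starts a fresh conversation, so nothing from a previous chat can leak in.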
I use a system prompt that forces it to ask a question if there are options or if it has to make assumptions
I’m kind of amazed that even works. I’ll have to try that. Then again, I’ve asked ChatGPT to “respond to all prompts like a Magic 8-ball” and it knocked it out of the park.
so I start a new chat frequently.
I do this as well, and totally forgot to mention it. Yes, I keep the context small and fresh so that prior conversations (and hallucinations) can’t poison new dialogues.
I also will do the same prompts on two models from different providers at the same time and cross reference the idiots to see if they are lying to me.
Oooh… straight to my toolbox with that one. Cheers.
I forgot another key. The code snippets they give you are bloated and usually do unnecessary things. You are actually going to have to think to pull out the needed line(s) and clean it up. I never copy paste.
I find it best to get the agent into a loop where it can self-verify. Give it a clear set of constraints and requirements, give it the context it needs to understand the space, give it a way to verify that it’s completed its task successfully, and let it go off. Agents may stumble around a bit, but as long as you’ve made the task manageable they’ll self-correct and get there.
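A bare-bones sketch of that loop, with the test suite as the verification step. ask_agent and apply_patch are stand-ins for whatever agent tooling you actually use, not a real API:

```python
# Give the agent the task plus the latest test failures, apply its patch,
# re-run the tests, and repeat until they pass or we run out of attempts.
import subprocess
from typing import Callable

MAX_ATTEMPTS = 5

def run_tests() -> tuple[bool, str]:
    """Verification step: run the test suite and capture its output."""
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(
    task: str,
    ask_agent: Callable[[str, str], str],   # (task, feedback) -> patch text
    apply_patch: Callable[[str], None],     # writes the patch into the repo
) -> bool:
    feedback = ""
    for _ in range(MAX_ATTEMPTS):
        patch = ask_agent(task, feedback)   # constraints and context live in the task prompt
        apply_patch(patch)
        ok, output = run_tests()
        if ok:
            return True                     # the agent verified its own work
        feedback = output                   # feed failures back so it can self-correct
    return False
```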
And this gets worse over time because you still have to maintain it.
And as the cherry on top - https://www.techradar.com/pro/nearly-half-of-all-code-generated-by-ai-found-to-contain-security-flaws-even-big-llms-affected
Not surprised.
In my last job, my boss used more and more AI. As a senior dev, I was very used to his coding patterns. I knew the code he wrote and could generally follow what he made. The more he used AI, the less understandable and the more confusing and buggy the code became.
Eventually, the CEO of the company abused the “gains” of the AI “productivity” to push for more features with tighter deadlines. That meant the technical debt kept growing, and I got assigned to fixing the messes the AI was shitting all over the codebase.
In the end? We had several critical security vulnerabilities and a code base that even I couldn’t understand. It was dogshit. AI will only ever be used to “increase productivity” and profit while ignoring the chilling effects: lower quality code, buggy software and dogshit working conditions.
Enduring 3 months of this severely burnt me out, and I had to quit. The rabid profit incentive needs to go to fucking hell. God, I despise tech bros.
Here’s the full paper for the study this article is about: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (PDF).
Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%–AI tooling slowed developers down.
The gap between what they expected, what they thought happened, and what actually happened is insane.
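To put numbers on it, here’s the miss on a hypothetical 100-minute task; the percentages come from the paper, the baseline is made up purely for illustration:

```python
# Illustrative arithmetic only: percentages from the paper, 100-minute baseline invented.
baseline = 100                       # minutes without AI (hypothetical task)
forecast = baseline * (1 - 0.24)     # 76 min: what devs predicted beforehand
felt     = baseline * (1 - 0.20)     # 80 min: what devs believed afterwards
actual   = baseline * (1 + 0.19)     # 119 min: what the study measured

print(f"forecast {forecast:.0f} min, felt {felt:.0f} min, actual {actual:.0f} min")
print(f"perception gap: {actual - felt:.0f} min on a {baseline}-minute task")
```

They didn’t just mispredict; they walked away still believing they’d been sped up.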
Thank you, the article is shit
No shit, Sherlock. Except that “AI” is a wrongly attributed marketing buzzword.
Someone on Mastodon was saying that whether you consider AI coding an advantage completely depends on whether you think of prompting the AI and verifying its output as “work.” If that’s work to you, the AI offers no benefit. If it’s not, then you may think you’ve freed up a bunch of time and energy.
The problem for me, then, is that I enjoy writing code. I do not enjoy telling other people what to do or reviewing their code. So AI is a valueless proposition to me because I like my job and am good at it.
I got an email a couple of weeks ago with an invitation to some paid study about AI. They were looking for programmers who would solve some tasks with and without AI help. I didn’t have the time or feel like participating, but if I did, I would 100% work slower on the tasks with AI just to help derail the pro-AI narrative. It’s not in my interest to help promote it. Just saying…
People assumed X, but in one experiment the result was Y.
And in how many experiments was the result in fact X, if it was just one in which it was Y?
I don’t actually disagree with the article, I’m just pointing out the title is meaningless.