The company’s shoddy opsec doesn’t directly equate to the model’s capabilities. I am not one to believe anyone’s hype, but I am also not one to believe the AI anti-hype that goes on throughout Lemmy. A year ago, according to Lemmy, LLMs could never produce working code at scale. Six months ago, according to Lemmy, LLMs could never produce working code that was secure enough to use in production. Now Lemmy believes LLMs can’t be disruptive to cybersecurity as a whole.
In 6 months I wonder what Lemmy will claim LLMs aren’t capable of.
I still have not seen evidence that LLMs can make it easier for engineers to produce working code at scale.
It’s not evidence, but anecdotally I have two clients who have been working fully agentic for a good six months, and they’re smashing it. Even looking at it critically, they haven’t been able to find any obvious negative impact on code quality, product stability, performance, or security.
I think the secret sauce is that they weren’t born with AI; they just integrated it into technical cultures that were already solid. In this context it does speed things up a bunch. Not a 100x multiplier or whatever dumb stuff they’re pushing on Twitter, but you do get high velocity without burning out your team or sacrificing quality.
And those two companies just happen to be ones I know; they’re nothing special. Take any boring software company from Paris or Berlin and I’m pretty sure you’ll get the evidence you seek.
Yeah, this is very linear. Just because something sucks in some ways doesn’t make it wholly incapable of other things.
BUT THESE ARE THEFT BOTS !!!111!!!111 THeY aRe thE ReASon NOboDy waNtS tO pAy FoR mY FuRry POrN ART !1!!!1!11!
Straw men