Vibe coding is a black hole. I’ve had some colleagues try to pass vibe-coded stuff off as their own work.
What I’m learning about what matters is that the code itself is secondary to the understanding you develop by creating the code. You don’t create the code? You don’t develop the understanding. Without the understanding, there is nothing.
Yes. And using the LLM to generate then developing the requisite understanding and making it maintainable is slower than just writing it in the first place. And that effect compounds with repetition.
The Register had an article, a year or 2 ago, about using AI in the opposite way: instead of creating the code, someone was using it to discover security problems in it, & they said it was really useful for that, & most of the things it identified, including some codebase which was sending private information off to some internet server, really were problems.
I wonder if using LLMs as editors, instead of writers, would be a better use for these things?
_ /\ _
A second pair of eyes has always been an acceptable way to use this imo, but it shouldn’t be the primary or only one.
They are pretty good at summarisation. If I want to catch up with a long review thread on a patch series I’ve just started looking at I occasionally ask Gemini to outline the development so far and the remaining issues.
How AI is killing everything.
Which is really its purpose, as far as I can see.
LLMs definitely kill the trust in open source software, because now everything can be a vibe-coded mess and it’s sometimes hard to check.
> LLMs definitely kill the trust in open source software, because now everything can be a vibe-coded mess and it’s sometimes hard to check.

Might make open source more trustworthy. It can’t be any harder to check than closed source.
A week or two back there was a post on Reddit where someone was advertising a project they’d put up on GitHub, and when I went to look at it I didn’t find any documentation explaining how it actually worked - just how to install it and run it.
So I gave Gemini the URL of the repository and asked it to generate a “Deep Research” report on how it worked. Got a very extensive and detailed breakdown, including some positives and negatives that weren’t mentioned in the existing readme.
I don’t know yet how good Gemini is at that, but I think this tool, https://deepwiki.com/, beats anything else for now.
I don’t trust proprietary software anyway.
Yeah, it’s to the point now where if I see emojis in the README.md on a repo, I just don’t even bother.
Check out this one I came across earlier - https://github.com/Jtensetti/fediverse-career-nexus/blob/main/README.md
It’s a federated LinkedIn. ofc it’s vibe coded.
Man… of all the vibe coding tools, Lovable has gotta be one of the most useless, too.
I work with people (all middle managers) who love Lovable because they can type a two-sentence description of an app and it will immediately vomit something into existence. But the code it generates is an absolute disaster, and the UIs it designs (which are supposed to be its main draw) are some of the most generic crap I’ve ever seen.
0/10, do not recommend.
Well, to be fair, you don’t even need to look at the md, since right at the top it says it’s built with Lovable.
or anywhere. Job descriptions for example.
Got a job application with a one-line cover letter: “Iam interested to work with u are company”. It was kinda refreshing to see that instead of a whole page of slop, like most of them are these days.
ttbomk, emojis are legal function-names in both Swift & Julia…
The Swift example was damned incomprehensible, & … well, it was Apple stuff, so making it look idiotic might have been some kind of cultural-exclusivity intention…
The Julia stuff, though, means that you can use Greek symbols, etc, for functions, & get things looking more like what they should…
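For what it’s worth, that Greek-symbol point isn’t Julia-only: Python 3 also accepts Greek letters in identifiers (though not emoji). A minimal sketch:

```python
import math

# Greek letters are legal Python 3 identifiers, so a formula can read
# close to its textbook notation.
def σ(xs):
    """Population standard deviation of a sequence of numbers."""
    μ = sum(xs) / len(xs)  # mean
    return math.sqrt(sum((x - μ) ** 2 for x in xs) / len(xs))

print(σ([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 2.0
```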
Also, I think emojis are actually better than my all-text style, for communicating intonation/emotion ( I’m old: learned last century ), & maybe us old geezers ought to adapt a bit, to such things…
That does NOT mean that cartoon “code” is good-enough, whether it’s cartoonish in plaintext or in emojis, though…
I’m just trying to keep the cultural-prejudice & the code-quality being distinct-categories of judgement, you know?
( & cultural-prejudice is an actual thing, though it’s usually called “religious wars”, isn’t it, in geekdom? )
_ /\ _
Interesting. I thought this would be another post about slop PRs and bug reports, but no, it’s about open source projects not being promoted by AI and missing out on adoption and revenue opportunities.
So I think we definitely see (and will see more) ‘templatization’ of software development. Some ways of writing apps that are easy for AI to understand and are promoted by it will see wider and wider adoption. Not just tools and libraries, but also folder structures, design patterns and so on. I’m not sure how bad this will be long term. Maybe it will just stabilize tooling? Do we really need a new React state-management library every 6 months?
Hard to tell how this will affect the development of proper tools (not vibe-coded ones). Commercial tools struggling to get traction will definitely suffer, but most of the libraries I use are hobby projects. I still see good tools with good documentation getting enough attention to grow, even fairly obscure ones. Then again, those tools often struggle to get enough contributors… Are we going to see a split between vibe-coded template apps for junior devs and proper tools for professionals? Will the EU step in and fund the core projects? I still see a way forward, so I’m fairly optimistic, but it’s really hard to predict what will happen in a couple of years.
I am building a commercial application in my free time and I can definitely see evidence of this templatization. There are things that are very common in C# developers’ implementations which I deliberately don’t want to do. The AI will do them with reckless abandon. I can tell it not to, but they sneak back in.
OSS library funding has been a huge issue in general. I really think the companies that have trillion dollar market caps can fund the development of top libraries but they just don’t.
Microslop played the long game when they bought github
Eh. I never considered myself some hard-core old professional, but:
The LLM will not interact with the developers of a library or tool, nor submit usable bug reports, nor be aware of any potential issues, no matter how well documented they are.
If an LLM introduces a dependency, I will sure as hell go look at it myself. Do enough people skip that step for this to become a problem?
There’s a term called “dependency hell”. Sure, this one dependency is fine, but it depends on 3 other libraries, those 3 depend on a sum of 7 others, etc…
It’s exacerbated by “oh, this library is updated for no other reason than that its version is newer, so we need to force that bleeding edge on every ecosystem we’re in” thinking.
We’ve absolutely lost the careful, measured long-term release and maintenance cadence that we built the Internet on.
Compare Systemd.
The worst dependency hell is when a library has a strict version dependency, and another library uses that same dependency. When the second library updates their minimum version of the dependency to one that is higher than the exact version needed for the first, THAT’S dependency hell.
This wouldn’t be a problem if libraries didn’t frequently make breaking changes to their api.
“Move fast and break things” is for startups with no userbase, not libraries with millions of users.
There are times when things need to be broken. But I also definitely understand your angle.
Nah, dependency hell is when two things you want to use depend on same thing, but different versions. The depth of dependencies needed to make “this one thing” work may or may not be a problem
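Both flavours boil down to the same arithmetic: once two consumers constrain the same package, the usable set is the intersection of their constraints, and it can easily be empty. A toy sketch in Python (hypothetical package and library names, just modelling the strict-pin-vs-raised-minimum scenario described above):

```python
# Two libraries in one app constrain the same shared dependency
# ("common" is hypothetical; versions are (major, minor, patch) tuples).
constraints = {
    "lib_a": lambda v: v == (1, 2, 0),   # strict pin: common==1.2.0
    "lib_b": lambda v: v >= (2, 0, 0),   # raised minimum: common>=2.0.0
}

available = [(1, 2, 0), (1, 9, 0), (2, 0, 0), (2, 3, 1)]

# A version is usable only if every consumer's constraint accepts it.
usable = [v for v in available if all(ok(v) for ok in constraints.values())]

print(usable)  # [] -> no version satisfies both: dependency hell
```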
The killing part is not necessarily people vibe-coding programs into OSS projects. Even if the OSS itself is not vibe coded, people using AI to integrate with it will result in lower engagement and thus kill the ecosystem:
> Together, these patterns suggest that AI mediation can divert interaction away from the surfaces where OSS projects monetize and recruit contributors.
From Section 2.3 of the reported paper.
This isn’t a problem with the AI; it’s a problem with the user. If you don’t know enough to select the library and make the AI use it, maybe you were never going to finish the project without AI anyway.
I smell clickbait