A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing number of “LLM-generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
From his perspective, he’s investing his free time and likely money into a project for people that are 99% of the time just leechers, as in they never contribute back and only complain.
Now he has a tool that he feels helps him deal with all the FREE labor he’s doing for everyone, and the very same people now want to tell him how to do the FREE labor he does for them.
I completely understand being pissed off by that.
I mean, a reasonable person would choose to stop rather than becoming an unethical egotistical fuckwit…
A reasonable person would have forked the repo and maintained the project themselves, or used something else. I’m also deathly allergic to LLM code, but I don’t come into someone else’s free project and tell them how they should live their life.
But I agree that it was bad style to remove the co-author attribute. He should have just said “yeah, I slop, so what?”
FOSS projects are built on trust. The developer removing the co-author attribute due to backlash followed by seemingly taunting people by telling them good luck to identify which is LLM code and which is human code is just plain bad behavior.
Own what you do. Be transparent with the community. The backlash isn’t going to kill you. But you dig yourself a deeper grave by openly admitting to obfuscate the development process of a FOSS project.
My personal issue is his choice of the model used. He’s chosen Anthropic which is complicit in a war, whose AI is being used by the military to further military interests. Out of many more ethical models out there, why go with that one specifically?
He’s chosen Anthropic which is complicit in a war, whose AI is being used by the military to further military interests. Out of many more ethical models out there, why go with that one specifically?
In case this isn’t a rhetorical question, Claude is considered to be leading the pack for developer functionality. I can’t comment on the overall decision process, but it’s clear that lots of people a) don’t think about ethical concerns b) don’t prioritize them in decisions or c) align.
All we can really do is ask that people consider these things or explain their process so we can make informed decisions.
fwiw, I agree with your general points.
This feels like an unreasonable take… Is it really egotistical?
From what I’m reading it definitely feels like an attempt to highroad the critics by claiming personal issues, while neatly skating past any of the actual concerns raised. That does seem quite self absorbed, if not precisely egotistical.
I think I would still weigh that against the work he put out. I have used that work a lot, so while I don’t completely ignore bad behavior, I still take the work into account.
So he is no longer maintaining it and Claude is. And what bullshit, “choose a company that doesn’t work with the military”. Does he know what the military is using right now, at this very instant, for AI?
I really hate this new trend of FOSS developers being attacked and harassed for using AI. You might not like if they are using AI. Or you might not like AI at all, but there’s no reason to harass people who are providing you free software. Let them develop it like they want. If you don’t like that they used AI, use another software. Or fork the software before they started using AI. But attacking people like that is not okay on so many levels. It’s not okay to attack people for the software they are using. It’s not okay to attack developers providing a free service and it’s not okay to attack people at all.
I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.
I tried fitting AI into my workflow just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.
Yeah, I mean, it’s not like AI can think. It’s just a glorified text predictor, the same as the one on your phone keyboard.
It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.
Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.
Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…
Vaccines are misinformation? What.
Not even free, just cheaper than an actual employee for now, but greed is inevitable and AI is computationally expensive, it’s only a matter of time before these AI companies start cranking up the prices.
You might genuinely be using it wrong.
At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.
Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output; that will produce a CLAUDE.md file that describes your project (which always gets added to your context).
Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.
Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (But just adding that info to the project context file is enough).
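To make that concrete, here’s a sketch of the kind of thing a hand-checked CLAUDE.md might contain. Every project detail below is invented for illustration:

```markdown
# invoice-service (hypothetical project)

## Stack
- .NET 10 web API, PostgreSQL, xUnit

## Conventions
- Run the full test suite before declaring a task done
- Follow the existing repository pattern in src/Data
- Never edit generated migrations by hand

## Gotchas
- We just upgraded to .NET 10; don't suggest APIs that changed since .NET 8
```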
Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.
Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.
It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.
Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”
It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.
So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)
but in the end it boils down to being a text prediction machine.
And we’re barely smarter than a bunch of monkeys throwing piles of shit at each other. Being reductive about its origins doesn’t really explain anything.
I trust the output as much as a random Stackoverflow reply with no votes :)
Yeah, but that’s why there are unit tests. Let it run its own tests and solve its own bugs. How many mistakes have you or I made because we hate writing unit tests? At least the LLM has no problem writing the tests, once you know the code works.
I’ve had better luck with using it in a TDD style. “Write a test for this issue, watch it fail, then make it pass.”
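For what that TDD loop looks like in practice, here’s a minimal Python sketch of the failing-test-first artifact you’d ask the model for. The module and bug are hypothetical; the point is that the test pins down the behavior before any fix is written:

```python
# Hypothetical example: ask the LLM to write this test first, run it to
# confirm it fails, then have it modify parse_price() until it passes.
import pytest

from pricing import parse_price  # hypothetical module under test


def test_parse_price_handles_thousands_separator():
    # The reported bug: "1,299.99" was silently parsed as 1.0
    assert parse_price("1,299.99") == pytest.approx(1299.99)


def test_parse_price_rejects_garbage():
    # Regression guard: nonsense input should fail loudly, not return 0
    with pytest.raises(ValueError):
        parse_price("not a price")
```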
I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.
A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.
In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), uses an embedding model to translate these tokens into a vector of semantic “meaning” which minimizes loss in a bidirectional encoding (probably), that is then trained against a rubric for one or more topic-area questions, retrained for instruction-following and explainability, retrained with reinforcement learning and human feedback to provide guardrails, and retrained again to make use of supplemental materials not part of the original training corpus (retrieval-augmented generation), then distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever) and maybe THEN made available to people to use. There are generally more parts to curriculum learning even than that, but it’s a representative-ish start.
My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word most likely to follow ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that, and Grok is terrible.
Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.
The training is sophisticated, but inference is unfortunately really a text prediction machine. Technically token prediction, but you get the idea.
For every single token/word. You input your system prompt, context, user input, then the output starts.
The
Feed the entire context back in and add the reply “The” at the end.
The capital
Feed everything in again with “The capital”
The capital of
Feed everything in again…
The capital of Austria
…
It literally works like that, which sounds crazy :)
The only control you as a user can have is the sampling, like temperature, top-k and so on. But that’s just to soften and randomize how deterministic the model is.
Edit: I should add that tool and subagent use makes this approach a bit more powerful nowadays. But it all boils down to text prediction again. Even the tools are described per text for what they are for.
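To illustrate, here’s a toy Python sketch of that loop. The model itself is faked with random numbers, but the feed-the-whole-context-back-in structure and the temperature/top-k sampling are the actual shape of naive inference (real servers add a KV cache so they don’t recompute everything on each step):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["The", "capital", "of", "Austria", "is", "Vienna", "."]

def fake_logits(tokens):
    # Stand-in for the real model: an actual LLM maps the WHOLE token
    # sequence seen so far to one score per vocabulary entry.
    return rng.normal(size=len(VOCAB))

def sample(logits, temperature=0.8, top_k=3):
    # The knobs the user controls: temperature softens the distribution,
    # top-k keeps only the k most likely tokens before drawing one.
    logits = logits / temperature
    top = np.argsort(logits)[-top_k:]
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return VOCAB[rng.choice(top, p=probs)]

context = ["The"]  # system prompt, context and user input would go here
for _ in range(5):
    next_token = sample(fake_logits(context))  # run the model on everything
    context.append(next_token)                 # ...and feed it all back in
print(" ".join(context))
```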
Unless that’s how people are designing front ends for models, it literally DOESN’T work like that. It works like that until you finish training an embedding model with masking related tasks, but that’s the tip of the iceberg. The input vector, after being tokenized, is ingested wholesale. Now there’s sometimes funny business to manage the size of a context window effectively but this isn’t that unless you’re home-rolling and you’re caching your own inputs or something before you give it to the model.
Most people on Lemmy probably haven’t given it a single minute let alone 5 minutes.
Just yesterday I had one of those moments of grace that are becoming commonplace.
Basically I had to migrate a service from an n8n workflow to an actual nodejs server for performance reasons. I spent 15 minutes carefully scoping the migration, telling it exactly what tools to use and code style to adopt. Gave it the original brief and access to the n8n workflows.
The whole thing was done in 4 minutes and 30 seconds. It even noticed a bug which had been in production unnoticed for the past year. It gave me some good documentation on how to set up the Google service account, and the kind of memory usage to expect so I can dimension the instance accordingly. Another five minutes and I had a whole test suite with decent coverage. I had negotiated with the client that it would take around a week; well, that was the under-promise of the year…
People who go around saying it doesn’t work are incompetent, out of their minds, or straight-up lying.
At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.
I don’t know what they’re using, because all agents routinely do that now. I suspect they’re fibbing, or they tested things out in 2024 and never updated their opinion.
I create custom embedded devices with displays and I’ve found it very useful for laying things out. Like asking it to take wind speed and direction updates every second and build a Wind Rose out of them, with colored sections in each petal denoting the speed… It makes mistakes, but then you just go back and iterate on those mistakes. I’m able to do so much more, so much faster.
I had the same experience. I asked a local LLM about using some Qt Wayland stuff for keyboard input; the only documentation was the official one (which wasn’t a lot for a noob), there were no examples of it being used online, and all my attempts at making it work were failing. It hallucinated some functions that didn’t exist, even when I let it do web search (NOT via my browser). This was a few years ago.
This was a few years ago.
That’s 50 years in LLM terms. You might as well have been banging two rocks together.
Yeah, now we’re in the iron age!
…Where we get to bang two ingots together
I expect it’s because it wasn’t a user, just a random passer-by throwing stones on their own personal crusade. The project only has two major contributors, who are now being harassed in the issues for the choices they make about how to run their project.
Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.
I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.
It’s mostly not a thing developers do. It’s a thing the tools themselves do when asked to make a commit.
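For context, the “co-authorship” being argued about is just a trailer in the commit message. Roughly what Claude Code appends by default looks like this (the subject line is invented, and the exact wording and address vary by version):

```
Fix cache invalidation in the download manager

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
```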
Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.
I understand the hatred towards AI, but people gotta understand that there’s a difference between coding with AI and vibecoding. They are DIFFERENT THINGS! AI is useful; what is not is vibecoding, or shaming a developer with 30 years of real-world, non-AI experience for using it for once. Using AI is OK if you do it critically and with common sense.
If it’s making commits for you you’re vibe coding.
I use it at work, I use it for troubleshooting and if I get it to generate anything for me, I stage them and review them before committing myself
No, vibe coding explicitly requires NEVER looking at the actual code. I can give claude a ticket, it creates a plan. I review that plan, maybe change some things. Then claude does the thing. I review the code, then tell claude to fix X. Then I test, then I tell claude to create a commit.
There we have claude creating a commit without any of it being vibe coded
That is still vibe coding.
Jokes on you, I’ve used it to untangle messy git problems (with a backup of course).
You can do that with 99.9% less damage to the environment and the working class with git rebase -f, or even the old tried and true method of rm -rf && git pull ….
Not in my case I’m afraid.
Oh you have to deal with the actually gnarly parts of git… sending my condolences.
It’s OK, I did it to myself. Claude fixed it though. :) I think it’d still be broken otherwise lmao
Pass
Please, go ahead and remove everything “AI” in your life. No social media. No GPS. No assist when driving or being driven. No streaming of any kind. No weather apps. Ask your boss to remove everything related to forecasting in his company. Ask your doctor not to use any tool to help with his diagnosis if you get a scan for cancer.
Let’s see how many of those you can “pass”. Or let’s see if it helps you develop a critical mind about which tool to use for which job, and how to use it.
I’m already full Linux at work. Location on my mobile is always OFF unless I need it on rare occasions. I don’t stream. I self host.
Say your last sentence into a mirror today.
Bro, you are on fucking Lemmy. We are all like you. You are not special. You never ever use GPS to locate yourself, right? You never go from a to b. You never go in a shop to buy food. You never go to the doctor. You never buy anything online. You never watch YouTube. Sure.
Who hurt you
Oh, so that’s your argument? What a fucking kid. It’s easy to have an opinion. It’s harder to know why you hold it and not just be a fucking parrot because you’re so edgy.
Are you asking people to be rational? What kind of monster are you
You are correct, but people in general are pretty bad at subtlety and grey areas. Just look at the current state of political discourse in the US. Probably half the people that support the likes of Trump do so because they like black/white binary choices and can’t handle shades of grey in their life emotionally.
100% american public debate lol
I have news for you: it’s the same thing. There is no difference besides maybe the prompt; the same AI is writing the code. And I do not believe a coder is going over every single line of code.
You can criticise them, but ultimately they are an unpaid developer making their work freely available for the benefit of us all. At least don’t harass the developer.
You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.
It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.
I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as github, reddit, lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses and often quits as a result.
The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.
I see your point. I might also have responded poorly to that, on some level at least.
Yeah, same. I’d like to think I’d answer “I’ll use AI; if you don’t like it you can fork the project, and I wish you good luck. Go share your opinion on AI in an appropriate place.” But realistically there’s a high chance it catches me on a bad day and I get stupid.
… You’re right. I definitely wouldn’t be above such a response.
The problem is, a lot of people here - myself included - were/are also being impulsive about their responses to this issue, at least partially due to all the shitty stuff caused by GenAI.
There might be some toxic people too, I wouldn’t be surprised - but this can happen without them, too.
The thing is, toxic people thrive in mob situations and are often found leading or even manufacturing them. I tend to be wary around this kind of setup, as it is easy to get caught up in and hard to get out of.
The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.
No, it was literally an important question to have answered. And booooy did the dev answer.
Is it appropriate to ask a stranger a question by first calling their work “slop”? Is that how you communicate with people? How is that working out irl?
Y’all are so immersed in bully culture that this seems normal to you smh
Wow, so asking whether something is a thing, using the name of the thing being asked about, is “bully culture” now? This is a whole new low of argument from the pro-AI side.
So yes, you think this is normal human behaviour. Good luck with that shit, I hope the world treats you with the same energy.
Trolling? They gave a pretty good answer explaining their reasoning.
I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.
Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.
They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.
It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better, AI tools still aren’t perfect and so you still have to do the legwork. Or at least let your community.
Also, you should let your community make ethics decisions about whether to support you.
Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.
Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.
In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.
Oh, it’s more than subconscious, as you can see in this thread.
The Lutris developer makes a perfectly sane and nuanced response to a reactionary “is lutris slop now” comment, and gets shit on for it, because everybody has to fight in black and white terms. There are no grey opinions, only battle lines to be drawn, to these people.
What? Are you all going to shit on your lord and savior Linus himself for also saying he uses LLMs? Oh, what, you didn’t know?!?
The response is only nuanced until the “good luck” sentence. If he swallowed that it would be an almost perfect response. But that sentence is a quite big “fuck you”.
A little personal flourish doesn’t invalidate the rest IMO. Humans get aggravated and humans are aggravating.
Yes, and I didn’t say that. I even argued in favor of his response throughout this whole post (getting a shit ton of downvotes all along). But I think that doesn’t invalidate my point either: without this one sentence, his whole chain of arguments would have been pretty good and reasonable. It was just unnecessary to then add this snarky remark. It’s understandable if he’s pissed, but just because you are pissed when you say something doesn’t make what you said a clever move.
I get it. You can’t get past the “Ai iS slOp” top-level comments anymore. I get that kind of ending, because I would add it… but then I also don’t mind collecting downvotes, so ymmv I guess.
It’s not as much of a “fuck you” as much as “I’m tired of this same fucking response, when all I’m trying to do is get some work done, which I do for fucking free, by the way”.
I agree with you; with the current state of things in the world, it’s hard to keep up and easy to complain. I’d say instead of asking the guy not to use AI, ask him what help he needs. He’s clearly stating that he’s in burnout.
I don’t have the time or skills to help, so I wouldn’t go complaining. They are on Liberapay if you want to support the project, btw. Combined with Patreon, they sit at less than $700 a week. That’s like half a dev before tax.
?
Yes, that’s Liberapay. You may have noticed that I mentioned Patreon.
You might as well donate to Anthropic.
They were at least. Now they’re making Claude’s code freely available.
AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned for use by the EPA) which significantly increase health risk to residents that live nearby (cancer and asthma rates already significantly increased). Then you have the ridiculous effects it is having on computer hardware markets, energy and water infrastructure and prices.
Then after all of that, the AI themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people that use them regularly are losing their own skills.
I can’t figure out why people would choose to use them. I can’t figure out why programming is the one place where people that might have otherwise been considered experts in the field are excited to use them. Writers, artists, lawyers, doctors, basically every other professional field that AI companies have suggested these would be good for, they get trashed by experts in the fields for making garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?
Yeah, this is actually one of the good things a technology like this can do.
He’s dead right. In terms of slop, if it’s someone with training and experience using a tool, it doesn’t matter whether that tool is vim or Claude. It ain’t slop if it’s built right.
It ain’t slop if it’s built right.
Yeah, but the problem is, is it? They absolutely insist that we use AI at work, which is not only an insane concept in and of itself, but the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?
He says it helps him get work done he wouldn’t otherwise do, but how’s that possible? How is it possible that he is giving every line of code the same scrutiny he would if he wrote it himself, if he himself admits that he would never have got around to writing that code had the AI not done it? The math ain’t matching on this one.
the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?
When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense? There’s a reason why good development shops live on the backs of their code reviews and review practices.
The math ain’t matching on this one.
The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.
There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.
And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains on how to code for that project, and the more experience you gain in how to communicate your ideas to an entity that can understand them. You wouldn’t believe how many people can’t rubberduck and explain concepts properly to people, much less to LLMs.
LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.
The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.
But the problem never was typing in the actual code. The majority of coding is understanding the problem you’re trying to solve and figuring out a good solution. If you let the AI do the thinking for you, then you’re building AI slop. You can’t review your way out of it because a proper review still requires that level of understanding the problem. If you just let the AI do the typing for you, there’s very little to be gained there as the time spent typing is negligible.
AI may be good at building simple, boilerplate-level code. But that’s what we have junior developers for: junior developers we need because they grow into mid-level and senior developers.
If you let the AI do the thinking for you, then you’re building AI slop.
No, for major projects, you start out with a plan. I may spend upwards of 2-3 hours just drafting a plan with the LLM, figuring out options, asking questions when it’s an area I don’t have top-familiarity with, crafting what the modules are going to look like. It’s not slop when you’re planning out what to do and what your end result is supposed to be.
People who talk this way have zero experience with actually using LLMs, especially coding models.
Oh so I didn’t vibe code a go program that I have no understanding of the language cause I knew what I wanted the program to do in the end. Got you I am now a go developer. I didn’t just ask the ai to do something I new which library I wanted it to use and new what I wanted it to interface with and new exactly what I wanted it to do.
I didn’t just ask the ai to do something I new which library I wanted it to use and new what I wanted it to interface with and new exactly what I wanted it to do.
I have no understanding of the language
No shit… you don’t even have an understanding of the English language. No wonder the LLM didn’t understand you.
This really depends on the project. For example, if you’re creating a CRUD web app for managing some kind of data, the main tough decisions involve system and data architecture. After that, most other work is straightforward menial work. It doesn’t take a genius to validate a gajillion text fields for a specific min and max length, map them to the correct field in the API, validate on the server again, and write them to the correct database field.
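As a sketch of that menial layer (field names and limits invented), the whole job is often just one rule table applied on both sides of the wire:

```python
# Hypothetical length rules, shared by client- and server-side validation.
LIMITS = {"username": (3, 32), "display_name": (1, 64), "bio": (0, 500)}

def validate(payload: dict) -> dict:
    """Return a field -> error-message dict; empty means the payload passed."""
    errors = {}
    for field, (lo, hi) in LIMITS.items():
        value = payload.get(field, "")
        if not (lo <= len(value) <= hi):
            errors[field] = f"must be {lo}-{hi} characters"
    return errors

print(validate({"username": "ab", "display_name": "Alice", "bio": ""}))
# {'username': 'must be 3-32 characters'}
```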
I agree that AI might screw companies over in the long run, when there’s no more juniors that can become seniors. That doesn’t apply to this case at all.
Well, I’m not a code monkey, between dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don’t really have to pore over every single line. The only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different than reviewing a human’s code that you didn’t write.
Look at it like this: we automated assembly of machines a long time ago. It had flaws early on that required intense supervision. The only difference here on a practical level is how the damn things learned in the first place. Automating code generation is way more similar to that than to LLMs that generate text or images, which aren’t logical by nature.
If the code used to train the models was good, what it outputs will be no worse in scale than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.
And being real, Lutris has been very smooth across the board while using the generated code so far. So if he gets lazy, it could go downhill; but that could happen if he got lazy with his own code too.
Another concept that I am more familiar with, that does relate. Writing fiction can take months. Editing fiction usually takes days, and you can still miss stuff (my first book has typos and errors to this day because of the aforementioned dyslexia and me not having a copy editor).
My first project back in the eighties, in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.
Maybe I’m too far behind the various languages, but I really can’t see it being a massively harder proposition to scan and edit the output of an LLM.
You do have to consider that this is an open-source developer creating something free in his free time. The app is also not life-or-death. Meaning his quality standards are UNDERSTANDABLY not as high as if he were working for money on a bank’s money system.
In the end, all of the complainers are welcome to do the work themselves. That way, he won’t have to use AI at all.
slop is slop.
microslop
slopware
slopity slop slop.
And talking in absolutes without looking for nuance is not mature nor does it use any form of critical thinking.
I’m sorry. You’re absolutely right. I shouldn’t have said that.
It is awesome that you left the previous comment in place. Mad props!
Lmao I see what you did there
Somehow hiding the code feels worse than using the code. This whole thing is yuck.
Well, when you have a massive problem of harassment, death threats, and fucking retarded shit stains screaming at every single dev who is even theorized to use AI, regardless of whether it’s true or not…
I blame fucking no one for hiding the fact.
This is on the users not the dev. The users are fucking animals and created this very problem.
Blaming the wrong people and attacking them is the yuck.
Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.
Then just quit; it isn’t worth it. I know AI has uses and is useful.
Yeah, management wants us to use AI at $DAYJOB and one of the strategies we’ve considered for lessening its negative impact on productivity, is to always put generated code into an entirely separate commit.
Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding this decision) or by the intern that knows none of the project context.
We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
Honestly, unfortunately, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.
AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.
Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised Lutris’ maintainer went off like they did; the issue was not made in good faith.
Yes, both threads are led by two accounts with probably less than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.
In a world where you could contribute your time to make some things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of people. It’s dangerous to catch their attention, as once they have you they’ll coordinate over reddit, lemmy, github, discord to ruin your reputation. The reputation of some guy who never ever did them any harm apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.
I’d sooner have a drink with a salesman from OpenAI than with one of them.
Just, what kind of pleasure can one derive from harming these projects? It’s so frigging weird, man.
Putting people down is the easiest way to stand above them. 😒
If he’d just forgone that last paragraph…
Open source stuff is awesome and I really like people improving Linux in their spare time
But to do it this way is basically saying “fuck you” to the community, which is fucked up.
He could have talked about how AI helps him, or how he uses it for templates or whatever. And damn, even if I didn’t agree with those points either, that’s a lot better than being like “alright, good luck finding it now then, bitch”.
I wouldn’t mess with anything this guy does anymore after this.
Are you talking about his way of communicating or about his AI use? I think it could have been said in a more level-headed way, but I mostly agree with what he’s said. I also see no issue with the “good luck finding it then” part that seems to sound malicious to you. To me this means “if you can’t find a difference in quality, your whole complaint is invalid, because there basically is no difference in quality”. Yes, it’s still AI and should not be viewed as more than a knowledgeable intern, yada yada, but I hope the point comes across…
If there’s no difference in quality why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for itself?
He could have used that opportunity to take a stand in his own way: “this is what I am doing, and if you don’t like it feel free to make a fork, but I think this is blown out of proportion because: (reasons he could list his opinions on)”.
But being like “good luck finding it now” is 100% malicious in this context. Or if malicious is too strong a word for this, it’s definitely not user-friendly at all.
And certainly not very “open”.
I don’t see it as obfuscation if there is no underlying difference. Why treat working code differently depending on the source, if what matters is that it works (which it does, by definition)? Of course there has to be more quality control if AI is able to produce more code, but I don’t think that’s the point here, right? Why highlight the different sources of the code if, as you said, the code can speak for itself? What’s the difference to you if you can’t tell them apart?
The difference is that AI is a known issue creator (that huntarrr app comes to mind) with many projects and AI usage is supposed to be disclosed transparently for compliance with copyrights and licensing.
But even despite all that, it’s kind of a shitty way to go about it the way he did, in my opinion.
If there’s no difference in quality why obfuscate it? Why hide something that you think is a valuable tool if your code can speak for itself?
The timeline was that he started adding attribution indicating the use of AI.
Then the anti-AI drones started bombarding the Github, Discord and forums with harassment. His recent statements and removal of attribution are entirely addressed at and because of the anti-AI people harassing the project staff.
He’s not removing it and saying ‘fuck you’ to the users. He’s tired of being harassed by third parties who are not involved with the project in any way and so he removed the source of the harassment.
In my opinion, he should’ve left it as a co author. I think if you as a user have an ethical issue with Claude, that’s your choice and you can make the decision not to use lutris. I mostly agree with what he says until that part about removing Claude so “good luck finding it”.
It’s not about finding a difference for people (usually), it’s about how that model was trained on the work of others, without consent, for free, to then sell. He made his points about how much it helps, that it’s better than using Meta, Google, OpenAI, or Copilot and I think that’s probably true. But he made that case, so why then hide what Claude has done?
In gaming, Valve requires you to list if you have used AI in the creation of your game and you describe in what way. It’s not because the game will 100% of the time be absolute slop (right now it usually is), it’s so that the potential customer can be informed and choose to or not to support the use of AI in those products.
As far as I’m reading, most people who reviewed the actual code think it’s fine. So, again, I don’t see the point in hiding it other than being somewhat petty.
Right, fair point with the training data.
I don’t see the point in hiding it other than being somewhat petty.
The point in hiding it was that it was being used, without harassment or complaint, right up until he added attribution which resulted in an avalanche of complaints which require resources to deal with. Discord, the forums and Github pull requests now require much more moderation labor, which takes away from the project.
People had no complaints about the code quality until he started adding AI attribution. So he removed the attribution.
Like he said, if people can’t tell the difference until he started marking the code AI assisted… then they don’t actually have an argument and are simply bringing anti-AI politics into the project.
Think of it like a jeweller suddenly announcing they were going to start mixing blood diamonds in with their usual diamonds: “good luck finding them”.
Functionally, blood diamonds aren’t different.
Leaving aside that you might not want blood diamonds, are you really going to trust someone who essentially says “Fuck you, I’m going to hide them because you’re complaining”?
If you don’t know what blood diamonds are, it’s easily searchable.
I’ll go on record as saying the aesthetic diamond industry is inflationist monopolist bullshit, but that doesn’t alter the analogy.
Secondly, it seems you don’t really understand why LLM-generated code can be problematic. I’m not going to go into it fully here, but here’s a relevant outline:
LLM generated code can (and usually does) look fine, but still not do what it’s supposed to do.
This becomes more of an issue the larger the codebase.
The amount of effort needed to find this reasonable looking, but flawed, code is significantly higher than just reading a new dev’s version.
Hiding where this code is makes it even harder to find.
Hiding the parts where you really should want additional scrutiny is stupid and self-defeating.
Thanks, I think your first point is a really valid one. AI technology is far from clean, especially in a political scope.
To your second point: I see that, but on the other hand, it gives the impression that human code is free of such errors. I would not put human code on an (implied) pedestal (especially not mine), but maybe I’m missing your point. I think being suspicious about AI code is good, but the same goes for human code. To me it sounds like nobody should ever trust AI code because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst. At some point there is no difference anymore between “it looks fine” and “it is fine”.
Let’s assume we’re skipping the ethical and moral concerns about LLM usage and just discuss the technical.
it gives the impression that human code is free of such errors
Nobody who knows anything about coding is claiming human code is error free, that’s why code reviews, testing and all the other aspects of the software development lifecycle exist.
To me it sounds like nobody should ever trust AI code
Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.
because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst
This is a known thing, paranoia doesn’t really apply here, only subjectively appropriate levels of caution.
Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.
Whether or not these problems can be overcome (or mitigated) remains to be seen, but at the moment it still requires additional effort around the LLM parts, which is why hiding them is counterproductive.
At some point there is no difference anymore between “it looks fine” and “it is fine”.
This is important because it’s true, but it’s only true if you can verify it.
This whole issue should theoretically be negated by comprehensive acceptance criteria and testing but if that were the case we’d never have any bugs in human code either.
Personally I think the “uncanny valley code” issue is an inherent part of the way LLMs work and there is no “solution” to it; the only option is to mitigate as best we can.
I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.
Thanks for taking the time to reply.
Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.
Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?…
I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.
I don’t think I’m getting your point here. Do you mean by that that the code basically lacks focus on an end goal? Or are you talking about the fuzziness and randomization of the output?
Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verifiably able programmers, sure, but in general?…
Both.
The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.
One reason is that it isn’t coding for logical correctness; it’s coding for linguistic passability.
Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix, so problems slip through.
I don’t think I’m getting your point here. Do you mean by that that the code basically lacks focus on an end goal? Or are you talking about the fuzziness and randomization of the output?
The latter, if you give it the exact same input in the exact same conditions, it’s not guaranteed to give you the same output.
The fact that it’s sometimes close to the same actually makes it worse, because then you can’t tell at a glance what has changed.
It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in logical progression as well as language.
Meaning you need to track these differences across the whole contextual area which, if you are doing end to end generation, is the whole codebase.
As I said, there are mitigations, but they aren’t fixes.
Oops. Guess I’m uninstalling Lutris.
Personally, I have blocked Claude on GitHub, which helpfully puts a huge banner on any project it has infected.
Then unless I have absolutely no choice but using it, I get rid of it.
To be honest, I don’t give a shit if a dev uses AI or not, as long as the code does what it is supposed to. In my personal experience, AI, while still not anywhere near the capabilities of a decent dev, can sometimes find and fix errors that I would have missed.
When we write code we use a compiler to translate it into other code that the computer can understand. Now we tell AI to write code that is then compiled into other code that the computer can understand.
It seems very similar at the end of the day. The problem is it makes the process easier. That’s what everyone is so upset about. And that’s only an issue because we don’t feel special anymore. It sucks but I’m sure it will pass. Even if it takes a generation
I must disagree with you here. Telling the compiler what to do is not like prompting an LLM. I see writing code as a form of art, and a big part of that is understanding the logic behind the program and the creative process. Imagine it like painting a picture. The artist/dev will undergo all the stages of drawing/coding; the vision will change in the process, and the outcome might be different from what was originally anticipated.
This pipeline of creating usually gives the project a better result. One could say it gives the project more soul.
With AI you are no longer the artist; you are the manager requesting the result, and since AI does not undergo this process of creativity, the result is a soulless husk. At best it’s only what you asked for, but nothing more.
If people were complaining about AI because of its ease of use, the same people would be complaining about Python’s approach of human-speech-like code. (Not saying that there are no people who do so.)
So with this logic are you also not an artist if you use tools like Photoshop? Do you need to write with pen and paper?
Is writing code in any language other than assembly also cheating?
I don’t know why this reply is being downvoted
If I had to guess, it’s probably because most gamers aren’t programmers.
No, of course not. Did you even finish reading my comment? I thought I made it clear that the ease of use is not the issue. The lack of creativity is. Using Photoshop still requires you to think about what you want and how to get there. AI just gives you the output. There is no creativity involved in prompting.
When the first drawing tablets came out, people loved them. Almost no one was under the impression that it was “cheating”. Even with the use of AI you can still make creative projects, but the creativity comes from you. Vibecoding or using image-gen does not involve creative thought.
EDIT: Imagine playing a game made by someone who is not passionate about their work. That’s what it feels like to play an AI-made game.
There’s a difference between using AI to help you code and pure vibe coding. The latter is how you end up with slop, but the former can absolutely speed up skilled developers.
Same is true across the board with AI use. It can easily be a force multiplier for people as long as you don’t turn off your brain and slop away.
Vibecoding is idea-driven implementation. You have an idea; you are creative in your ideas and not in the implementation.
“Tell me you never wrote code before without telling me you never wrote code before”-ass answer.
It’s similar, but it’s not the same thing.
Anyone can have an AI “write code”, but ultimately, you’re still responsible for the output of the AI and ensuring that the end result is good. If you are a competent developer, you know things like testing, storage, security and safety (especially when dealing with sensitive data like user data), backups, monitoring, etc along with understanding each line of code. AI will never be perfect because humans aren’t perfect either, AI requires code review just like humans require code review. If you aren’t a programmer, you won’t be able to review the code AI writes, and mistakes will be missed, just like not reviewing human-written code because humans make mistakes too. I don’t see that ever changing because no software is perfect, there will always be bugs no matter what (once the software is complex/sophisticated enough).
AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.
That’s my thoughts on AI and especially AI coding. That ended up being much longer than I expected and there’s more to it but you get the idea.
I never said anything about not reviewing the code. You still need to review it and test it and all that. But using a tool to generate the code isn’t the end of the world. It’s just the next iteration of how we tell computers what to do. Saying no AI code at all seems like a recipe for failure.
Or at least create boilerplate, test cases, etc.
Using AI for tests and boilerplate was the state of AI three months ago. Now it genuinely one-shots complex implementations.
I know, as long as you don’t want scalability, maintainability, reliability or security.
I use AI to look at my git diffs before I push them up. I use a local LLM and specifically instruct it to look for typos, left over debug prints, or stupid logic.
It’s caught quite a few stupid things that I’m apparently blind to and my coworker appreciates it.
That’s not to say I’d sit back and let it write whole features, pushing it right to master after a short skim… Like someone else I know has started doing. But it can absolutely have a useful purpose.
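A minimal sketch of that kind of pre-push check, assuming a local Ollama server on its default port (the model name and prompt are illustrative, not a recommendation):

```python
import json
import subprocess
import urllib.request

# Grab the staged diff: exactly what is about to be committed.
diff = subprocess.run(
    ["git", "diff", "--staged"], capture_output=True, text=True, check=True
).stdout

prompt = (
    "Review this diff ONLY for typos, leftover debug prints, "
    "and obviously wrong logic. Be terse.\n\n" + diff
)

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default REST endpoint
    data=json.dumps(
        {"model": "qwen2.5-coder", "prompt": prompt, "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```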
If he’s using like an IDE and not vibe coding then I don’t have much issue with this. His comment indicates that he has a brain and uses it. So many people just turn off their brain when they use AI and couldn’t even write this comment I just wrote without asking AI for assistance.
Yeah, that’s my biggest worry. I always have to hold colleagues to the basics of programming standards as soon as they start using AI for a task, since it is easier to generate a second implementation of something we already have in the codebase, rather than extending the existing implementation.
But that was pretty much always true. We still did not slap another implementation onto the side, because it’s horrible for maintenance, as you now need to always adjust two (or more) implementations when requirements change.
And it’s horrible for debugging problems, because parts of the codebase will then behave subtly differently from other parts. This also means usability is worse, as users expect consistency. And the worst part is that they don’t even have an answer to those concerns. They know that it’s going to bite us in the ass in the near future. They’re on a sugar high, because adding features is quick, while they look away from the codebase getting incredibly fat just as quickly.
And when it comes to actually maintaining that generated code, they’ll be the hardest to motivate, because that isn’t as fun as just slapping a feature onto the side, nor do they feel responsible for the code, because they don’t know any better how it actually works. Nevermind that they’re also less sharp in general, because they’ve outsourced thinking.
Hell, most people turn off their brains when the word gets mentioned at all. There’s plenty of basic shit an AI can do exactly as well as a human. But people hear AI and instantly become the equivalent of a shit-eating insect.
As long as you’re educated and experienced enough to know the limitations of your tools and use them accurately and correctly, AI is literally a non-factor and about as likely to make an error as the dev themselves.
The problem with AI slop code comes from executives in high up positions forcing the use of it beyond the scope it can handle and in use cases it’s not fit for.
Lutris doesn’t have that problem.
So unless the guy suddenly goes full stupid and starts letting AI write everything, the quality is not going to change. If anything it’s likely to improve as he offloads tedious small things to his more efficient tools.
The problem is I’ve seen people who supposedly have a brain start to use AI, and over time they become increasingly confident in the AI’s abilities. Then they stop bothering to review the code.
Then they stop bothering to review the code.
This happens with human code reviews all the time.
“I don’t really understand this code, but APPROVE!”
“You need this thing merged today? APPROVE!”
“This code is too long, and it’s almost my lunch break. APPROVE!”
Over and over and over again. The worst way to insult me is to take code I spent days working on and approve it five minutes after I submitted it to you.
20 line commit: 5 issues and suggestions.
500 line commit: “looks good to me!”
Do I feel bad about 10 comments on my review, all about basic stuff? Yes. Do I prefer that over an idiot managing to slip error suppression past review? Yes. In the end I’m happy that someone looked deep enough to find my small stuff, so it’s not going to master and living on for decades.
That is the problem. They become dependent on it, and it is human nature to be lazy. So eventually the “safeguards” will come off.
Just wait, in a couple of months he’ll have a sentient teenage-girl AI.
@gruk iz dis tru?