I’m not taking all the credit but I do hope those people who didn’t believe me in the past could rightfully take this comment, print it, pull down their pants and shove it up their ass.
It’s time to hold journalism to a higher standard, and this idea that “well they do alright” and “it was only once” is bullshit sliding into madness.
Just the facts, folks.
The problem with your attitude towards this is that these companies are forcing “AI” down everyone’s throat. It’s a requirement now to churn out more bullshit than humanly possible.
This person was simply fired because they didn’t catch the false information, and not because they used the tools forced upon them.
To be fair to Ars Technica, that doesn’t sound like the case to me.
The “journalist” in question seems to be suggesting that this was their own bad judgment to use AI to “find relevant quotes” from the source material.
Having said that, there’s also a senior editor on the byline who hasn’t been held accountable for clearly failing to do their job, which, as I understand it, is to read, edit, and verify the contents of the article. So in a way Ars seems to have a problem with quality whether or not the use of AI was mandated.
Ars is owned by Conde Nast, which has multiple whistleblowers saying AI is being forced on them. Think that’s kind of relevant.
Is there any evidence this is happening at Ars Technica? They’re pretty transparent about their methods, and obviously tech-savvy. Just because it happened at Teen Vogue doesn’t mean it’s happening at Ars. Conde Nast publications seem to be run pretty independently. Take The New Yorker: their content remains amazing and seems fully independent.
Most companies force AI on their staff, either directly or indirectly (“you need to double your output, AI can help…” kind of thing).
It’s relevant in a situation where the author has not accepted responsibility.
Sifting through information to find out what’s true and what’s not, before presenting it to the public, is a pretty crucial task and ability for an actual journalist though. It is probably one of the most important parts of their job to verify the correctness of their sources and what they write regardless of whether or not they use AI tools.
Then maybe they shouldn’t be using these tools in the first place. Other Conde Nast employees have already been blowing the whistle about this, which is funny because they sued all the AI companies for stealing content.
Whether there is a news article about it or not, these shitty tools are being shoved down everyone’s throats. From developers, to authors.
Then maybe they shouldn’t be using these tools in the first place
I absolutely agree, they should not write articles with LLMs. I’m just saying they’re not absolved of basic journalistic responsibility because they’re instructed to use LLM tools.
You’re absolutely correct. But the problem is bigger than the rogue journalist. Separation of duties is a well known requirement for robust, reliable processes immune to single points of failure (whether malicious or, as I suspect in this case, merely grossly negligent and irresponsible). It is necessary but not sufficient to hold just the journalist who used AI responsible for the publication of false statements.
The problem here is you are both characterizing Ars as you would other companies that have these AI mandates. Ars is the opposite, they have a mandate NOT to use AI.
While I agree a separation of responsibilities is important, they had two coauthors for exactly that reason. One trusted the other for the references, not knowing that they used AI.
Either way, the initial comment is certainly not “absolutely correct” when it comes to Ars.
Absolutely not. Ars has a no AI policy, it’s the exact opposite. Guessing you are a nice little bot.
A fucking moron who runs around calling everything a bot when you disagree with whatever the topic is.
It’s the new CyberTruck of online insecurity.
Hope that’s “good” enough for you.
and “it was only once” is bullshit
They checked and then fired the author. I don’t see how this is “it was only once” implying nothing changed and it will happen again. Isn’t firing the author “holding journalism to a higher standard” already, which you ask for?
Maybe they should do more than just fire a person who was caught using AI. Maybe they should establish a process of independent fact checking before publication, regardless of whether AI was known or intended to be used to produce the article. It is a problem that AI was used in a way that introduced factual errors. It’s fair that the person responsible for this was fired. But all processes need quality control. Why hasn’t the person who failed to wrap quality control processes around the author been fired?
in what world would independent fact checking down to the level of individual quotes be feasible for an online magazine? you can’t be serious.
That’s part of the cost of AI that the AI companies leave to their customers. There is a tradeoff, and we know from a long history of for-profit corporate behaviour that they will generally prefer lower short-term cost, despite consequent risk and harm. But if the companies that sell AI services don’t take care to ensure the outputs are true, and the companies that use AI don’t take care either, then that leaves the ultimate customer/consumer to fact-check everything. That, or simply be oblivious, or stop trusting anything. The problem is made worse by the fact that most companies won’t disclose their use of AI, because of the adverse impact on their reputation, unless they are compelled to do so. So far, I don’t see any legislation to compel disclosure.
“futurism has confirmed”. Later in the article: “reached out to three parties, no replies and no comment”.
Huh? So how did they confirm?
Obviously the use of a LLM was a terrible decision, but I think in this context we can also blame some country’s lack of sick pay.
AI - damned if you do and damned if you don’t. And it’s not just journalism affected.
In this case it was very much NOT “damned if you do, damned if you don’t” - it’s just don’t.
As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully. There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.
The fact that this was a story on AI misuse in the first place only adds insult to injury.
There’s no reason at all the “journalist” in question here should have had an AI generated anything for his shitty article.
Except that there is a requirement in Conde Nast to use AI.
As a journalist it’s your whole fucking job to do the research and report things accurately and truthfully.
That is what the AI is supposed to be for.
They can’t have it both ways - either they demand AI and accept the consequences, or they give sufficient resources to staff to complete their work without it.
And yet, if you don’t, you will be undercut by the grossly subsidized AI and out of a job, either individually if your management leans AI or the whole enterprise if they don’t, replaced by the AI slop factories.
Yeah. But there’s always the risk of being undercut by someone or something cheaper if you’re operating in a workplace with zero standards. After all, you could write a lot of articles if you didn’t give a rat’s ass about the veracity or quality of the information within.
Good newsrooms are supposed to have standards–that’s what makes them good.
If the people at Ars had done their jobs to a high standard, the article in question wouldn’t have been written like that in the first place, let alone edited and published as is. They want to fire the writer in question, and the writer wants to blame being sick, but the fact remains that the publishing of that article reveals a systemic problem with how Ars are operating, and a total lack of editorial standards.
The elite don’t need the masses to be informed; they need them placated, and oblivious or confused about what is happening, so that they support what is contrary to their own interests and idolize the elite. Good newsrooms don’t serve the purposes of those that own them. AI producing slop with embedded propaganda serves them. It has only just begun. Watch young people on TikTok, sopping up the numbing propaganda. It is the future - now controlled by US elites.

Like programmers who know their code, accountants who know their books, and so many other professionals who pride themselves on the quality of their work, journalists who do their jobs to a high standard are being replaced. It will be very good for a few - those who can afford quality, free from slop and misinformation. But that’s not the audience of Ars.
I have yet to see a field where LLMs are a net positive. At best, scammers can dupe people more easily and faster than ever, while in writing, programming, etc., the average productivity gain is typically negligible once you require work of similar quality to what you’d produce without LLMs.
It is useful in some specific fields like protein folding:
https://www.nature.com/articles/s41586-021-03819-2
The problem is that people think it can replace people, which is wrong; it is a tool and should be used as such, not as a replacement.
Those aren’t LLMs.
Oh, you’re right, my mistake. I guess unit testing and debugging are useful; I did use Copilot to find a missing slash. It’s also useful for revising emails and paragraphs, though of course you have to review the output. It should never be used for scientific research or journalism, though. Of course none of this justifies the investments into LLMs; we should focus on more useful things like AlphaFold.
Unit testing with LLMs is just asking an AI to hallucinate requirements.
Tests are what documents the expected behavior, and are therefore the worst candidate for code gen.
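To make that concrete, here’s a minimal sketch (my own illustration with a made-up parse_price function, not anything from this thread). A test generated by reading the implementation tends to assert whatever the code currently does, so a bug gets locked in as “expected behavior”, while a test written from the actual requirement fails and exposes it:

    def parse_price(text):
        # Buggy implementation: silently drops the cents.
        return int(text.strip("$").split(".")[0])

    # What an LLM shown only the code tends to produce: it infers the
    # "spec" from the implementation, so the bug passes as correct.
    def test_parse_price_generated():
        assert parse_price("$3.99") == 3  # hallucinated requirement

    # A test written from the real requirement ("parse dollars and
    # cents") fails immediately and exposes the bug.
    def test_parse_price_from_spec():
        assert parse_price("$3.99") == 3.99

Run both with pytest: the generated test passes and the spec-driven one fails - which is exactly the problem with letting the code define its own expected behavior.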
Or, you know, double-check that the quotes given to you by the experimental AI “quote extractor” tool are accurate?
He is (was) their go-to AI reporter. It’s not like they handed the assignment to an intern and said “go nuts.”
And the article was about AI fabricating an attack on a developer that rejected its PR.
The whole point of using AI is that it’s a search tool and that is the verification.
Otherwise there’s no point in using it.
And you can guarantee Conde Nast demands journalists use AI all the time.
As they should
I would fire them and hope that they are blacklisted from ever working in journalism ever again
I’ve interacted with Benj Edwards on social media for some time. He’s done lots of good work! He’s on (or maybe used to be on) Mastodon and Bluesky. He runs Vintage Computing and Gaming, and has written good articles for several prominent places. I’ve said as much in multiple forums, I feel like I’ve maybe been going on a crusade.
I haven’t seen many others defending him. I’m really torn up over this. They had a weak moment. They were sick (I mean, literally). A few other people, notably Cory Doctorow and Paul Ford, have written LLM-defending pieces. And the AI hype has been deafening.
It’s amazing though, that so soon after he used AI, it immediately hallucinated something job-ending. I knew it was really bad, but I didn’t know it was THAT bad. You get the sense, with so many people talking positively about it, that the hallucinations must be something that happens, what, maybe 5% of the time?
To me, it seems like the kind of mistake that he should be able to apologize for, promise not to do it again, and move on. But we’ve all had our good will taken advantage of for so long by malicious actors, like how Gamergate was used as a wedge to push loathsome politics onto a legion of young males. It feels like we can’t give anyone the benefit of the doubt any more.
I don’t know. I know I’m influenced by all the good work he’s done. I feel like that shouldn’t all be thrown away.
why the fuck wouldn’t a journalist double check the things that the AI is returning? in what universe is this even considered journalism? it’s so crazy to me that I can’t imagine how it even happened. it’s too stupid for my imagination
He was sick and had a weak moment. He didn’t realize that it would just make the quote up.
Cory Doctorow … have written LLM-defending pieces.
citation requested because everything I’ve seen them write is opposed
Took a bit, but found it; it’s not ChatGPT but a small self-hosted AI with an open source model: https://pluralistic.net/2026/02/19/now-we-are-six/
TYVM!
Trying to track it down…
TY. Recently finished his book, Enshittification; it’s spot on. Good read.
Whoa. There are actually consequences? ArsTechnica is actually sorry??
No, the worker was fired and the executive whose job title is making sure that the work submitted is correct was not fired.
The executives will get a bonus this year.
Maybe they heard @latenightlinux@mastodon.social