The worst is in the workplace. When people routinely tell me they looked something up with AI, I now have to assume that I can’t trust what they say any longer, because there is a high chance they are just repeating some AI hallucination. It is really a sad state of affairs.
I am way less hostile to GenAI (as a tech) than most, and even I’ve grown to hate this scenario. I am a subject matter expert on some things, and I’ve still had people waste my time trying to get me to prove their AI hallucinations wrong.
I’ve started seeing large AI generated pull requests in my coding job. Of course I have to review them, and the “author” doesn’t even warn me it’s from an LLM. It’s just allowing bad coders to write bad code faster.
Do you also check if they listen to Joe Rogan? Fox News? Nobody can be trusted. AI isn’t the problem, it’s that it was trained on human data – and humans are an unreliable source of information.
AI also just makes things up. Like how RFKJr’s “Make America Healthy Again” report cites studies that don’t exist and never have, or literally a million other examples. You’re not wrong about Fox News and how corporate- and Russian-backed media distort the truth and push false narratives, and you’re not wrong that AI isn’t the only problem, but it is certainly a problem, and a big one at that.
AI also just makes things up. Like how RFKJr’s “Make America Healthy Again” report cites studies that don’t exist and never have, or literally a million other examples.
SO DO PEOPLE.
Tell me one of the things that AI does, that people themselves don’t also commonly do each and every day?
Real researchers make up studies to cite in their reports? Real lawyers and judges cite fake cases as precedents in legal proceedings? Real doctors base treatment plans on white papers they completely fabricated in their heads? Yeah, I don’t think so, buddy.
But but but . . . !!!
AI!!
I think they’re saying that the kind of people who take LLM generated content as fact are the kind of people who don’t know how to look up information in the first place. Blaming the LLM for it is like blaming a search engine for showing bad results.
Of course LLMs make stuff up, they are machines that make stuff up.
Sort of an aside, but doctors, lawyers, judges and researchers make shit up all the time. A professional designation doesn’t make someone infallible or even smart. People should question everything they read, regardless of the source.
Blaming the LLM for it is like blaming a search engine for showing bad results.
Except we give it the glorifying title “AI”. It’s supposed to be far better than a search engine, otherwise why not stick with a search engine (that uses a tiny fraction of the power)?
I don’t know what point you’re arguing. I didn’t call it AI and even if I did, I don’t know any definition of AI that includes infallibility. I didn’t claim it’s better than a search engine, either. Even if I did, “Better” does not equal “Always correct.”
deleted by creator
To take an older example: there are smaller image-recognition models that were trained on correct data to differentiate between dogs and blueberry muffins, but they obviously still made mistakes on the test data set.
AI does not become perfect if its data is.
Humans do make mistakes, make stuff up, and spread false information. However, they generally make up considerably less than AI currently does (unless told to).
AI does not become perfect if its data is.
It does become more precise the larger the model is though. At least, that was the low-hanging fruit during this boom. I highly doubt you’d get a modern model to fail on a test such as this today.
Just as an example, nobody is typing “Blueberry Muffin” into a Stable Diffusion model and getting a photo of a dog.
Joe Rogan doesn’t tell them false domain knowledge 🤷
LOL riiiiiight.
Ok please show me the Joe Rogan episode where he confidently talks BS about process engineering for wastewater treatment plants 🙄
There’s a monster in the forest, and it speaks with a thousand voices. It will answer any question, and offer insight to any idea. It knows no right or wrong. It knows not truth from lie, but speaks them both the same. It offers its services freely, many find great value. But those who know the forest well will tell you that freely offered does not mean free of cost. For now the monster speaks with a thousand and one voices, and when you see the monster it wears your face.
Not just you. AI is making people dumber. I am frequently correcting the mistakes of my colleagues who use it.
My attitude to all of this is I’ve been told by management to use it so I will. If it makes mistakes it’s not my fault and now I’m free to watch old Stargate episodes. We’re not doing rocket surgery or anything so who cares.
At some point they’ll realise that the AI is not producing decent output and then they’ll shut up about it. Much easier they come to that realisation themselves than me argue with them about it.
Luckily no one is pushing me to use AI in any form at this time.
For folks in your position, I fear that they will first go through a round of layoffs to get rid of the people who are clearly using it “wrong” because Top Management can’t have made a mistake before they pivot and drop it.
Yeah that is a risk, then again if they’re forcing their employees to use AI they’re probably not far off firing everyone anyway so I don’t see that it makes a huge amount of difference for my position.
When I was a kid and first realized I was maybe a genius, it was terrifying: there weren’t always going to just be people smarter than me who could fix it.
Seeing them get dumber is like some horror movie shit.
I don’t fancy myself a genius but the way other people navigate things seems to create a strangely compelling case on its own
My pet peeve: “here’s what ChatGPT said…”
No.
Stop.
If I’d wanted to know what the Large Lying Machine said, I would’ve asked it.
It’s like offering unsolicited advice, but it’s not even your own advice
Hammer time.
“Here’s me telling everyone that I have no critical thinking ability whatsoever.”
Is more like it
People are overworked, underpaid, and struggling to make rent in this economy while juggling 3 jobs or taking care of their kids, or both.
They are at the limits of their mental load, especially women who shoulder it disproportionately in many households. AI is used to drastically reduce that mental load. People suffering from burnout use it for unlicensed therapy. I’m not advocating for it, I’m pointing out why people use it.
Treating AI users like a moral failure and disregarding their circumstances does nothing to discourage the use of AI. All you are doing is reinforcing their alienation from anti-AI sentiment.
First, understand the person behind it. Address the root cause, which is that AI companies are exploiting the vulnerabilities of people with or close to burnout by selling the dream of a lightened workload.
It’s like eating factory-farmed meat. Even if you know what horrors go into making it, you are exhausted from a long day of work and you just need a bite of that chicken to take the edge off, to remain sane after all these years. There is a system at work here, greater than just you and the chicken: the industry as a whole exploiting consumer habits. AI users are no different.
Let’s go a step further and look at why people are in burnout, are overloaded, are working 3 jobs to make ends meet.
It’s because we’re all slaves to capitalism.
Greed for more profit by any means possible has driven society to the point where we can barely afford to survive and corporations still want more. When most Americans are choosing between eating, their kids eating, or paying rent, while enduring the workload of two to three people, yeah they’ll turn to anything that makes life easier. But it shouldn’t be this way and until we’re no longer slaves we’ll continue to make the choices that ease our burden, even if they’re extremely harmful in the long run.
I read it as “eating their kids”. I am an overworked slave.
We shouldn’t accuse people of moral failings. That’s inaccurate and obfuscates the actual systemic issues and incentives at play.
But people using this for unlicensed therapy are in danger. More often than not, LLMs will parrot whatever you give them in the prompt.
People have died from AI usage including unlicensed therapy. This would be like the factory farmed meat eating you.
https://www.yahoo.com/news/articles/woman-dies-suicide-using-ai-172040677.html
Maybe more like factory meat giving you food poisoning.
And what do you think mass adoption of AI is gonna lead to? Now you won’t even have 3 jobs to make rent, because they outsourced yours to someone cheaper using an AI agent. This is gonna permanently alter how our society works, and not for the better.
No, it’s not just you or unsat-and-strange. You’re pro-human.
Trying something new when it first comes out or when you first get access to it is novelty. What we’ve moved to now is mass adoption. And that’s a problem.
These LLMs are automation of mass theft with a good enough regurgitation of the stolen data. This is unethical for the vast majority of business applications. And good enough is insufficient in most cases, like software.
I had a lot of fun playing around with AI when it first came out, and people figured out how to do prompts I can’t seem to replicate. I don’t begrudge people for trying a new thing.
But if we aren’t going to regulate AI or teach people how to avoid AI-induced psychosis, then even in applications where it could be useful, it’s a danger to anyone who uses it. Not to mention how wasteful its water and energy usage is.
The bubble has burst, or rather, is currently in the process of bursting.
My job involves working directly with AI, LLMs, and companies that have leveraged their use. It didn’t work, and I’d say the majority of my clients are now scrambling to recover, or simply to make it out the other end alive. Soon there’s going to be nothing left to regulate.
GPT-5 was a failure. Rumors I’ve been hearing say that Anthropic’s new model will be a failure much like GPT-5. The house of cards is falling as we speak. This won’t be the complete death of AI, but this is just like the dot-com bubble. It was bound to happen. The models have nothing left to eat and they’re getting desperate to find new sources. For a good while they’ve been quite literally eating each other’s feces. They’re now starting on Git repos of all things. Codeberg can tell you all about that from this past week. This is why I’m telling people to consider setting up private Git instances and locking that crap down. If you’re on GitHub, get your shit off there ASAP, because Microsoft is beginning to feast on your repos.
But essentially, the AI is starving. Companies have discovered that vibe coding and leveraging AI to build end to end didn’t work. Nothing produced scales; it’s all full of exploits, or in most cases has zero security measures whatsoever. They all sunk money into something that has yet to pay out. Just go on LinkedIn and see all the tech bros desperately trying to save their own asses right now.
the bubble is bursting.
The folks I know at both OpenAI and Anthropic don’t share your belief.
Also, anecdotally, I’m only seeing more and more push for LLM use at work.
That’s interesting, in all honesty, and I don’t doubt you. All I know is my bank account has been getting bigger over the past few months due to new work from clients looking to fix their AI problems.
I think you’re onto something: a lot of this AI mess is going to have to be fixed by actual engineers. If folks blindly copied from Stack Overflow without any understanding, they were gonna have a bad time, and that seems equivalent to what we’re seeing here.
I think the AI hate is overblown and I tend to treat it more like a search engine than something that actually does my work for me. With how bad Google has gotten, some of these models have been a blessing.
My hope is that the models remain useful, but the bubble of treating them like a competent engineer bursts.
Agreed. I’m with you: it should be treated as a basic tool, not as something used to actually create things, which, again in my current line of work, is what many places have done. It’s a fantastic rubber duck. I use it myself for that purpose, or for tasks I can’t be bothered with, like writing README markdown or commit messages, or setting up flakes and Nix shells and base project structures, so YOU can do the actual work and don’t have to waste time on setup.
The hate can be overblown, but I can see where it’s coming from, purely because many companies have not utilized it as a tool but instead thought of it as a replacement for an individual.
At the risk of sounding like a tangent: LLMs’ survival doesn’t solely depend on consumer/business confidence. In the US, we are living in a fascist dictatorship. Fascism and fascists are inherently irrational. Trump, a fascist, wants to bring back coal despite the market naturally phasing coal out.
The fascists want LLMs because they hate art and all things creative. So the fascists may very well choose to have the federal government invest in LLM companies. Like how they bought 10% of Intel’s stock or how they want to build coal powered freedom cities.
So even if there are no business applications for LLM technology our fascist dictatorship may still try to impose LLM technology on all of us. Purely out of hate for us, art and life itself. edit: looks like I commented this under my comment the first time
deleted by creator
being anti-plastic is making me feel like i’m going insane. “you asked for a coffee to go and i grabbed a disposable cup.” studies have proven its making people dumber. “i threw your leftovers in some cling film.” its made from fossil fuels and leaves trash everywhere we look. “ill grab a bag at the register.” it chokes rivers and beaches and then we act surprised. “ill print a cute label and call it recyclable.” its spreading greenwashed nonsense. little arrows on stuff that still ends up in the landfill. “dont worry, it says compostable.” only at some industrial facility youll never see. “i was unboxing a package” theres no way to verify where any of this ends up. burned, buried, or floating in the ocean. “the brand says advanced recycling.” my work has an entire sustainability team and we still stock pallets of plastic water bottles and shrink wrapped everything. plastic cutlery. plastic wrap. bubble mailers. zip ties. everyone treats it as a novelty. every treats it as a mandatory part of life. am i the only one who sees it? am i paranoid? am i going insane? jesus fucking christ. if i have to hear one more “well at least” “but its convenient” “but you can” im about to lose it. i shouldnt have to jump through hoops to avoid the disposable default. have you no principles? no goddamn spine? am i the weird one here?
#ebb rambles #vent #i think #fuck plastics im so goddamn tired
If plastic was released roughly two years ago you’d have a point.
If you’re saying in 50 years we’ll all be soaking in this bullshit called gen-AI and thinking it’s normal, well - maybe, but that’s going to be some bleak-ass shit.
Also you’ve got plastic in your gonads.
Yeah it was a fun little whataboutism. I thought about doing smartphones instead. Writing that way hurts though. I had to double check for consistency.
On the bright side we have Cyberpunk to give us a tutorial on how to survive the AI dystopia. Have you started picking your implants yet?
you asked for thoughts about your character backstory and i put it into chat gpt for ideas
If I want ideas from ChatGPT, I could just ask it myself. Usually, if I’m reaching out to ask people’s opinions, I want, you know, their opinions. I don’t even care if I hear nothing back from them for ages, I just want their input.
“I just fed your private, unpublished intellectual property into a black box owned by billionaires. You’re welcome.”
No line breaks and capitalization? Can somebody ask AI to format it properly, please?
Every time someone talks up AI, I point out that you need to be a subject matter expert in the topic to trust it, because it frequently produces really, really convincing summaries that are complete and utter bullshit.
And people agree with me and tell me they’ve seen the same. But then they don’t hesitate to turn to AI on subjects they aren’t experts in for “quick answers”. These are not stupid people, either. I just don’t understand.
Hence the feeling of creeping insanity. Yeah.
Uses for this current wave of AI: converting machine language to human language. Converting human language to machine language. Sentiment analysis. Summarizing text.
People have way over invested in one of the least functional parts of what it can do because it’s the part that looks the most “magic” if you don’t know what it’s doing.
The most helpful and least used way of using them is to identify what information the user is looking for and then to point them to resources they can use to find out for themselves, maybe with a description of which resource might be best depending on what part of the question they’re answering.
It’s easy to be wrong when you’re answering a question, and a lot harder when you hand someone a book and say you think the answer is in chapter four.
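The “chapter four” approach above can be sketched in a few lines: instead of generating an answer, the tool scores a small corpus of sources against the question and hands back pointers. Everything here (the corpus, the keyword-overlap scoring, the names) is an invented toy illustration of the idea, not any real product’s behavior.

```python
# Toy sketch: answer a question with pointers to sources rather than
# generated text. Corpus and scoring are illustrative only.

def point_to_sources(question, corpus):
    """Rank sources by crude keyword overlap with the question and
    return references to them, not answers."""
    q_words = set(question.lower().split())
    scored = []
    for ref, text in corpus.items():
        overlap = len(q_words & set(text.lower().split()))
        if overlap:
            scored.append((overlap, ref))
    scored.sort(reverse=True)  # best-matching sources first
    return [ref for _, ref in scored]

# Hypothetical mini-corpus: reference -> rough description of contents.
corpus = {
    "Networking Handbook, ch. 4": "tcp handshake retransmission congestion window",
    "Unix Primer, ch. 2": "shell pipes redirection file descriptors",
    "DB Internals, ch. 7": "b-tree index page split write ahead log",
}

print(point_to_sources("how does tcp congestion control work", corpus))
# → ['Networking Handbook, ch. 4']
```

A real system would use embeddings rather than word overlap, but the point stands: handing back “the answer is probably in chapter four” is far harder to get confidently wrong than generating the answer itself.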
Meanwhile, every company finds out, the week after they lay everyone off, that the billions they poured into their shitty “AI” replacement might as well have been put in bags and set on fire.
Yes, you’re the weird one. Once you realize that 43% of the USA is FUNCTIONALLY ILLITERATE you start realizing why people are so enamored with AI. (since I know some twat is gonna say shit: I’m using the USA here as an example, I’m not being us-centric)
Our artificial intelligence is smarter than 50% of the population (don’t get started on “hallucinations”… do you know how many hallucinations the average person has every day?!) – and is stupider than the top 20% of the population.
The top 20%, wonder if everyone has lost their fucking minds, because to them it looks like it is completely worthless.
It’s more just that the top 20% are naive to the stupidity of the average person.
I have to say, I don’t agree with some of your other points elsewhere here, but this makes a lot of sense.
The Luddites were right. Maybe we can learn a thing or two from them…
I had to download the Facebook app to delete my account. Unfortunately, I think the Luddites are going to be sent to the camps in a few years.
They can try, but Papa Kaczynski lives forever in our hearts.
deleted by creator
The data centers should not be safe for much longer. Especially once they use up the water of their small towns nearby
I feel this
Yeah. But then, being a vertebrate is always lonely and kinda rough.
always lonely
I don’t know, some rodents seem to make it work. Naked mole rats, beavers, prairie dogs… (I wouldn’t include herd animals, though; sure, they’re always surrounded by others, but there’s no sense of community, it’s always everyone for themselves, and screw whoever’s slowest… perfect example of being alone in a multitude)
The way I look at it is that I haven’t heard anything about NFTs in a while. The bubble will burst soon enough when investors realize that it’s not possible to get much better without a significant jump forward in computing technology.
We’re running out of atomic room to make things smaller only slightly more slowly than we’re running out of ways to make smaller things at all, and for a computer to think like a person, as well as or faster than one, we need processing power to keep increasing exponentially per unit of space. Silicon won’t get us there.
This is a good take for a lot of reasons.
In part because NFTs are still used and have some interesting applications, but 90% of the marketing and use cases were companies trying to profit from the hype train.