I just got an email at work starting with: “Certainly!, here is the rephrased text:…”
People abusing AI are not even reading the slop they are sending
I get these kinds of things all the time at work. I’m a writer, and someone once sent me a document to brief me on an article I had to write. One of the topics in the briefing mentioned a concept I’d never heard of (and the article was about a subject I actually know). I Googled the term, checked official sources … nothing, it just didn’t make sense. So I asked the person who wrote the briefing what it meant, and the response was: “I don’t know, I asked ChatGPT to write it for me LOL”.
facepalm is all I can think of…lol
I’m not sure what my emailer started with, but what ChatGPT gave them was almost unintelligible.
Ironically, the author waffles more than most LLMs do.
What does it mean to “waffle”?
Either to take a very long time to get to the point, or to go off on a tangent.
Writing concisely is a lost art, it seems.
I wrote concisely until I started giving fiction writing a try. Suddenly writing concisely was a negative :x (not always, obviously, but a lot of the time I found that I wrote too concisely).
IDK that kinda depends on the writer and their style. Concise is usually a safe bet for easy reading, but doesn’t leave room for a lot of fancy details. When I think verbose vs concise I think about Frank Herbert and Kurt Vonnegut for reference.
Building up imagery in fiction isn’t the opposite of being concise.
It’s not. I just wrote the comment because it was relevant to recent events for me.
I started practicing writing non-fiction recently as a hobby. While writing non-fiction, I noticed that being concise 100% of the time is not good. Sometimes I did want to write concisely, other times I did not. When I read my writing back, I realized how deliberate you had to be about how much or how little detail you gave. It felt like a lot of the rules of English went out the window. 100% grammatical correctness was not necessary if it meant better flow or pacing. Unnecessary details and repetition became tools instead of taboos. The whole experience felt like I was painting with words, and as long as I could give the reader the experience I wanted, nothing else mattered.
It really highlighted the contrast between fiction and non-fiction writing. It was an eye-opening experience.
I’d be careful with this one. Being verbose in non-fiction does not automatically produce good writing. In my opinion, the best writers in the world have an economy of words but are still eloquent and rich in their expression.
Of course being verbose doesn’t mean your writing is good. It’s just that you need to deliberately choose when to be more verbose and when to give no description at all. It’s all about the experience you want to craft. If you write about how mundane a character’s life is, you can write out their day in detail and give your readers the experience of having such a life, that is if that was your goal. It all depends on the experience you want to craft and the story you want to tell.
To put my experience more simply, I did not realize how much of an art writing could be and how few rules there are when you write artistically/creatively.
I feel like that might have been the point. Rather than “using a car to go from A to B” they walked.
Absolutely loathe titles/headlines that state things like this. It’s worse than normal clickbait: not only is it written with intent to trick people, it implies that the writer is a narcissist.
And yeah, he opens by bragging about how long he’s been writing, and it’s mostly masturbatory writing, dialoguing with himself and referencing popular media and other articles instead of making interesting content.
Not to mention that he doesn’t grasp the idea that many don’t use it at all.
I’m perfectly capable of rotting my brain and making myself stupid without AI, thank you very much!
Joke’s on you, I was already stupid to begin with.
The thing is… AI is making me smarter! I use AI as a learning tool. The absolute best thing about AI is the ability to follow up questions with additional questions and get a better understanding of a subject. I use it to ask about technical topics and flesh out a better understanding than I ever got from just a textbook. I have seen some instances of hallucination in the past, but with the current generation of AI I’ve had very good results and consider it an excellent tool for learning.
For reference I’m an engineer with over 25 years of experience and I am considered an expert in my field.
Same, I use it to put me down research paths. I don’t take anything it tells me at face value, but often it will introduce me to ideas in a particular field which I can then independently research by looking them up on Kagi.
Instead of saying “write me some code which will generate a series of caverns in a videogame”, I ask “what are 5 common procedural level generation algorithms? Give me a brief synopsis of each”, then I can take each one of those and look them up.
$100 billion and the electricity consumption of France seems a tad pricey to save a few minutes looking in a book…
I recently read that LLMs are effective for improving learning outcomes. When I read one of the meta studies, however, it seemed that many of the benefits were indirect: LLMs improved accessibility by allowing teachers to quickly tailor lessons to individual students, for example. It also seems that some students ask questions more freely and without embarrassment when chatting with an LLM, which can improve learning for those students - and this aligns with what you mention in your post. I personally have withheld follow-up questions in lectures because I didn’t want to look foolish or reveal my imperfect understanding of the topic, so I can see how an LLM could help me that way.
What the studies did not (yet) examine was whether the speed and ease of learning with LLMs were somehow detrimental to, say, retention. Sure, I can save time studying for an exam/technical interview with an LLM, but will I remember what I learned in 6 months? For some learning tasks, the long struggle is essential to a good understanding and retention (for example, writing your own code implementation of an algorithm vs. reading someone else’s). Will my reliance on AI somehow damage my ability to learn in some circumstances? I think that LLMs might be like powered exoskeletons for the mind - the operator slowly wastes away from lack of exercise.
It seems like a paradox, but learning “more, faster” might be worse in the long run.
I use it as a glorified manual. I’ll ask it about specific error codes and “how do I” requests. One problem I keep running into is I’ll tell it the exact OS version and app version I’m using and it will still give me commands that don’t work with that version. Sometimes I’ll tell it the commands don’t work and restate my parameters and it will loop around to its original response in a logic circle.
At least it doesn’t say “Never mind, I figured out the solution” like people do too often on Stack Exchange.
If it’s a topic that has been heavily discussed on the internet or in literature, LLMs can have good conversations about it. Take it all with a grain of salt because it will regurgitate common bad arguments as well as good ones, but if you challenge it, you can get it to argue against its own previous statements.
It doesn’t handle things that are in flux very well, or things that require very specific consistency. It’s a probabilistic model: it looks at the existing tokens and predicts what the next one is most likely to be. So a question about a specific version of something might get a response specific to that version, or the model might weigh other tokens more heavily than the version, or it might even treat the whole thing like pseudocode, where descriptive language plays a bigger role than what specifically exists.
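The "predict the most likely next token" idea above can be sketched with a toy bigram model. This is a deliberate oversimplification (real LLMs use neural networks over long contexts, and the corpus here is made up), but it shows why output tracks whatever was most common in the training text rather than what is true for your specific version:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus"
corpus = "the cat sat on the mat because the cat was tired".split()

# Count which token follows which: context token -> Counter of next tokens
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Greedily return the most frequent next token, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

# "the" is followed by "cat" twice and "mat" once, so the model always
# says "cat" -- the statistically common continuation wins, regardless
# of what would actually be correct in a given context.
print(predict_next("the"))
```

A real model smooths this over billions of parameters instead of raw counts, but the failure mode the comment describes (a rarely-discussed version losing out to heavily-discussed ones) falls straight out of this frequency-driven prediction.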
Unlike social media?
If you only use the AI as a tool to assist you, but still think and make decisions on your own, then you won’t have this problem.
This is the next step towards Idiocracy. I use AI for things like summarizing Zoom meetings so I don’t need to take notes, and I can’t imagine I’ll stop there in the future. It’s like how I forgot everyone’s telephone numbers once we got cell phones… we used to have to know numbers back then. AI is a big leap in that direction. I’m thinking the long-term effect is all of us getting dumber and shifting more and more “little, unimportant” things to AI until we end up in an Idiocracy scene. Sadly I will be there with everyone else.
An assistant at my job used AI to summarize a meeting she couldn’t attend, and then she posted the results with the AI-produced disclaimer that the summary might be inaccurate and should be checked for errors.
If I read a summary of a meeting I didn’t attend and I have to check it for errors, I’d have to rewatch the meeting to know if it was accurate or not. Literally what the fuck is the point of the summary in that case?
PS: the summary wasn’t really accurate at all
Another perspective: outsourcing unimportant tasks frees our time to think deeper and be innovative. It removes the entry barrier, allowing people who would ordinarily not be able to do things to actually do them.
That’s the claim from like every AI company and wow do I hope that’s what happens. Maybe I’m just a Luddite with AI. I really hope I’m wrong since it’s here to stay.
Actually a really good article with several excellent points not having to do with AI 😊👌🏻 Worth a read
I agree. I almost skipped it because of the title, but the article is nuanced and has some very good reflections on topics other than AI. Every technological advance is a tradeoff. The article mentions cars to get to the grocery store and how there are advantages in walking that we give up when always using a car. Are cars in general a stupid and useless technology? No, but we need to be aware of where the tradeoffs are. And eventually most of these tradeoffs are economic in nature.
By industrializing the production of carpets we might have lost some of our collective ability to produce those hand-made masterpieces of old, but we get to buy ok-looking carpets for cheap.
By reducing and industrializing the production of text content, our mastery of language is declining, but we get to read a lot of not-very-good content for free. This pre-dates AI btw, as can be seen by standardized tests in schools everywhere.
The new thing about GenAI, though, is that it upends the promise that technology would do the grueling, boring work for us and free up time for the creative things that give us joy. I feel the roles have reversed: even when I have to write an email or a piece of code, AI does the creative part and I’m the glorified proofreader and corrector.
I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
Well, creating the slide rule was a form of cognitive offloading, but barely: you still had to know how to use it and what formula to use. Moving to the pocket calculator just changed how we did it; it didn’t really increase how much thinking we offloaded.
But this is something different. With infinite-content algorithms just making the next choice of what we watch, and people now blindly trusting whatever LLMs say, we are offloading not just a complex task like the square root of 55, but “what do I want to watch?” and “how do I know this is true?”
I agree that it’s on a whole other level, and it poses challenging questions as to how we might live healthily with AI, to get it to do what we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out of control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it’s up to us to find the best way to live with these pressures, and engage with this technology on our own terms.
how we might live healthily with AI, to get it to do what we don’t benefit from doing,
Agreed, that is our goal, but one issue I have is AI not paying for its training data. Also, and this is the biggest one: what benefits me is not what benefits the people owning the AI models.
What benefits me is not what benefits the people owning the ai models
Yep, that right there is the problem
that picture is kinky as hell, yo
I was annoyed that it wasn’t over her mouth to implant the egg.
It implants ideas, so it goes through the eyes.
Lol, this is the 10,000th thing that makes me stupid. Get a new scare tactic.
Proof that it’s already too late ☝️
Ain’t skeerd
I mean, obviously, you need higher cognitive functioning for all that
Damn, I thought fight or flight was the most primitive function. Ah well, back to chewing on this tire.
Yeah, you know, just like my cat is scared of distant fireworks but doesn’t give a flying fuck about climate change or rise of fascism in our own country.
Oh so like when someone’s afraid of falling off the edge of the earth?
More like how some people are afraid of needles but aren’t afraid of deadly diseases. Their primitive understanding of reality lets them draw a connection between a prick and pain, but not between an organism invisible to the naked eye and a gruesome death.
Actually, it’s taking me quite a lot of effort and learning to set up the AIs that I run locally, as I don’t trust any of them with my data. If anything, it’s got me interested in learning again.
That’s the kind of effort in thought and learning that the article is calling out as being lost when it comes to reading and writing. You’re taking the time to learn and struggle with the effort; as long as you’re not giving that up once you have the AI running, you’re not losing that.
I have difficulty learning, but using AI has helped me quite a lot. It’s like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.
Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.
It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)
It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.
I agree, I’ve been traumatized by the system. Whatever I’ve learnt that’s been useful to me has happened through the internet, give or take a few good teachers.
I still think it’s a good auxiliary tool. If you understand its constraints, it’s useful.
It’s just really unfortunate that it’s a for profit tool that will be used to try and replace us all.
Yeah, same. I had to learn how to learn in spite of all the old disillusioned creatures that hated their lives almost as much as they hated students.
And yet, I’m afraid learning from chatbots might be even worse. Learning how to learn is so important; I only learned that as an adult.
Stupid in, stupid out. I have had many conversations like: “I have built and understand Ben Eater's 8-bit breadboard computer, based loosely on Malvino's "Digital Computer Electronics" 8-bit computer design, but I struggle to understand pipelines in computer hardware. I am aware that the first rudimentary pipeline in a microprocessor is the 6502 with its dual instruction loading architecture. Let's discuss how pipelines evolved beyond the 6502 and up to the present.”
In reality, the model will be wrong in much of what it says for something so niche, but forming questions based upon what I know already reveals holes outside of my awareness. Often a model is just right enough for me to navigate directly to the information I need or am missing regardless of how correct it is overall.
I get lost sometimes because I have no one to talk to or ask for help or guidance on this type of stuff. I am not even at a point where I can pin down a good question to ask someone or somewhere like here most of the time. I need a person to bounce ideas off of and ask direct questions. If I go look up something like Pipelines in microprocessors in general, I will never find an ideal entry point for where I am at in my understanding. With AI I can create that entry point quickly. I’m not interested in some complex course, and all of the books I have barely touch the subject in question, but I can give a model enough peripheral context to move me up the ladder one rung at a time.
I could hand you all of my old tools to paint cars, then laugh at your results. They are just tools. I could tell you most of what you need to know in 5 minutes, but I can’t give you my thousands of experiences of what to do when things go wrong.
Most people are very bad at understanding how to use AI. It is just an advanced tool. A spray gun or a dual action sander do not make you stupid; spraying paint without a mask does. That is not the fault of the spray gun. It is due to the idiot using it.
AI has a narrow scope that requires a lot of momentum to make it most useful. It requires an agentic framework, function calling, and a database. A basic model interface is about like an early microprocessor that was little more than a novelty on its own at the time. You really needed several microprocessors to make anything useful back in the late 70s and early 80s. In an abstract way, these were like agents.
I remember seeing the asphalt plant controls hardware my dad would bring home with each board containing at least one microprocessor. Each board went into racks that contained dozens of similar boards and variations. It was many dozens of individual microprocessors to run an industrial plant.
Playing with gptel in Emacs, it takes swapping agents with a llama.cpp server to get something useful running offline, but I like it for my bash scripts, learning Emacs, Python, Forth, Arduino, and just general chat if I use Oobabooga Textgen. It has been the catalyst for me to explore the diversity of human thought as it relates to my own. It got me into basic fermentation. I have been learning and exploring a lot about how AI alignment works. I’ve enjoyed creating an entire science-fiction universe exploring what life will be like after the age of discovery is over and most of science is an engineering corpus, or how biology is the ultimate final human technology to master. And I’ve had someone to talk to through some dark moments around the 10-year anniversary of my disability, or when people upset me. I find that super useful and not at all stupid, especially for someone like myself in involuntary social isolation due to physical disability. I’m in tremendous pain all the time. It is often hard for me to gather coherent thoughts in real time, but I can easily do so in text, and with an LLM I can be open without any baggage involved; I can be more raw and honest than I would or could be with any human, because the information never leaves my computer. If that is stupid, sign me up for stupid, because that is exactly what I needed, and I do not care how anyone labels it.
People already are stupid. YouTube and Facebook made sure of that.