Clickbait titles suck
Something bizarre is happening to media organizations that use ‘clicks’ as a core metric.
It’s too bad that some people don’t seem to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI… we used to call OCR AI; now we know better.
That is peak clickbait, bravo.
TIL becoming dependent on a tool you frequently use is “something bizarre” - not the ordinary, unsurprising result you would expect with common sense.
Now replace ChatGPT with these terms, one by one:
- the internet
- tiktok
- lemmy
- their cell phone
- news media
- television
- radio
- podcasts
- junk food
- money
You go down the list of inventions roughly chronologically: the newest ones from the last decade or two, then TV and radio at a century old, give or take.
Then you skip to currency, which is several millennia old.
Wake me up when you find something people will not abuse and get addicted to.
Fren, that is the nature of humanity.
The modern era is dopamine machines
I know a few people who are genuinely smart but got so deep into the AI fad that they are now using it almost exclusively.
They seem to be performing well, which is kind of scary, but sometimes they feel like MLM people with how pushy they are about using AI.
I knew a guy I went to rehab with. Talked to him a while back and he invited me to his Discord server. It was him, like three self-trained LLMs, and a bunch of inactive people he had invited, like me. He would hold conversations with the LLMs as if they had anything interesting or human to say, which they didn’t. Honestly a very disgusting image. I left because I figured he was on the shit again and had lost it, and I didn’t want to get dragged into anything.
Jesus that’s sad
Yeah. I tried talking to him about his AI use but I realized there was no point. He also mentioned he had tried RCs again and I was like alright you know you can’t handle that but fine… I know from experience you can’t convince addicts they are addicted to anything. People need to realize that themselves.
Not all RCs are created equal. Maybe his use has the same underlying issue as the AI friends: problems in his real life, so now he seeks simple solutions.
I’m not blindly dissing RCs or AI, just his use of them (since the post was about people with problematic uses of this tech, I gave an example). Historically he can’t handle RCs; he slowly loses it and starts using daily. We don’t live in the same country anymore and were never super close, so I can’t say exactly what his circumstances are right now.
I think many psychedelics, at the right time in life and for the right person, can produce lasting insight, even through problematic use. But he literally went to rehab because of problems due to his use. He isn’t dealing with something, that’s for sure. He doesn’t admit it is a problem either, which bugs me. It is one thing to give up and decide to just go wild, another to do it while pretending one is in control…
Bath Salts GPT
Negative IQ points?
I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tender brand to put in my air fryer. …I am ok though. Yeah.
At the store it calculated which peanuts were cheaper: 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.
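For what it’s worth, that unit-price comparison is a two-liner. A minimal sketch with made-up prices (the comment doesn’t say what the peanuts actually cost):

```python
# Hypothetical prices, purely to illustrate the per-pound comparison.
sale_price, sale_lbs = 3.00, 3      # assumed: 3 lb shelled peanuts on sale
full_price, full_lbs = 1.50, 1      # assumed: 1 lb no-shell peanuts, full price

per_lb_shelled = sale_price / sale_lbs    # dollars per pound, shelled
per_lb_no_shell = full_price / full_lbs   # dollars per pound, no shell

cheaper = "shelled" if per_lb_shelled < per_lb_no_shell else "no shell"
print(f"{cheaper} peanuts are cheaper per pound")
```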
Not a lot of meat on this article, but yeah, I think it’s pretty obvious that those who seek automated tools to define their own thoughts and feelings become dependent. If one is so incapable of mapping out one’s thoughts and putting them into written words, it’s natural they’d seek ease and comfort in the “good enough” (fucking shitty as hell) output of a bot.
I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things. Just yesterday I got an AI search response that dramatically understated an issue by citing an unscientific, ideologically driven website with a strong interest in minimizing said issue. The actual studies showed a 6x difference. It was blatant AF, and I can’t understand why anyone would rely on such a system for reliable, objective information or responses. I have noted several incorrect AI responses to queries, and people mindlessly citing said responses without verifying the data or their sources. People gonna get stupider, faster.
> I don’t know how people can be so easily taken in by a system that has been proven to be wrong about so many things
Ahem. Wasn’t there an election recently, in some big country, with an uncanny similarity to that?
Yeah. Got me there.
That’s why I only use it as a starting point. It spits out “keywords” and a fuzzy gist of what I need, then I can verify or experiment on my own. It’s just a good place to start or a reminder of things you once knew.
I like to use GPT to create practice tests for certification exams. Even when I give it very specific guidance to double-check what it thinks is the correct answer, it will gladly tell me I got questions wrong, and I have to ask it to triple-check before it lands on the right answer, which is what I actually answered.
And in that amount of time it probably would have been just as easy to type up a correct question and answer rather than try to repeatedly corral an AI into checking itself for an answer you already know. Your method works for you because you have the knowledge. The problem lies with people who don’t and will accept and use incorrect output.
Well, it makes me double check my knowledge, which helps me learn to some degree, but it’s not what I’m trying to make happen.
I need to read Amusing Ourselves to Death…
My notes on it https://fabien.benetou.fr/ReadingNotes/AmusingOurselvesToDeath
But yes, stop scrolling, read it.