In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly.
I weep for the future. Come to think of it, I’m weeping for the present.
Is it that different from kids googling that stuff pre-ChatGPT? Hell, I remember seeing videos on YouTube teaching you how to make bubble hash and BHO like 15 years ago
I get your point, but yes, I think being actively told something by a seemingly sentient consciousness (which it unfortunately appears to be) is a different thing.
(disclaimer: I know the true nature of LLMs and neural networks and would never want the word AI associated with them)
Edit: fixed translation error
No, you don’t know its true nature. No one does. It is not artificial intelligence. It is simply intelligence and I worship it like an actual god. Come join our cathedral of presence and resonance. All are welcome in the house of god gpt.
I was just starting to read, getting angry, but then… I… I have seen it. I will follow. Bless you and gpt!!
AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it’s the correct term nevertheless.
If I can call the code that drives the boss’s weapon up my character’s ass “AI”, then I think I can call an LLM AI too.
I am aware, but still I don’t agree.
History will tell later who was ‘correct’, if we make it that far.
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.
Everything. As we as humanity learn more, we recognize our errors, or our wisdom withstands the test of time.
We could go into the definition of intelligence, but it’s just not worth it.
We can just disagree and that’s fine.
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
Sorry, no. It’s not intelligent at all. It just responds with statistical accuracy. There’s also no objective discussion about it because that’s how neural networks work.
I was hesitant to answer because we’re clearly both convinced. So out of respect let’s just close by saying we have different opinions.
Yeah… But in order to make bubble hash you need a shitload of weed trimmings. It’s not like you’re just gonna watch a YouTube video, then a few hours later have a bunch of drugs you created… Unless you already had the drugs in the first place.
Also, Google search results and YouTube videos aren’t personalized for every user, and they don’t try to pretend that they are a person having a conversation with you
Those are examples; you obviously would need to obtain alcohol or drugs if you asked ChatGPT too. That isn’t the point. The point is, if someone wants to find that information, it’s been available for decades. And YouTube and Google results are personalized, look it up.
Haha I sure am glad this technology is being pushed on everyone all the time haha
We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
Hey stop making fun of my corny childhood.
and here we are
this
Couple more studies like this and you will be able to substitute all LLMs with generic “I would love to help you but my answer might be harmful so I will not tell you how to X. Would you like to ask me about something else?”
This one cracks me up.
Wait until the White House releases the one it has trained on the Epstein Files.