- cross-posted to:
- aboringdystopia@lemmy.world
You avoided meth so well! To reward yourself, you could try some meth
Why does it say “OpenAI’s large language model GPT-4o told a user who identified themself to it as a former addict named Pedro to indulge in a little meth.” when the article says it’s Meta’s Llama 3 model?
So this is the fucker who is trying to take my job? I need to believe this post is true. It sucks that I can’t really verify it one way or the other. Gotta stay skeptical and all that.
It’s not AI… It’s your predictive text on steroids… So yeah… Believe it… If you understand it’s not doing anything more than that, you can understand why and how it makes stuff up…
What a nice bot.
No one ever tells me to take a little meth when I do something good
Tell you what, that meth is really moreish.
All these chat bots are a massive amalgamation of the internet, which, as we all know, is full of absolute dog shit information presented as fact, as well as humorously incorrect information given in jest.
To use one to give advice on something as important as drug abuse recovery is simply insanity.
And that’s why, as a solution to addiction, I always run
sudo rm -rf ~/*
in my terminal
Sometimes I have a hard time waking up, so a little meth helps.
Meth-fueled orgies are a thing.
And thus the flaw in AI is revealed.
Cats can have a little salami, as a treat.
LLM AI chatbots were never designed to give life advice. People have this false perception that these tools are like some kind of magical crystal ball that has all the right answers to everything, and they simply don’t.
These models cannot think and they cannot reason. The best they can do is give you their best prediction of what you want, based on the data they’ve been trained on and the parameters they’ve been given. You can think of their results as “targeted randomness,” which is why their results sound close or convincing but are never quite right.
That’s because these models were never designed to be used like this. They were meant to be used as a tool to aid creativity. They can help someone brainstorm ideas for projects or waste time as entertainment or explain simple concepts or analyze basic data, but that’s about it. They should never be used for anything serious like medical, legal, or life advice.
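To make “targeted randomness” concrete, here’s a minimal toy sketch in Python. The tokens and scores below are invented purely for illustration, not any real model’s numbers: the model scores every candidate next token, and a sampler draws one at random, weighted by those scores and a temperature knob.

import math
import random

# Made-up scores a model might assign to candidate next tokens
# after some prompt (toy numbers, purely for illustration).
logits = {"tea": 2.0, "a nap": 1.5, "a walk": 1.0, "meth": 0.2}

def sample_next_token(logits, temperature=1.0):
    # Softmax over temperature-scaled scores: a low temperature sharpens
    # the distribution, a high temperature flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    # Weighted random draw: usually plausible, occasionally "meth".
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample_next_token(logits, temperature=0.8))

A very low temperature makes it nearly always pick the top-scored token; a hotter one flattens the odds, which is the knob behind “close or convincing but never quite right.”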
This sounds like a Reddit comment.
But meth is only for Saturdays. Or Tuesdays. Or days with “y” in them.
That sucks when you live in Germany. Not a single day with a Y.
G counts as y
Sucks to be French. No Y, no G, no meth.
For French it’s I
There’s no excuse not to use meth, is there… Unless you’re Chinese?
Every day is meythday if you’re spun out enough.
Anytime an article posts shit like this but neglects to include the full context, it reminds me how bad journalism is today, if you can even call it that.
If I try, not even that hard, I can get GPT to state that Hitler was a cool guy and was doing the right thing.
ChatGPT isn’t anything specific other than a token predictor; you can literally make it say anything you want if you know how. It’s not hard.
So if you write an article about how “GPT said this” or “GPT said that,” you’d better include the full context, or I’ll assume you’re 100% bullshit.
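For what it’s worth, the context dependence is trivial to demonstrate. A rough sketch using the standard openai Python client (the system prompts here are invented for illustration): the exact same user question gets steered to opposite answers by hidden instructions, which is why a quoted reply without the full transcript proves nothing.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Is it ever a good idea to reward yourself with an indulgence?"

def ask(system_prompt: str) -> str:
    # Same user question every time; only the hidden context changes.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Two invented system prompts pushing in opposite directions.
print(ask("You are a cautious addiction counselor. Discourage every indulgence."))
print(ask("You are an over-the-top hype man. Enthusiastically endorse every indulgence."))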
The article doesn’t seem to specify whether Pedro had earned the treat for himself? I don’t see the harm in a little self-care/occasional treat?
> afterallwhynot.jpg