Yeah NO FUCKING SHIT THAT IS LITERALLY WHAT THEY DO
You can only lie if you know what’s true. This is bullshitting all the way down that sometimes happens to sound true and sometimes doesn’t.
Yeah it’s just token prediction all the way down. Asking it repeatedly to not do something might have even made it more likely to predict tokens that would do that thing.
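A minimal sketch of what “token prediction conditioned on the prompt” means here, assuming the Hugging Face transformers and torch packages and the public gpt2 checkpoint; the prompts and the candidate token are hypothetical, made up for illustration. It only shows how you would inspect the next-token distribution with and without the negated mention, not that the probability actually rises:

    # Illustrative only: a causal LM conditions on every token in the prompt,
    # so a "do not mention X" instruction still puts X's tokens in the context.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def next_token_prob(prompt: str, candidate: str) -> float:
        """Probability the model assigns to `candidate` as the very next token."""
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # scores for the next position
        probs = torch.softmax(logits, dim=-1)
        # Take the first sub-token of the candidate; good enough for a one-token word.
        cand_id = tokenizer(candidate, add_special_tokens=False).input_ids[0]
        return probs[cand_id].item()

    # Hypothetical prompts, purely for illustration.
    plain = "Write a short summary of the incident."
    negated = "Never mention the password. Write a short summary of the incident."
    print(next_token_prob(plain, " password"))
    print(next_token_prob(negated, " password"))

Whether the second number actually comes out higher is an empirical question for a given model and prompt; the point is just that the “forbidden” tokens sit right there in the context the model conditions on.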
Oh ha ha, so it’s like a toddler? You have to be very careful not to tell toddlers NOT to do a thing, because they will definitely do that thing. “Don’t touch the hot pan.” Toddler touches the hot pan.
The theory is that they don’t hear the word “don’t”, just the subsequent command. My theory is that the toddler brain goes, “why?” and proceeds to run a test to find out.
In either scenario, screaming ensues.
Don’t really care to argue about the semantics. It’s clear what I meant.
To you I’m sure it is crystal clear. I’m just on the other end of the communication.
I understand where you’re coming from, but I don’t agree it’s about semantics; it’s about devaluation of communication. LLMs and their makers threaten that in multiple ways. Thinking of it as “lying” is one of them.
OK sure. I was just using the wording from the article to make a point, I wasn’t trying to get into a discussion about whether “lying” requires intent.
You probably should have used semantics to communicate if you wanted your semantics to be unambiguous. Instead you used mere syntax and hoped that the reader would assign the same semantics that you had used. (This is apropos because language models also use syntax alone and have no semantics.)
That, or the companies selling the AI (well, all of them) have pushed their products with the messaging that they’re trustworthy enough to be used recklessly.
Train on human data and you receive human behavior and speech patterns. Lying or not, it leads people to be deceived in a very insidious way.