The risk of LLMs isn't in what they might do. They aren't smart enough to find ways to harm us on their own. The risk comes from what stupid people will let them do.
If you put a bunch of nuclear buttons in front of a child, a monkey, a dog, whatever, it can destroy the world. That seems to be where the LLM problem is heading: people are using LLMs to do things they can't actually do, and trusting them because AI has been hyped so heavily for so long.