AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 16 hours ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
El Barto@lemmy.world · 1 day ago
LLMs deal with tokens. Essentially, predicting a series of bytes. Humans do much, much, much, much, much, much, much more than that.
Zexks@lemmy.world · 19 hours ago
No. They don’t. We just call them proteins.
stickly@lemmy.world · 18 hours ago
You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks’ Four Humours theory.
> They

What are you?