Artificial Generalized Incompetence
I mean, I’m not going to spend time trying to duplicate their results, but it wouldn’t even slightly surprise me. Cops have been using ChatGPT to streamline their bullshit cop-lingo incident reports, to the extent that it’s caught the notice of lawyers and judges… 100% I believe that the dolts who shit out Trump’s tariff rates used it too.
There’s a ton of papers on Google Scholar that still include phrases like “Let’s delve into…” That shows it was used not just to translate, but for the research itself.
And someone did replicate this, and ChatGPT 4o, o1, Claude and Grok all came up with the same formula for an “easy” way to calculate tariffs.
And someone did replicate this
Can you recall who?
Thanks, much appreciated.
The United States of America. A nation ruled by word salad.
How about the outlet checks and finds out?
I did, and I couldn’t get low-temperature Gemini or a local LLM to replicate it, and not all the tariffs seem to be based on the trade deficit ratio, though some suspiciously are.
Sorry, but this is a button of mine: outlets that ask stupidly-easy-to-verify questions but don’t even try. No, just cite people on Reddit and Twitter…
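For anyone who wants to check the claim themselves, here’s a sketch of the formula people say the bots converged on: the trade deficit divided by imports, halved, with a 10% floor. This is an illustration of what was reported, not a confirmed description of how the rates were actually produced.

```python
def alleged_tariff_rate(imports_usd: float, exports_usd: float) -> float:
    """Sketch of the widely reported 'reciprocal tariff' formula:
    half the deficit-to-imports ratio, floored at 10%.
    Illustrative only -- not a confirmed method."""
    deficit = imports_usd - exports_usd   # US trade deficit with the country
    ratio = deficit / imports_usd         # deficit as a share of US imports
    return max(0.10, ratio / 2)          # halved, with a 10% minimum

# Example: imports $100B, exports $40B -> deficit ratio 0.6 -> 30% tariff
print(alleged_tariff_rate(100e9, 40e9))
# A trade surplus just hits the 10% floor
print(alleged_tariff_rate(50e9, 80e9))
```

If a country’s announced rate matches this output for its published trade numbers, that’s the “suspicious” correlation being discussed; if it doesn’t, that country is a counterexample.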
They tariffed places with no people in them.
though some suspiciously are.
Some? A huge portion are. Numerous others have replicated it with visual proof. I agree that the news sites should be verifying it, but NYT did and also documented their proof.
Because the article is likely just more GenAI vomit, and an LLM doesn’t have any degree of deductive reasoning ability to begin with.
TBH it’s probably human written.
I used to write small articles for a tech news outlet on the side (HardOCP), and the entire site went under well before the AI boom, because no one can compete with conveyor belts of thoughtless SEO garbage, especially when Google promotes it.
Point being, this was a problem well before the rise of LLMs.
Did ChatGPT come up with the color of the sky? AI chatbots ChatGPT, Gemini, Claude and Grok all return the same color for the sky, several X users claim.
The sky color is part of the training data. These tariff rates weren’t — how could the LLMs have had them in their training data before they existed?
All the search engines search the same internet, find similar text, output it using similar formulas.
Actually, it was the Palantir Gotham threat model… which runs a private ChatGPT model on the backend :(
I tried replicating this myself and got no similar results. It took enough coaxing just to get the model to stop citing existing tariffs, then to make it talk about entire nations instead of tariffs on specific sectors, and even then it mostly just gave 10, 12, and 25% for most of the answers.
I have no doubt this is possible, but until I see some actual amount of proof, this is entirely hearsay.