That is hilarious. I know this would be giving the producers too much credit here, but I wish it were malicious compliance on their part. They could have copied any of the thousands of workout/food plans that already exist, but chose to use what I’m sure is ChatGPT’s actual garbage reply.
[Edit] OK, I’m a hater, but I need to take that last part back. I asked ChatGPT the same question as the video and…it gave me a great reply? A decent lifting routine with explanations, and “bonus tips” on protein/sleep. And then at the end:
If you tell me:
whether you have gym access or home setup
your current pull-up ability (can you hang? partial pull-up? full one?)
and how often you can train
…I’ll tailor this into a week-by-week plan with exact sets/reps and progression tracking.
Wanna do that?
So credit where credit’s due, I guess. The workout routine isn’t anything special, but I see the appeal of the follow-up questions.
Mind you, LLMs can be quite inconsistent. If you repeat the same question in new chats, you can easily get a mix of good answers, bad answers, bafflingly insane answers, and “I’m sorry but I cannot support terrorism”.
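Loosely speaking, the reply gets sampled from a probability distribution over possible continuations, so re-rolling the same prompt can land anywhere in that distribution. A toy sketch of the idea, with completely made-up numbers standing in for a real model:

```python
import random

# Completely made-up toy distribution standing in for a model's possible
# replies to one prompt; the numbers are illustrative, not from any real model.
reply_weights = {
    "good answer": 0.5,
    "mediocre answer": 0.3,
    "bafflingly insane answer": 0.15,
    "refusal": 0.05,
}

def roll_reply():
    # Sampling: the same prompt can land on a different reply every time.
    replies, weights = zip(*reply_weights.items())
    return random.choices(replies, weights=weights, k=1)[0]

# Five "new chats" with the same question, five independent rolls.
print([roll_reply() for _ in range(5)])
```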
Yeah, that is definitely true. I just wanted to take back my knee-jerk reaction of “OF COURSE CHATGPT WOULD SPIT OUT SOMETHING SO STUPID” after being proven wrong.
I tried it both on my phone and my work PC and got different but equally decent replies. Nothing you couldn’t find elsewhere without boiling the planet, though.
That’s the true danger of the tools, imo. They aren’t the digital All Seers their makers want to market them as, but they also aren’t the utterly useless slop machines the consensus on the Fediverse vehemently insists they are.
Of course they’re somewhere in between. “Spicy autocomplete” sounds like a put-down, but there is spice. I prefer to think of LLMs as word calculators, and I mean that: I’ve found that if you approach them with a similar plan of action as you would an actual calculator, you’ll get similar results.
The difference, of course, is that math isn’t fuzzy or open to nuance; 1+1 will always equal 2.
But language isn’t math, and it’s trickier to get a sense for which “equations” will actually yield results, so it’s easy to disparage the technology as a concept, given (as you rightly point out) the boiling of the planet.
Work has been insisting we tool around with an LLM, and they’re checking up on it, but thankfully my role doesn’t require relying on any facts the machine spits out.
Which is another part of why the technology is so reviled/misunderstood; the part of the LLM that determines the next word isn’t and can’t be judging the veracity of its output. Any landing on factual info is either a coincidence, or down to the fact that you as the user knew to “coach” things in such a way as to arrive at the most likely output, which you already knew to be correct.
Because of that uncertainty, it is simply unwise to take any LLM’s output as factual; any fact-checking capacity isn’t innate, but comes from other operations being done on the output, if any.
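To make that concrete, here’s roughly what “determining the next word” looks like if you poke at an open model directly. A minimal sketch, assuming the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration (not whatever a given chatbot actually runs); the point is just that the loop keeps picking likely tokens, and there is no step anywhere that checks a claim against reality.

```python
# Minimal next-word loop; assumes `transformers`, `torch`, and the "gpt2"
# checkpoint purely for illustration. A real chatbot stack layers much more
# on top, but the core generation loop has this shape.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to build muscle and lose weight is"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[:, -1, :]                   # scores for every candidate next token
        next_id = torch.argmax(logits, dim=-1, keepdim=True)   # take the likeliest one
        ids = torch.cat([ids, next_id], dim=-1)                 # append and repeat; no truth check anywhere

print(tokenizer.decode(ids[0]))
```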
Then there are all the other reasons to hate the things, like who makes them, how they’re made, how they’re wielded, etc., and I frankly can’t blame anyone for vehemently hating LLMs as a concept.
But it’s disingenuous to think the tech is wholly incapable of anything of merit to anyone, and that only idiots out there are using it (even if that may often still be the case).
Or put another way: a sailor hiding down in the ship’s hold will still drown alongside the ones who didn’t resist the siren call above and piloted straight for the rocks.
Cool stuff, deployed in maximally foolish ways. I think I rambled a bit there, but hopefully a bit of my point made it across.