

Coastal vacation 💀
Sorry, I thought this was more common; it’s definitely Wall Street Bets
Put me in my place if this is nonsense, but doesn’t it make way more sense if the astroturfing is done by WSB goons? I just don’t see corporate entities coordinating this kind of thing
I gave you the upvote because it’s a good take, but this really has nothing to do with the article, so I can tell that you and a bunch of your 58 upvoters didn’t read it
This is a pretty good take imo
Like AI, IoT is an important and lasting technology
But too many businesses and products jumped on a misguided bandwagon to pull stupid, uninformed VC money
I hear you, and there’s merit to the concerns. My counter is:
Quoting from the repo:
This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude’s output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.
“NOOOOOOOO!!! You can’t just use an LLM to write an auth library!”
“haha gpus go brrr”
In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn’t actually understand code and couldn’t produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh… the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.
To emphasize, this is not “vibe coded”. Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.
Again, please check out the commit history – especially early commits – to understand how this went.
Do you think that human communication is more than statistical transformation of input to output?
I actually think that (presently) self-hosted LLMs are much worse for hallucination
Every time I see a comment like this I lose a little more faith in Lemmy
No it doesn’t
Astroturfing is definitely a thing that exists, but what evidence is there that Palantir, the company, is spending resources that way?
Most companies have marketing and PR departments, not GCHQ and IDF veterans