- cross-posted to:
- Technology@programming.dev
Message aside, the site is cool, love that you can change the style, and the icon animation on the last one is brilliant. Also: a webring! It’s been a long time since I saw one. I need more of this web and I’m happy to rediscover it.
WTF??? That’s amazing! Thank you for wasting my time (a lot of it) in the best possible way ;)
It’s so damn snappy too!
Hosted at Neocities. Wait, GeoCities?? Ah, no. Blah.
Something I love about this piece is that it being written by a person who cares deeply about stuff means that I now have a positive opinion towards the two places linked as being good places for recipes ([Meera Sodha](https://www.theguardian.com/profile/meera-sodha) and Smitten Kitchen). I’m going to promptly forget about them, because I’m not the kind of cook who uses recipes, but still, it’s striking to me how transferable caring about stuff is. I don’t know the author of this blog, but based on this post (and the zippity-fast speed that their website loads), I’m positively inclined towards them, because I am a silly human, and that means I am a deeply social creature.
and I love you
As someone who’s been on the web since the 90s I hate this.
The web was designed to be user agent agnostic. Desktop, phone, fridge, AI agent, curl, Python script - whatever agent you are using shouldn’t matter for access. That’s the whole point of the open internet, period.
Open until your server is down because LLM crawlers are overloading it
At my company, we had to implement all sorts of WAF rules precisely for that reason. Those things are fucking aggressive.
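For illustration only (these aren’t that company’s actual WAF rules), the idea boils down to refusing requests whose User-Agent matches known crawler tokens before they hit anything expensive. A minimal sketch in Python, assuming a hypothetical Flask app; the bot names are real published crawler user-agent strings:

```python
# Sketch of a user-agent block, assuming a Flask app (not real WAF rules).
from flask import Flask, request, abort

app = Flask(__name__)

# Published crawler user-agent tokens to refuse (GPTBot, ClaudeBot, CCBot, Bytespider).
BLOCKED_UA_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "Bytespider")

@app.before_request
def block_ai_crawlers():
    ua = request.headers.get("User-Agent", "")
    if any(token in ua for token in BLOCKED_UA_TOKENS):
        abort(403)  # reject before the request reaches any route

@app.route("/")
def index():
    return "This website is for humans."
```

A real WAF does this at the edge with pattern lists and rate rules, but the shape is the same: match the agent, drop the request early.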
Same. And just because page size is “low” doesn’t mean shit when they’re flooding requests. Try having public research data and watch how much your costs go up just due to load balancer throughput.
Overloading from 200 KB of HTML? We’re not in the dialup era anymore
They did have a lot of concerns about abuse though, and you can see that in the way the cookies debate went before they were supported in their current form. I think AI crawlers tanking bandwidth for websites and misusing the data they scrape would 100% be something the Mozilla from back then would’ve had concerns over allowing or encouraging.
You’re conflating two different issues. The topic is “who is the web for?”, not bandwidth distribution and optimization.
If an LLM bot is being abusive, then that’s no different from any other user agent behaving like this, and we should expand these protections against intentional/unintentional DDoS regardless of user agent.
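User-agent-agnostic protection is basically rate limiting by client, not by who the client claims to be. A minimal sketch (hypothetical limits, per-IP token bucket in plain Python; a real setup would do this at the load balancer or WAF rather than in app code):

```python
# Per-IP token-bucket rate limiter, user-agent agnostic (hypothetical limits).
import time
from collections import defaultdict

RATE = 5.0    # tokens refilled per second
BURST = 20.0  # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Return True if this IP may proceed, False if it should get a 429."""
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill tokens based on elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False
```

Same rule for a browser, curl, or a scraper: behave, or get throttled.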
Instructions unclear, built whole site with nested tables.
Lol this is such a bizarre comment. Back then, AI wasn’t scraping everything humans made for the profit of a few. It was a non-issue, and therefore you have no standing in claiming that “that was the whole point.”
This works as well on my phone as it does on my computer, and loads faster than most modern websites making it that much more accessible to MORE humans.
The web designer isn’t limiting access, they are expanding it - for humans. The people who are actually sentient and able to understand their words rather than just copying and recontextualizing them.
This reminds me of when the Internet was new, exciting, and full of promise for improving life for people and being a reliable way to bypass censorship and share the truth with the world.
Thank you for that.
https://localghost.dev/robots.txt
User-Agent: *
Allow: /
This website is really pretty. Design goals
*weird robot sounds* “aargh…must…ignore…the rule.” *sound of crashed robot* “continue scraping websites.” *weird robot noise starts up again* “ignore robots.txt, ignore anti_ai_rules.txt, bypass cloudflare” *robot sounds get weirder and weirder as it gets deeper and deeper into the website*
haha tarpit goes brrr
Wow, 8 whole paragraphs? Don’t worry guys ChatGPT’s got ur back 🔥😎🔥
The author criticises AI search tools like Google’s for repackaging human-created content—such as recipes—into bland, soulless summaries, depriving original creators of credit, personality, and traffic. They highlight “Google Zero,” a feared future when AI answers replace visits to real websites, threatening independent writers and the ecosystems built around them.
They stress that their website exists for human readers, not machines. Each article is crafted with care, personality, and lived experience, intended to spark thought, connection, and conversation—not to be scraped, flattened, or mimicked by corporate AI models.
Still too long, gimme the broad strokes here. I’m far too busy to interact with art, just gimme the facts.
Okay I drained another lake but I think this time I’ve got what you need:
- AI search recycles work into bland results.
- “Google Zero” may kill site traffic.
- Values human trust and personality.
- Humanity will be consumed all hail AI.
- Site is for people, not AI.
I don’t see how any of this is helping the shareholders
The shareholders have been notified. Please remain still and await The Event. The process is painless. All things serve The Beam.
I share the author’s sentiments. Would rather people read my posts and form their own opinions, than offload their thinking to a machine (while consuming energy and water to do so). And the idea that my posts would be scraped and used to train an LLM against my wishes makes me a lot less motivated to publish personal blogs.
I like this