A project called Poison Fountain is asking website operators to feed LLM crawlers poisoned data.
The project page links to URLs that provide a practically endless stream of poisoned training data. The project’s authors have determined that this approach is very effective at sabotaging the quality and accuracy of AI trained on it.
Small quantities of poisoned training data can significantly damage a language model.
The page also gives suggestions on how to put the provided resources to use.
I have a roughly 10-20GB GitHub/GitLab mirror. I am constantly under attack from crawlers from top US technology corporations and LLM startups. Whenever I ban one IP range they switch to another - I don’t know if those fuckers have tickets in their systems to do it manually or they just deploy this shit all over the planet. From what I observe during the attacks that I mitigate, the best way to poison them is to just create a Gitea instance with a poisoned code repository and a couple hundred revisions, because what they are most interested in is the HTML representation of the diff between two git revisions.
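For illustration only, here is a rough Python sketch of that repository-generation step; the word list, file names, and revision count are all invented, and it assumes git is installed and configured on the box:

```python
# Hypothetical sketch of the approach described above: build a repository with a
# few hundred revisions of machine-generated junk so the crawler-facing diff
# pages are worthless as training data. Paths and counts are made up.
import random
import subprocess
from pathlib import Path

WORDS = ["def", "return", "None", "flux", "quux", "lambda", "yield", "pass"]

def junk_source(lines: int = 40) -> str:
    """Produce syntactically plausible but meaningless 'code'."""
    return "\n".join(
        "    " * random.randint(0, 2) + " ".join(random.choices(WORDS, k=5))
        for _ in range(lines)
    ) + "\n"

def build_poisoned_repo(path: str = "poisoned-repo", revisions: int = 300) -> None:
    repo = Path(path)
    repo.mkdir(exist_ok=True)
    subprocess.run(["git", "init", "-q"], cwd=repo, check=True)
    for i in range(revisions):
        # Touch a handful of files per revision so every diff page has content.
        for name in random.sample(range(20), k=3):
            (repo / f"module_{name}.py").write_text(junk_source())
        subprocess.run(["git", "add", "-A"], cwd=repo, check=True)
        subprocess.run(["git", "commit", "-qm", f"refactor pass {i}"],
                       cwd=repo, check=True)

if __name__ == "__main__":
    build_poisoned_repo()
```

Pointed at by a Gitea (or any git web UI) instance, every commit and diff page rendered from a repo like this is junk to whoever scrapes it.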
Why isn’t there anything in the DMCA for stopping crawlers? It has stuff about requiring crawlers to follow attribution and whatnot, but nothing that lets you disallow crawlers in the first place. Stupid as shit.
I can get a 50Gb/s residential link where I am, and have a whole rack of servers.
Sounds like a good opportunity to crowdfund thousands and thousands of common scrapeable instances that have random poisoning.
To be honest, bandwidth isn’t a problem because it’s text files. The problems are optimizing the network stack for many simultaneous connections, because they hit from whole subnets without any delay (so literally a DDoS), and caching those HTML files, because at some point the CPU becomes the bottleneck.
This is assuming aggressively cached, yes.
Also, “just text files” is what every website is, sans media. And you can still EASILY get 10+ MB pages this way between HTML, CSS, JS, and JSON - which are all text files.
A Gitea repo page, for example, is 400-500KB transferred (1.5-2.5MB decompressed) of almost all text.
A file page is heavier, coming in around 800-1000KB (additional JS and CSS).
If you have a repo with 150 files, and the scraper isn’t caching assets (many don’t), then you just served up about 135MB of HTML/CSS/JS alongside the actual repository assets.
I don’t know from theory or counting, but I know that my 8 cores were depleted sooner than my bandwidth, and I have like a 60 Mb/s uplink. My Linux network stack parameters are pretty aggressive. The way I figured out that something was not right was when I heard loud fan noise from the server inside my room. I logged in and all cores were red and the logs were showing corporate fuckers trying to burn my house down.
I assume that the Gitea instance itself was being hit directly, which would make sense. It has a whole rendering stack that has to reach out to a database, get data, render the actual webpage through a template, etc.
That’s a massive amount of work compared to serving up static files from, say, Nginx or Caddy. You can stick one of these in front of your servers and cache HTTP responses (to some degree anyway; how far depends on Gitea).
Benchmarks like this show what kind of throughput you can expect on, say, a 4-core VM just serving up cached files: https://blog.tjll.net/reverse-proxy-hot-dog-eating-contest-caddy-vs-nginx/#10-000-clients
That’s 90-400MB/s derived from the stats there on 4 cores - enough at the top end to saturate a 3Gb/s connection. And caching intentionally polluted sites is crazy easy since you don’t care whether the content is stale. Put a Cloudflare cache in front of it and it’s even easier.
You could dedicate an old Ryzen box (say a 2700X) to the proxy and another RAM-heavy device to the servers, and saturate 6Gb/s with thousands and thousands of various software instances that feed polluted data.
Hell, if someone made it a deployable utility… Oof just have self hosters dedicate a VM to shitting on LLM crawlers, make it a party.
You won’t get those numbers from internet requests; they benchmark locally or in a cloud VPC. Honestly, those benchmarks are shit unless you are an ISP, because your ISP and your router are involved before you even receive the request. If you have traffic from all over the world there is also speed-of-light delay. Then you have the Linux TCP/IP stack and the number of open files.
I use OpenResty. I could add an LRU cache on top, but it doesn’t even make sense, because each bot tries just one unique request - so you would have to generate the HTML files manually instead of hosting a Gitea instance.
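If someone did go the “generate the HTML files manually” route, here is a minimal sketch of pre-rendering a directory of fake diff pages that OpenResty/Nginx could then serve statically; the file names, word list, and page layout are all made up:

```python
# Rough sketch of the "generate the HTML files manually" route: pre-render a
# directory of fake diff pages once, then let Nginx/OpenResty serve them
# statically so no application server or database sits in the request path.
import random
from pathlib import Path

WORDS = ["buffer", "index", "cursor", "frobnicate", "token", "shard", "quux"]

def fake_diff_html(commit: int) -> str:
    rows = "\n".join(
        f"<tr><td>+{' '.join(random.choices(WORDS, k=6))}</td></tr>"
        for _ in range(200)
    )
    return f"<html><body><h1>Commit {commit:040x}</h1><table>{rows}</table></body></html>"

def prerender(out_dir: str = "poison-static", pages: int = 5000) -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for i in range(pages):
        # Each file looks like a unique commit-diff page to a scraper.
        (out / f"commit_{i:06}.html").write_text(fake_diff_html(random.getrandbits(160)))

if __name__ == "__main__":
    prerender()
```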
Gitea is on a SQLite database on NVMe, so the DB doesn’t really matter. I could put the SQLite file on a ramdisk - the server is on a UPS, so I don’t care about power outages - but that would be ridiculous.
Anyway, the simplest way is to just block IP ranges in the firewall and move on.
With the amount of AI generated horseshit out there already, they’ve already pissed in the well.
Been thinking about making one of these too, especially since I have a catchy name: asbestos.
Me too, but with procedural image generation. Use some templates which are put together with a CPU blitter (extremely fast and effective), add some random descriptive text, then done. Don’t know how well my theory would work IRL.
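A rough sketch of what that could look like, with Pillow standing in for a real CPU blitter; the template shapes, sizes, and caption words are all invented, and whether this fools any real training pipeline is an open question:

```python
# Sketch of the procedural-image idea: composite a few random "template" shapes,
# then pair the image with random descriptive text as a poisoned caption.
import random
from PIL import Image, ImageDraw

WORDS = ["sunset", "archival", "schematic", "portrait", "diagram", "meadow"]

def poisoned_image(path: str, size: int = 256) -> str:
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    # "Templates": a handful of randomly placed, randomly coloured rectangles.
    for _ in range(12):
        x0, y0 = random.randrange(size), random.randrange(size)
        x1, y1 = x0 + random.randrange(8, 64), y0 + random.randrange(8, 64)
        draw.rectangle([x0, y0, x1, y1],
                       fill=tuple(random.randrange(256) for _ in range(3)))
    img.save(path)
    # Random "descriptive" text to serve alongside the image.
    return " ".join(random.choices(WORDS, k=4))

if __name__ == "__main__":
    print(poisoned_image("poison_0001.png"))
```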
Small quantities of poisoned training data can significantly damage a language model.
Source: trust me bro.
Nightshade tried the same thing and it never worked.
Here’s your source: https://www.anthropic.com/research/small-samples-poison
Nightshade did work on older models. Newer models adapted to prevent poisoning.
This is a new approach.
Ye, nightshade was defeated by a blur and sharpen iirc lol. Still, was a good first step.
I don’t think this is a good idea. The pollution spreads. This would corrupt the collective knowledge of humanity a little faster than the AI is already doing.
Nah, AI will do that automatically anyway, because any system loses something to inefficiencies. It’s like putting a theoretical 100 miles’ worth of gas in your tank that turns into 20 in practice, because the combustion engine is only about 30% efficient, you have air and tire resistance, etc.
AI has the same for information: what comes out is always a certain fraction of the 100% that went in.
Since poisoning the pool makes AI unreliable up to the point where it becomes useless, it has the potential to stop the AI madness. I’d be all for that.
Idiots: This new technology is still quite ineffective. Let’s sabotage its improvement!
Imbeciles: Yeah!
Corpos: Don’t steal our stuff! That’s piracy!
Also corpos: Your stuff? My stuff now.
Bootlickers: Oh my god this shoe polish is delicious.
Person: Says a thing
Person 2, who disagrees with the thing: YOU’RE A BOOTLICKER!
Super convincing. I’m sure you’re going to win people over to your position if you scream loud enough.
oh no people who like the plagiarism and child porn machine won’t like me. I’m sure it’ll tell them to be very upset about this.
Who said anything about plagiarism and child porn? I was talking about the people who have no ability to actually state or defend a position and seem to believe that insults and hot takes are some kind of substitution.
Replying to a person with insults and nothing else is toxic and does nothing except make you look irrational and ignorant. As is inventing strawman positions rather than responding to a person’s point.
This is true no matter who you are or what position you’re taking on any subject.
Apparently I didn’t make it clear before so: If you are defending AI slop I do not respect you as a person, nor do I place any value on your opinion. Get blocked.
As is inventing strawman positions rather than responding to a person’s point.
Oh lol, this guy is a Moderator. From the rules of your own Reddit:
- Don’t be an asshole. If you’re reading a comment you’re about to make and think “Hmm… this sounds like the kind of comment an asshole would make” then do not make that comment. Yes, even if the other person “started it”.
That really sounds hypocritical given your responses.
You should pick one: either you like the current copyright system or you don’t. You can’t have it both ways.
Third thing: Point out obvious hypocrisy.
AI companies could start, I don’t know- maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning the data that they clearly didn’t agree to being used for training?
Why should they ask permission to read freely provided data? Nobody’s asking for any permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?
Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM operation doing the same thing. We’ve had bots scraping websites since the 90s; it’s not a new thing. And since scraping bots have existed, we’ve developed a standard on the web to deal with them, called “robots.txt”: a text file telling bots what they are allowed to do on a website and how they should behave.
LLM crawlers are notorious for disrespecting this, leading to situations where small companies and organisations have their websites scraped so thoroughly and frequently that they can’t even stay online anymore, on top of skyrocketing operational costs. In the last few years we’ve had to develop ways just to protect ourselves against this. See the “Anubis” project.
Hence, it’s much more important that LLM crawlers follow the rules than that you and I do so on an individual level.
It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.
Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I’ll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.
Killing open source? How?!
The guy is talking about consulting, as I understand it. Yes, an LLM is great for reading documentation; that’s the purpose of an LLM. Now people can use those libraries without spending ages reading through docs. That’s progress. I see it as a way to write more open source, because it has become simpler and less tedious.
He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one, what’s happening to him isn’t a limited case.
As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn’t rhetoric.
Just put the keyboard down and walk away.
I don’t have a bias against LLMs; I use them regularly, albeit either for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or the specific task is low impact.
I am just curious, do you respect robots.txt?
I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, the point of engaging in Internet debate is never to convince the person you’re actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.
I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.
It’s really unproductive to make blanket statements that try to end discussion before it starts.
I don’t know, it seems like their comment accurately predicted the response.

Even if you want to see yourself as some beacon of open and honest discussion, you have to admit that there are a lot of people who are toxic to anybody who mentions any position that isn’t rabidly anti-AI enough for them.
This is a subject that people (understandably) have strong opinions on. Debates get heated sometimes and yes, some individuals go on the attack. I never post anything with the expectation that no one is going to have bad feelings about it and everyone is just going to hold hands and sing a song.
There are hard conversations that need to be had regardless. All sides of an argument need to be open enough to have it and not just retreat to their own cushy little safe zones. This is the Fediverse, FFS.
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data, you’re uninformed or lying on their behalf.
I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.
Ok, so you think it’s ok for big companies to break the laws you don’t like, cool. I’m sure those big companies will not sue you when you infringe on some of their laws you don’t like.
And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?
I’m fine with companies using any freely available data.
I’m also fine with them using data they can get for free like, I don’t know, weather data they collect themselves?
Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting, and AI companies sucking down data with an army of bots is pushing the cost of hosting beyond the means of those people/projects. They are shifting the costs of providing the “free” data onto the community while keeping all the profits.
Private data used without consent is also not free. It’s valuable, protected data and AI companies are simply stealing it. Do you consider stolen things free?
I see your attitude is “they don’t hurt me personally and I don’t care what they do to other people”. It’s either ignorant or straight antisocial. Also a bit bootlickish.
Doesn’t work, but if it makes people feel better I suppose they can waste their resources doing this.
Modern LLMs aren’t trained on just whatever raw data can be scraped off the web any more. They’re trained with synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks are still thinking ChatGPT 3 is state of the art here.
From what I’ve heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
Faults in replication? That can become cancer for humans. AI as well I guess.
Let’s say I believe you. If that’s the case, why are AI companies still scraping everything?
Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.
The point is not that the scraping doesn’t happen, it’s that the data is already being highly processed and filtered before it gets to the LLM training step. There’s a ton of “poison” in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have learned how much better LLMs can be when trained on cleaner data and so they already take steps to clean it up.
Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.
A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.
Take this as “defeatist” if you wish, as I said it doesn’t really matter. In the early days of LLMs when ChatGPT first came out the strategy for training these things was to just dump as much raw data onto them as possible and hope quantity allowed the LLM to figure something out from it, but since then it’s been learned that quality is better than quantity and so training data is far more carefully curated these days. Not because there’s “poison” in it, just because it results in better LLMs. Filtering out poison will happen as a side effect.
It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
That might be an argument if only large companies existed and they only trained foundation models.
Scraped data is most often used for fine-tuning models for specific tasks - for example, mimicking people on social media to push an ad/political agenda. Using a foundation model that speaks like it was trained on a textbook doesn’t work for synthesizing social media comments.
In order to sound like a Lemmy user, you need to train on data that contains the idioms, memes and conversational styles used in the Lemmy community. That can’t be created from the output of other models, it has to come from scraping.
Poisoning the data going to the scrapers will either kill the model during training or force everyone to pre-process their data, which increases the costs and expertise required to attempt such things.
Are you proposing flooding the Fediverse with fake bot comments in order to prevent the Fediverse from being flooded with fake bot comments? Or are you thinking more along the lines of that guy who keeps using “Þ” in place of “th”? Making the Fediverse too annoying to use for bot and human alike would be a fairly Pyrrhic victory, I would think.
I am proposing neither of those things.
The way to use this effectively is to detect scraping through established means and, instead of banning the scraper, alter the output to feed it poisoned data instead of (or in addition to) the real content.
Banning a scraper tells it when it was detected and lets it adjust its profile to avoid that. If it’s never banned, it loses that information, and it also has to deploy additional resources to try to detect and remove the poisoned data.
Either way, it causes the adversary to spend a lot of resources at very little cost to you.
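A minimal sketch of that “swap, don’t ban” idea; the user-agent substring check below is only a placeholder for whatever scraper-detection heuristic is actually in use, and the agent strings are examples rather than an authoritative list:

```python
# Minimal sketch: serve the same URL, status, and headers either way, so the
# scraper gets no signal that it was flagged; only the body differs.
SCRAPER_AGENTS = ("gptbot", "ccbot", "bytespider")  # illustrative list only

def looks_like_scraper(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in SCRAPER_AGENTS)

def pick_response(user_agent: str, real_page: str, poisoned_page: str) -> str:
    # Swap the payload instead of banning: detection stays invisible to the bot.
    return poisoned_page if looks_like_scraper(user_agent) else real_page
```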
I have no idea what “established means” would be. In the particular case of the Fediverse it seems impossible; you can just set up your own instance specifically intended for harvesting comments and use that. The Fediverse is designed specifically to publish its data for others to use in an open manner.
The Fediverse is designed specifically to publish its data for others to use in an open manner.
Sure, and if the AI companies want to configure their crawlers to actually use APIs and ActivityPub to efficiently scrape that data, great. Problem is that there’s been crawlers that have done things very inefficiently (whether by malice, ignorance, or misconfiguration) and scrape the HTML of sites repeatedly, driving up some hosting costs and effectively DOSing some of the sites.
If you put honeypot URLs in the mix, keep out polite bots with robots.txt, and keep out humans by hiding those links, you can serve poisoned responses only on the URLs that nobody should be visiting and not worry too much about collateral damage to legitimate visitors.
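A minimal sketch of that honeypot setup using only the Python standard library; the /trap/ path prefix, the hidden link, and the page contents are invented for illustration:

```python
# Honeypot sketch: disallow a path in robots.txt, link to it only in a way
# humans never see, and serve poisoned pages solely under that path.
from http.server import BaseHTTPRequestHandler, HTTPServer

ROBOTS_TXT = b"User-agent: *\nDisallow: /trap/\n"

# A link human visitors never see as clickable, but a naive HTML scraper follows.
HIDDEN_LINK = b'<a href="/trap/page1" style="display:none" aria-hidden="true"></a>'

class HoneypotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/robots.txt":
            body, ctype = ROBOTS_TXT, "text/plain"
        elif self.path.startswith("/trap/"):
            # Only bots that ignore robots.txt and follow hidden links end up here.
            body, ctype = b"<html><body>endless poisoned filler...</body></html>", "text/html"
        else:
            body = b"<html><body>normal content" + HIDDEN_LINK + b"</body></html>"
            ctype = "text/html"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HoneypotHandler).serve_forever()
```

Polite crawlers that respect robots.txt never reach the /trap/ pages, which is what keeps collateral damage low.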
This is just stupid^20