Funny that they’re calling them AI haters when they’re specifically poisoning AI that ignores the do not enter sign. FAFO.
First Albatross, First Out
Fair As Fuck Ok?
Sheesh people, it’s “fuck around and find out”. Probably more appropriate in the leopards eating face context but this works enough.
What are you talking about? FAFO obviously stands for “fill asshole full of”. Like FAFO dicks. Or FAFO pennies.
I’m glad you’re here to tell us these things!
Deploying Nepenthes and also Anubis (both described as “the nuclear option”) is not hate. It’s self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin’ survive thanks to these tools.
Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.
One of the United Nations websites deployed Anubis.
Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!
Thank you!
Thanks, interesting and brief read!
Wait what? I am uninformed, can you elaborate on the ScummVM thing? Or link an article?
From the Fabulous Systems (ScummVM’s sysadmin) blog post linked by Natanox:
About three weeks ago, I started receiving monitoring notifications indicating an increased load on the MariaDB server.
This went on for a couple of days without seriously impacting our server or accessibility–it was a tad slower than usual.
And then the website went down.
Now, it was time to find out what was going on. Hoping that it was just one single IP trying to annoy us, I opened the access log of the day
there were many IPs–around 35,000, to be precise–from residential networks all over the world. At this scale, it makes no sense to even consider blocking individual IPs, subnets, or entire networks. Due to the open nature of the project, geo-blocking isn’t an option either.
The main problem is time. The URLs accessed in the attack are the most expensive ones the wiki offers since they heavily depend on the database and are highly dynamic, requiring some processing time in PHP. This is the worst-case scenario since it throws the server into a death spiral.
First, the database starts to lag or even refuse new connections. This, combined with the steadily increasing server load, leads to slower PHP execution.
At this point, the website dies. Restarting the stack immediately solves the problem for a couple of minutes at best until the server starves again.
Anubis is a program that checks incoming connections, processes them, and only forwards “good” connections to the web application. To do so, Anubis sits between the server or proxy responsible for accepting HTTP/HTTPS and the server that provides the application.
Many bots disguise themselves as standard browsers to circumvent filtering based on the user agent. So, if something claims to be a browser, it should behave like one, right? To verify this, Anubis presents a proof-of-work challenge that the browser needs to solve. If the challenge passes, it forwards the incoming request to the web application protected by Anubis; otherwise, the request is denied.
As a regular user, all you’ll notice is a loading screen when accessing the website. As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources. As an AI company trying to scrape the website, you’ll quickly notice that CPU time can be expensive if used on a large scale.
I didn’t get a single notification afterward. The server load has never been lower. The attack itself is still ongoing at the time of writing this article. To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.
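For anyone curious how a proof-of-work gate like that works in principle, here’s a minimal sketch. This is my own illustration, not Anubis’s actual code (Anubis is a Go reverse proxy and runs the hashing in the visitor’s browser): the server hands out a random challenge, the client has to grind hashes until one meets a difficulty target, and the server can verify the answer with a single cheap hash.

```python
import hashlib
import os

DIFFICULTY = 4  # leading hex zeros required; an illustrative choice, not Anubis's setting


def issue_challenge() -> str:
    """Server side: hand the client a random challenge string."""
    return os.urandom(16).hex()


def solve(challenge: str) -> int:
    """Client side: brute-force a counter until the hash meets the difficulty.

    This is the expensive part a scraper has to pay for on every request.
    """
    counter = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return counter
        counter += 1


def verify(challenge: str, counter: int) -> bool:
    """Server side: verification is one cheap hash."""
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)


if __name__ == "__main__":
    c = issue_challenge()
    answer = solve(c)          # costs the client CPU time
    print(verify(c, answer))   # costs the server almost nothing -> True
```

The asymmetry is the whole point: a real browser pays a fraction of a second once per visit, while a scraper hammering thousands of dynamic wiki URLs pays that CPU cost on every single request.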
The Ars Technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt
AI tarpit 1: Nepenthes
AI tarpit 2: Iocaine
Thank you!!
Thanks for the links. The more I read of this, the more based it is.
I’m so happy to see that ai poison is a thing
Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.
Nice … I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess, in order to funnel as much money and control as possible to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.
All the while, we roast to death, because all of this will take more resources than the entire energy output of a medium-sized country.
Actually, if you think about it, AI might help climate change become an actual catastrophe.
It is already!
I will cite the scientific article later when I find it, but essentially you’re wrong.

What about training an AI?
According to https://arxiv.org/abs/2405.21015
The absolute most monstrous, energy-guzzling model tested needed 10 MW of power to train.
Most models need less than that, and non-frontier models can even be trained on gaming hardware with comparatively little energy consumption.
That paper, by the way, says there is a 2.4x YoY increase in model training compute, BUT it doesn’t mention DeepSeek, which rocked the Western AI world with comparatively little training cost (2.7M GPU hours in total).
Some companies offset their model training environmental damage with renewables and whatever bullshit, so the actual daily usage cost is more important than the huge cost at the start (Drop by drop is an ocean formed – Persian proverb).
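To put that DeepSeek figure in scale, here’s a rough back-of-envelope calculation; the per-GPU power draw is my own assumption, not a number from the paper or from DeepSeek.

```python
# Back-of-envelope only, with assumed numbers.
GPU_HOURS = 2.7e6    # DeepSeek's reported total training compute (from the comment above)
KW_PER_GPU = 0.7     # assumed average draw per GPU incl. cooling/overhead -- my guess

train_energy_gwh = GPU_HOURS * KW_PER_GPU / 1e6   # kWh -> GWh
print(f"{train_energy_gwh:.1f} GWh")              # ~1.9 GWh for the whole training run

# For scale: a single mid-sized country generates tens of thousands of GWh
# of electricity per year, so one such run is a rounding error next to that.
```

None of which says anything about inference at scale, which is the “drop by drop” part of the argument above.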
We’re racing towards the Blackwall from Cyberpunk 2077…
Already there. The blackwall is AI-powered and Markov chains are most definitely an AI technique.
Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show on search results because crawlers get stuck.
No they wouldn’t. I’m guessing you’re not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all; you’ll never hear about anything other than Amazon or Walmart dot com again.
Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.
I was there. People carried physical notepads with URLs, shared them on BBSes or other forums. It was wild.
No. Only a very select group of people joined message boards. The rest were on AOL, CompuServe, or not at all. You’re taking a very select group of people and expecting the Facebook and iPad generations to be able to do that. Not going to happen. I also noticed some people below talking about things like GeoCities and other minor free hosting and publishing sites that are all gone now. They’re not coming back.
Yep, those things were so rarely used … sure. You are forgetting that 99% of people knew nothing about computers when this stuff came out, but people made themselves learn. It’s like comparing Reddit and Twitter to a federated alternative.
Also, something like geocities could easily make a comeback if the damn corporations would stop throwing dozens of pop-ups, banners, and sidescrolls on everything.
And 99% of people today STILL don’t know anything about computers. Go ask those same people simply “what is a file?”; they won’t know. Lmao. GeoCities could come back if corporations stop advertising? Do you even hear yourself?
That would be terrible. I have friends, but they mostly send uninteresting stuff.
Fine then, more cat pictures for me.
There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sorts of debates.
Bought a corded leaf blower at the thrift store today. Three brands I recognized, same price, and I had no idea what to get. And if I had had the opportunity to ask friends or even research online, I’d probably have walked away more confused. For example: one was a Craftsman. “Before, after, or in between them going to shit?”
Got off topic into real-world goods. Anyway, here’s my word-of-mouth for today: Free, online Photoshop. If I had money to blow, I’d drop the $5/mo. for the “premium” service just to encourage them. (No, you’re not missing a thing using it free.)
Removed by mod
This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.
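If you wanted to implement that heuristic, the naive version really is only a few lines; the threshold and the domain-extraction choice below are placeholders of mine, not anyone’s actual pipeline.

```python
from collections import Counter
from urllib.parse import urlparse

MAX_PAGES_PER_DOMAIN = 100_000   # the arbitrary "insanely high number"


def filter_scraped(pages):
    """pages: iterable of (url, text) pairs from a crawl.

    Drop every domain that yielded an implausible number of distinct pages,
    on the theory that it is a tarpit rather than a real site.
    """
    pages = list(pages)
    counts = Counter(urlparse(url).netloc for url, _ in pages)
    return [(url, text) for url, text in pages
            if counts[urlparse(url).netloc] <= MAX_PAGES_PER_DOMAIN]
```

The obvious catch is that plenty of legitimate sites (wikis, forums, archives) have far more pages than any tarpit, so a raw page count can’t be the whole story, and the tarpit only has to be cheaper to generate than it is to filter.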
It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.
Wait… I just had an idea.
Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)
Such a stupid title, great software!
Nice one, but Cloudflare does it too.
Why are the photos all ugly biological things?
The reason you’re seeing biological photos in AI articles lately is tied to a recent but underreported breakthrough in processor technology: bio-silicon hybrids. They’re early-stage biological processors that integrate living neural tissue with traditional silicon circuits. Several research labs, including one backed by DARPA and the University of Kyoto, have successfully grown functional neuron clusters that can perform pattern recognition tasks with far less energy than conventional chips.
The biological cells react more collectively and with a higher success rate than the current systems. Think of it kind of how a computer itself is fast but parts can wear out (water cooled tubes or fan), whereas the biological cell systems will collectively react and if a few cells die, they may just create more. It’s really a crazy complex and efficient breakthrough.
The images of brains, neurons, or other organic forms aren’t just symbolic anymore—they’re literal. These bio-processors are being tested for edge computing, adaptive learning, and even ethical decision modeling.
The actual reason is that the use of biological photos is a design choice meant to visually bridge artificial intelligence and human intelligence. These random biological photos help to convey the idea that AI is inspired by or interacts with human cognition, emotions, or biology. It’s also a marketing tactic: people are more likely to engage with content that includes familiar, human-centered visuals. Though it doesn’t always reflect the technical content, it does help to make abstract or complex topics more relatable to a larger audience.
I see what you-gpt did there
I’m going to assume half of that comment is wrong
Some of it, yeah. I typed up the middle, then ran a separate prompt, and added to it. If you can see the edits, the original was organic and then the edit was adding to it
That’s… actually quite terrifying.
The sci-fi concern over whether computers could ever be truly “alive” becomes a lot more tangible when literal living biological systems are implemented.
Some details. One of the major players doing the tarpit strategy is Cloudflare. They’re a giant in networking and infrastructure, and they use AI (more traditional, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.
Making nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they’re a cheap way to get training data. If you make a non-zero portion of training data poisonous, you have to spend ever more resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.
So, in short, the tarpit with garbage data actually decreases the average value of scraped data for bots that ignore do-not-scrape instructions.
The fact the internet runs on lava lamps makes me so happy.
Btw, how about limiting clicks per second/minute, against distributed scraping? A user who clicks more than 3 links per second is not a person. Neither are they if they do 50 in a minute. And if they are then blocked and switch to the next one, the bandwidth they can occupy is still limited.
They make one request per IP. Rate limit per IP does nothing.
Ah, one request, then the next IP doing one, and so on, rotating? I mean, they don’t have unlimited addresses. Is there no way to group them together into an observable group, to set quotas? I mean, for the purpose of defense against AI DDoS, not just for hurting them.
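The usual partial answer is to stop counting per address and start counting per network: group requests by /24 subnet (or by the announcing ASN, if you have that data) and apply the quota to the group. A rough sketch, with made-up thresholds:

```python
import ipaddress
import time
from collections import defaultdict

# Illustrative numbers, not tuned values.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_GROUP = 300


class SubnetLimiter:
    """Rate-limit by /24 (IPv4) or /48 (IPv6) instead of by single address,
    so clients rotating through neighbouring addresses still share a quota."""

    def __init__(self):
        self.hits = defaultdict(list)   # group -> list of request timestamps

    def group(self, ip: str) -> str:
        addr = ipaddress.ip_address(ip)
        prefix = 24 if addr.version == 4 else 48
        return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        key = self.group(ip)
        window = [t for t in self.hits[key] if now - t < WINDOW_SECONDS]
        window.append(now)
        self.hits[key] = window
        return len(window) <= MAX_REQUESTS_PER_GROUP
```

Against the ScummVM-style attack (35,000 residential IPs scattered all over the world), even this only helps so much, since each /24 may only ever send a handful of requests. That’s exactly why proof-of-work gates like Anubis are attractive: they don’t need to identify the attacker at all.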
I click links frequently and I’m not a web crawler. Example: get search results, open several likely looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn’t limited to one person’s bias and/or mistakes. It’s not just search results; I do this on Lemmy too, and when I’m shopping.
Ok, same, make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some webshops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.
Would you mind explaining your workflow with these tree style tabs? I am having a hard time picturing how they are used in practice and what benefits they bring.
deleted by creator
I’ve suggested things like this before. Scrapers grab data to train their models. So feed them poison.
Things like counterfactual information, distorted images/audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands, etc. And choose esoteric subjects to amplify the damage caused to the AI.
You could even have one AI generate the garbage that another ingests and shit out some new links every night until there is an entire corpus of trash for any scraper willing to take it all in. You can then try querying AIs about some of the booby traps and see if it elicits a response - then you could even sue the company stealing content or publicly shame them.
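A minimal version of that “one AI feeds another” idea doesn’t even need an AI: a plain Markov chain over your own text (the technique mentioned further up the thread) already produces endless grammatical-looking sludge with fresh links on every page. A sketch, with the details being my own:

```python
import random
from collections import defaultdict


def build_chain(text: str, order: int = 2):
    """Build a simple word-level Markov chain from some seed text."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain


def babble(chain, order: int = 2, n_words: int = 200) -> str:
    """Generate plausible-looking nonsense by walking the chain."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(n_words):
        followers = chain.get(state)
        if not followers:                        # dead end: jump somewhere new
            state = random.choice(list(chain.keys()))
            followers = chain[state]
        out.append(random.choice(followers))
        state = tuple(out[-order:])
    return " ".join(out)


def fake_page(chain, page_id: int) -> str:
    """One endless-maze page: some babble plus links to more pages just like it."""
    links = "".join(f'<a href="/maze/{page_id * 7 + k}.html">more</a> ' for k in range(5))
    return f"<html><body><p>{babble(chain)}</p>{links}</body></html>"


if __name__ == "__main__":
    seed = open("my_own_articles.txt").read()   # hypothetical seed corpus
    print(fake_page(build_chain(seed), page_id=1))
```

The “test for booby traps later” part is the nice bit: if you seed the maze with a few unique made-up facts, you can simply ask a model about them afterwards and see whether your pages ended up in its training data.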
Kind of reminds me of paper towns in map making.
When I was a kid I thought computers would be useful.
They are. It’s important to remember that in a capitalist society, what is useful and efficient is not the same as profitable.