Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material, according to researchers from Stanford University. In just two days, researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
While the study itself is a good read and I agree with its conclusions (Mastodon and decentralized social media need better moderation tools), it's hard not to read the Verge headline as misleading. One of the study authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even calls out that the blunt instrument most Mastodon admins use is to blanket-defederate from instances hosted in Japan, because Japan's laws around CSAM are more lax than the US's. But the headline seems to imply that there's a giant seedy underbelly to places like mastodon.social[1] that are rife with abuse material. I suppose that's a marketing problem of federated software in general.
[1] There is a seedy underbelly of mainstream Mastodon instances, but it's mostly people telling you how you're supposed to use Mastodon if you previously used Twitter.
The person outright rejects defederation as a solution when it IS the solution: if an instance is in favor of this kind of thing, you don't want to federate with it, period.
I also find the number of calls for a “Fediverse police” in that thread worrying. Scanning every image that gets uploaded to your instance with a third-party tool is an issue too: on one side, you definitely don't want this kind of shit to even touch your servers; on the other, you don't want anybody dictating that, say, anti-union or similar memes get flagged and denounced, with the person who made them marked, targeted, and receiving a nice Pinkerton visit.
This is a complicated problem.
Edit: I see somebody suggested checking the observations against the common and widely used Mastodon blocklists, to see if the shit is confined to defederated instances, and the author said this was something they wanted to check, so I hope there's a follow-up.
Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us: Facebook actually open-sourced some efficient hashing algorithms for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting an image hash database to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public service for use by the ecosystem.
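To make that concrete, here's a minimal sketch of what matching against such a database could look like, in the style of Meta's open-sourced PDQ perceptual hash (which produces 256-bit hashes compared by Hamming distance). Computing the hash itself would need a binding to the actual PDQ library from their ThreatExchange repo; the threshold and the shape of the database here are assumptions for illustration:

```python
# Minimal sketch: match a perceptual hash against a database of known
# hashes via Hamming distance, PDQ-style. The hash database itself
# would have to come from a clearinghouse like NCMEC; here it's just
# a set of 256-bit integers.
MATCH_THRESHOLD = 31  # max differing bits still treated as a match (assumed)

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return (a ^ b).bit_count()

def is_known_match(candidate_hash: int, hash_db: set[int]) -> bool:
    """True if candidate_hash is near any known hash in the database."""
    return any(hamming_distance(candidate_hash, known) <= MATCH_THRESHOLD
               for known in hash_db)
```

The point of a perceptual hash (as opposed to, say, SHA-256) is that re-encoded or slightly cropped copies of the same image still land within a few bits of each other, which is why the comparison is a distance check rather than exact equality.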
That'd be useless though, because first, it'd probably be opt-in via configuration settings, and even if it wasn't, people would just fork and modify the codebase or simply switch to another ActivityPub implementation.
We're not gonna fix society using tech unless we're all hooked up to some all-knowing AI under government control.
> That'd be useless though, because first, it'd probably be opt-in via configuration settings, and even if it wasn't, people would just fork and modify the codebase or simply switch to another ActivityPub implementation.
No it wouldn't, because it'd still be significantly easier for instances to deal with CSAM content with this functionality built into the platforms. And I highly doubt there's going to be a mass migration from any Fediverse platform that implements such a feature (though honestly, I'd be down to defederate from any instance that takes serious issue with this).
And the instances that want to engage with that material would all opt for the fork and be done with it. That's all I meant.
Right, and the rest of us would be able to more effectively filter it out from our instances.
Of course, I didn’t say that though.
I'm not fully sure about the logic here, or the conclusions it perhaps hints at. The internet itself is a network with major CSAM problems (so maybe we shouldn't use it?).
It doesn’t help to bring whataboutism into this discussion. This is a known problem with the open nature of federation. So is bigotry and hate speech. To address these problems, it’s important to first acknowledge that they exist.
Also, since the fediverse is still in its early stages, now is the time to experiment with mechanisms to control these problems. Saying that the problem is innate to networks only sweeps it under the rug. At some point there will be a watershed event that'll force these conversations anyway.
The challenge is in moderating such content without being ham-fisted. I must admit I have absolutely no idea how; this is just my read of the situation.
Maybe my comment wasn't clear or you misread it. It wasn't meant to be sarcastic. Obviously there's a problem, and we want (not just need) to do something about it. But it's also important to be careful about how the problem is presented (and manipulated) and about how fingers are pointed. One can't point a finger at “Mastodon” the same way one could point it at “Twitter”. Doing so has some similarities to pointing a finger at the HTTP protocol.
Edit: see for instance the comment by @while1malloc0@beehaw.org on this post.
Understood, thanks. Yes I did misread it as sarcasm. Thanks for clearing that up :)
However, I disagree with @shiri@foggyminds.com in that Lemmy, and the Fediverse, are treated as monolithic entities, not just by people from the outside but even by their own users. There are people here saying how they love the community on Lemmy, for example. It's just the way people group things, and no amount of technical explanation will prevent this semantic grouping.
For example, the person who was arrested for CSAM recently was running a Tor exit node, but that didn't help his case. As shiri pointed out, defederation works for black-and-white cases. But what about cases of disagreement, where things are a bit more gray? Like hard political viewpoints? We've already seen the open internet devolve into bubbles with no productive discourse. Federation has a unique opportunity to solve that problem, starting from scratch and learning from previous mistakes. Defederation is not the solution; it isn't granular enough to be one.
Another problem with defederation is that it is after-the-fact and depends on moderators and admins. There will inevitably be a backlog (as pointed out in the article). With enough community reports, could there be a holding-cell-style mechanism in federated networks? I think there is space to explore this deeper, and the study does the useful job of pointing out liabilities in the current state of the art.
Another way to look at it is: How would you solve this problem with email?
The reality is, there is no way to solve the problem of moderation across disparate servers without some unified point of contact. With any form of federation, your options are:
- Close-source the protocol, API, and implementation, and have the creator be the final arbiter, either by proxy of code or by having a back door
- Have every instance agree to a singular set of rules/admins
- Don't, and just let the instances decide where to draw lines
The reality is, any federated system is gonna have these issues, and as long as the protocol is open, anyone can implement any instance on top of it they want. It would be wonderful to solve this issue “properly”, but it’s like dealing with encryption. You can’t force bad people to play by the rules, and any attempt to do so breaks the fundamental purpose of these systems.
I share and promote this attitude. If I must be honest it feels a little hopeless: it seems that since the 1970s or 1980s humanity has been going down the drain. I fear “fediverse wars”. It’s 2023 and we basically have a World War III going on, illiteracy and misinformation steadily increase, corporations play the role of governments, science and scientific truth have become anti-Galilean based on “authorities” and majority votes, and natural stupidity is used to train artificial intelligence. I just feel sad.
But I don’t mean to be defeatist. No matter the chances we can fight for what’s right.
> The internet itself is a network with major CSAM problems
Is it, though?
Over the last year, I've seen several reports on TV of IRL group abuse of children, by other children... which left everyone scratching their heads as to what to do, since none of the perpetrators are old enough to be held criminally liable.
During that same time, I’ve seen exactly 0 (zero) instances of CSAM on the Internet.
Sounds to me like IRL has a major CSAM (and general sexual abuse) problem.
Pedos who get banned from one platform turn to another platform that hasn't banned them yet.
In other news: the sky is blue
Meanwhile, white knights propose ways to control everyone, everywhere, all the time, in the name of catching pedos who will just hop to the next platform (or already have).
> The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.
I agree, but who’s going to pay for it? Those aren’t just freely available additions to any application that you only need to toggle on.
> I agree, but who's going to pay for it?
How about the police/the taxpayer?
If university researchers can find the stuff, then police can find it too, and they can flag the user (or even the entire instance) so that the content can be removed from the fediverse, while simultaneously requesting whatever data is available to try to catch the criminals.
One way to do this is to block hashes. This is a slippery slope, though, because it could be used maliciously. The only way to do this while protecting freedom of information is to make it fully open source.
Image hashes? That could work. It could be a simple system like uBlock, where you import filter lists to your instance, and they're easy to disable if their caretakers fill them with garbage data. A rough sketch of what that could look like is below.
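A minimal sketch of that idea, with made-up list URLs and a made-up list format (one hex hash per line, `#` for comments); a real system would need an agreed distribution format:

```python
# Sketch of uBlock-style hash filter lists for an instance: subscribe
# to lists by URL, and toggle any list off if its caretakers start
# filling it with garbage. URLs and format are placeholders.
import urllib.request

SUBSCRIPTIONS = {
    "https://filterlists.example/csam-hashes.txt": True,   # enabled
    "https://other-list.example/hashes.txt": False,        # disabled by admin
}

def load_blocked_hashes() -> set[int]:
    """Fetch all enabled lists and merge them into one set of hashes."""
    blocked: set[int] = set()
    for url, enabled in SUBSCRIPTIONS.items():
        if not enabled:
            continue
        with urllib.request.urlopen(url) as resp:
            for raw in resp.read().decode("utf-8").splitlines():
                line = raw.strip()
                if line and not line.startswith("#"):
                    blocked.add(int(line, 16))
    return blocked
```

The per-list toggle is what makes this uBlock-like: trust is granular and revocable per caretaker, rather than vested in a single central authority.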
The researchers can’t be taken seriously if they don’t acknowledge that you can’t force free software to do something you don’t want it to.
Even if we started way down in the stack and added a CSAM hash scanner to the Linux kernel, people would just fork the kernel and use their own build without it.
Same goes for nginx or any other web server or web proxy. Same goes for Tor. Same goes for Mastodon or any other Fedi/ActivityPub implementation.
It. Does. Not*. Work.
* Please, prove me wrong, I'm not all-knowing, but short of total surveillance, I see no technical solution to this.
Is there any way Mastodon stands out from other self-hosted websites? Would the CSAM be harder to distribute or easier to prosecute if they ran, say, a self-hosted bulletin board for it instead?
This is one of the things I don't like about the whole Twitter format: there's no moderator layer. Every Lemmy community must be created by a moderator, and that mod can be held accountable.
There isn't even a concept of communities on Twitter/Mastodon. Hashtags? Nobody owns monitoring them, and they can be freely improvised at will. It really is just the instance and its zillion users, with nothing in between. Imagine a Lemmy instance admin being responsible for all the moderation... it would never work.
Not surprised at all. This is a growing pain here too, because this was previously handled invisibly by platforms, and federation makes it fall to individual sysadmins and whoever they have on staff. The tools for this stuff are, in general, not here yet, and as people have noted, those tools come with potential conflicts with some of the principles of federation that can't be totally handwaved.
I browsed through an anime instance while trying to convince myself to like Mastodon, and unfortunately I believe I found some of this myself. I wasn't going to confirm it was real; I just reported it and closed out. But considering I've never seen such content on other websites, and this instance was rife with it, I don't find this article hard to believe at all.
Mastodon.art doesn’t.
And the beauty of Mastodon is you can block an entire instance, as can your admin, when something awful is posted. Mastodon even has a hashtag they use as an alert for this kind of thing. (#Fediblock)
Removed by mod
This is a whataboutist counterpoint at best. Universities and their researchers are not a monolith.
This is just bad press. The actual study is quite good and offers solid recommendations on how to improve moderation on the fediverse.
I think some of the problematic instances have been defederated; IIRC there's a large Japanese instance that was defederated a long time ago due to child abuse content. But still, since I've been seeing increases in hate speech and dog-whistling misogyny and homophobia on some instances, I wouldn't be surprised if CSAM has been traded under our noses.
The main issue is that, with so many users nowadays and small moderation teams, especially on the larger instances, it's hard to moderate and tackle CSAM problems effectively. I really wish larger instances would limit user registrations or start splitting off into smaller, manageable ones.
Also, since they are trading via certain hashtags, blocking those hashtags might not be a bad idea; see the sketch below.
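A hashtag block could be as simple as a denylist check at post-ingestion time. A minimal sketch (the tag names are placeholders, since the actual hashtags obviously shouldn't be reproduced here):

```python
# Minimal sketch of hashtag-based filtering at ingestion time.
# The blocklist entries are placeholders; a real list would be
# maintained by moderators or shared via channels like #Fediblock.
import re

BLOCKED_HASHTAGS = {"blockedtag1", "blockedtag2"}  # placeholder entries
HASHTAG_RE = re.compile(r"#(\w+)")

def should_reject(post_text: str) -> bool:
    """Reject a post if it carries any blocked hashtag (case-insensitive)."""
    tags = {tag.lower() for tag in HASHTAG_RE.findall(post_text)}
    return not tags.isdisjoint(BLOCKED_HASHTAGS)
```

The obvious caveat is that traders can trivially rotate to new hashtags, so a static denylist only raises friction; it doesn't replace hash matching or human moderation.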