Toward a Taxonomy of Bad Moderation

Like many people, I dusted off my Mastodon account when Musk signed the agreement to buy Twitter. When the deal got close to completion, I asked Jason to make me an admin on our tiny the-orbit.net server and set about preparing for more traffic.

I started by reviewing the #fediblock hashtag—where the Fediverse communicates about bad actors and safety—and our own list of silenced and blocked servers. I began there because, well, we all have plenty of experience being harassed around here. I didn’t have the power to keep harassers off the technology, but I did have the tools to take care of the most obvious threats.

I was working on a way to systematize our reasons for moderating at the server level when our own server ended, not in fire but in ice, during an upgrade. Since Jason was already concerned about having enough time for it, I suggested he let it go. He told our handful of sporadic users it was time to find a new instance.

I’ve still been thinking about the system, though, partly because I do that and partly because I’m watching the discussions about moderation on Mastodon closely. Technocrats are talking to social engineers and activists are talking to people targeted for harassment are talking to scholars are talking to people who’ve never had to think about moderation until today.

It’s messy and made messier by:

  • A lack of common purpose in using social media. This is true for any service, but it’s particularly obvious in the Fediverse, where individual servers are often organized around specific purposes.
  • A wave of new servers with new administrators and new moderators, many of whom are not aware of the long arguments about moderation and whose resource materials are mostly technical.
  • Rapid growth reflected in technological chaos that makes following current events in the Fediverse more difficult.
  • Disorganized social networks that haven’t resettled after service and server moves, such that many of us have been talking into the ether instead of discussing moderation with people who have experience in it.
  • Targeted harassment of admins and moderators who openly share their block lists and reasons for blocking.
  • A history of abuse of moderation tools in the Fediverse.

Much of the current discussion is about how to consolidate knowledge about servers with bad or nonexistent moderation, so each individual moderator doesn’t have to learn separately and may be able to automate some decisions. I’ve also seen alarm at the idea, coming from people who see moderation decisions they don’t understand or wouldn’t choose.
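
To make that consolidation concrete, here is a minimal sketch of what sharing and partially automating blocklist decisions might look like: merge lists shared by other admins and only flag domains that several independent lists agree on, leaving everything else for a human. The file names, the “domain” column, and the agreement threshold are assumptions for illustration, not an existing standard.

```python
# Minimal sketch: merge shared blocklists and flag only the domains that
# multiple independent lists agree on. File names, the "domain" column,
# and the threshold are assumptions for illustration, not a standard.
import csv
from collections import Counter

SOURCES = ["blocklist_a.csv", "blocklist_b.csv", "blocklist_c.csv"]  # hypothetical
AGREEMENT_THRESHOLD = 2  # act automatically only when two or more lists agree

def load_domains(path):
    """Read one shared blocklist, assuming it has a 'domain' column."""
    with open(path, newline="") as f:
        return {
            row["domain"].strip().lower()
            for row in csv.DictReader(f)
            if row.get("domain")
        }

counts = Counter()
for source in SOURCES:
    counts.update(load_domains(source))

auto_candidates = sorted(d for d, n in counts.items() if n >= AGREEMENT_THRESHOLD)
manual_review = sorted(d for d, n in counts.items() if n < AGREEMENT_THRESHOLD)

print(f"{len(auto_candidates)} domains meet the agreement threshold")
print(f"{len(manual_review)} domains still need a human decision")
```

The point isn’t this particular code; it’s that an agreement threshold lets you automate the obvious calls while keeping the judgment calls, the ones people are alarmed about, in human hands.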

I believe that grouping the types of bad moderation you’re likely to encounter by their consequences and the actions needed to mitigate them may help make such moderation feasible. This list is roughly in order of priority, with the most pressing issues first.

Illegal Content

There are servers that host content that may get you arrested if it ends up on your server. Unless you’re making a stand by practicing civil disobedience, you don’t have much choice about these. If you are taking that stand, you should expect the vast majority of the Fediverse to lock you out.

Eliminationist

These are servers whose users organize around the people they think shouldn’t exist or shouldn’t have rights. These are the neo-Nazis, the ultranationalists, the religious nationalists, the people trying to deny health care and public bathrooms to trans people, the people who call for violence against abortion providers, other stochastic terrorists, and outright terrorists.

There is no reason for any decent server to give eliminationists access to its users. They will only use that access to recruit and to attack; that’s what they organized to do. Suspend them.
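
For what “suspend them” looks like mechanically, here is a minimal sketch against Mastodon’s admin API, assuming a Mastodon 4.x instance and an access token with the admin:write:domain_blocks scope. The instance URL, token, domain, and comment are placeholders.

```python
# Minimal sketch: create a domain block with severity "suspend" through
# Mastodon's admin API. Assumes Mastodon 4.x and a token with the
# admin:write:domain_blocks scope; URL, token, and domain are placeholders.
import requests

INSTANCE = "https://example.social"   # your instance (placeholder)
TOKEN = "YOUR_ADMIN_TOKEN"            # admin-scoped access token (placeholder)

def suspend_domain(domain, public_comment=""):
    """Suspend a remote domain so its content and follows no longer reach your users."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": domain,
            "severity": "suspend",
            "public_comment": public_comment,
        },
    )
    resp.raise_for_status()
    return resp.json()

suspend_domain("eliminationists.example", public_comment="Organized hate")
```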

Freeze Peach

These are servers that tell you up front they are organized around “free speech” or that anything goes on the server as long as it isn’t illegal or porn. Some of their administrators may even believe that, though I’ve seen several such servers with neo-Nazi administrators. In practice, even if these servers didn’t set out to be eliminationist or harassing instances, they’re where bad actors collect when they get kicked off well-moderated servers.

Pleroma is an alternative to Mastodon for running instances in the Fediverse. The development history of Pleroma is such that the software is used by numerous eliminationist and harassing instances. The association is frequent enough that “Pleroma instance” has become a shorthand for a freeze peach instance.

If you see a very new server that looks decent but has “free speech” rules, and you’re feeling generous, you might take the time to suggest they adopt a real code of conduct. Otherwise, suspend them. The people who are happy in the cesspits these instances become are going to cause you problems, and freeze peach mods aren’t going to help.

Spam

Not all promotion is spam. “Spam” here specifically means the more intrusive kinds of communication: tagging people to get their attention, or misusing hashtags or groups for promotion. As with other types of bad behavior, once a server is known not to moderate spam coming from its users, it will probably be swamped with users who spam.

NSFW

The internet is for porn and other sex work, but you usually have to go looking for it, because the internet is also for business and children. Your instance may not allow children, but if it’s organized around business interests or activism, you’ll want to give some real thought to your policies around NSFW materials. These materials have a long history of being used in sexual harassment to create and signify hostile environments. On the other hand, many people experience censorship because some aspect of their identity is sexualized and declared off limits.

There are lots of tools to help you limit how your users interact with NSFW materials on other servers and how that affects other users’ timelines. The only broad guidelines are to be thoughtful about your choices and transparent with your users so they can find an instance that meets their needs.
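
As one example of those tools, here is a sketch of the middle ground between doing nothing and a full suspension, again assuming Mastodon’s admin API and placeholder credentials: silence the domain and reject its media, so posts remain reachable for people who follow those accounts but the images never land in your media storage or public timelines.

```python
# Minimal sketch: a limit rather than a suspension, assuming Mastodon's
# admin API. "silence" keeps the domain off public timelines; reject_media
# keeps its attachments out of your media storage. Placeholders throughout.
import requests

INSTANCE = "https://example.social"   # your instance (placeholder)
TOKEN = "YOUR_ADMIN_TOKEN"            # admin-scoped access token (placeholder)

def limit_domain(domain, public_comment=""):
    """Silence a remote domain and refuse to fetch its media."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/admin/domain_blocks",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "domain": domain,
            "severity": "silence",
            "reject_media": "true",
            "public_comment": public_comment,
        },
    )
    resp.raise_for_status()
    return resp.json()

limit_domain("nsfw.example", public_comment="No content warnings on NSFW media")
```

Whether that middle ground fits your instance depends entirely on what it’s organized around, which is where the transparency above comes in.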

Disinformation

These are collections of users who post antivax, crypto, climate, political, or other types of misinformation frequently enough that the server itself becomes a significant source of bad information, whether as policy or through unwillingness to moderate it. How you handle one of these servers will likely depend on the type of disinformation. Individual servers may choose to keep, for example, flat Earthers around as entertainment but block political disinformation as a threat to democracy.

Good in Theory, Bad in Practice

Having policies is easy. Enforcing them is often a miserable slog. You see things you can’t unsee. Bad actors test your boundaries regularly. You hear from lovely, charming people mostly when they’re upset. Written rules collide with unwritten rules. Competing access needs are real. Moderators aren’t going to get everything right.

That said, there comes a point where repeated mistakes suggest an underlying problem exists and is likely to lead to more mistakes. Right now, instances are starting up or growing without planning ahead for the moderation that growth will require. Then they’re making major, high-profile mistakes. It becomes reasonable for other instances to decide they’re bad at moderating and are going to stay bad without major course corrections.

As a moderator, you can offer help, but you have to choose how much of another instance’s moderation you’re willing to take on and for how long. If they don’t do that work themselves, it falls on you. Defederation and the threat of defederation are your tools for setting that limit.

Critical-Issue Fail

As mentioned previously, competing access needs are real. They also don’t only apply in a disability framework or even among people with different goals. For example, activists who do policy work need a degree of access to government entities, while activists who do community care work on the same topics may need to keep themselves and those they serve far from the eye of those same entities because current policy hurts them.

Both groups may do important work benefiting the same group of people, but they’re unlikely to want the same federation policies. Neither group is wrong to federate or defederate based on its needs, and both should be able to talk about their decisions and the behavior on a server that led to them.

That’s as far as I’ve gotten in trying to group moderation issues that may lead one server to defederate from another. What’s missing? What’s redundant? Is the framework useful?

