How Twitter Can Combat Harassment in Three Easy Steps

Dear Twitter:

[Image: a language hotspot map using color to designate threat level. Shouldn’t hotspots always receive the most attention?]

I can see that you’re overwhelmed with the idea of policing your service. It’s been obvious for a while, but when you start issuing service ticket numbers for complaints without having any way for people to check the status of those tickets, then you’re shouting it from the rooftops. So here’s a little suggestion about how to make your own lives easier while still cleaning up your service to keep people on it long enough to see your sponsored tweets. And it will only take three easy steps.

  1. You already have an algorithm that detects spikes in the use of phrases or hashtags. It’s what you use to create your trending topics. Use that same algorithm to detect when a person’s mentions spike. Sure, it will take a little fine-tuning, because these spikes are smaller, but it will be worth it. (A rough sketch of what that detection might look like follows this list.)
  2. Why? Because your next step is to set someone in Twitter support on the job of looking through that person’s mentions. Again, this will be an easy job, because all they need to do is determine whether those mentions are full of something benign, like congratulations, or full of the kind of toxic crap my friend Melody Hensley is still receiving two days after I documented an onslaught of abuse. The difference is easy to spot. Go take a look.
  3. Once you’ve identified a thread with a high degree of abuse, go through and clean it out. Ban your repeat offenders and accounts freshly created for the purpose of abusing someone. Suspend and/or warn your first-timers depending on their degree of depravity, and mark their accounts as having been warned so you know when you see them again.
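
Here is a rough sketch of step 1, assuming hourly mention counts per account are already available; the window, threshold, and minimum volume are invented and would need exactly the fine-tuning mentioned above:

```python
# Flag hours where an account's mentions spike far above its baseline.
from collections import deque
from statistics import mean, stdev

def spike_detector(hourly_counts, window=24, z_threshold=4.0, min_mentions=50):
    """Yield indices of hours whose mention count spikes above the
    trailing baseline of the previous `window` hours."""
    history = deque(maxlen=window)
    for i, count in enumerate(hourly_counts):
        if len(history) == window:
            baseline = mean(history)
            spread = stdev(history) or 1.0  # avoid division by zero
            # Require both a statistical spike and enough absolute
            # volume to be worth a human reviewer's time.
            if (count - baseline) / spread > z_threshold and count >= min_mentions:
                yield i
        history.append(count)

# A quiet account that suddenly gets dogpiled:
counts = [2, 3, 1, 4, 2, 0, 3, 2, 1, 2, 3, 1,
          2, 4, 2, 3, 1, 2, 3, 2, 1, 3, 2, 2,
          180, 240, 310]
print(list(spike_detector(counts)))  # -> [24, 25, 26]
```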

That’s it. You’re done. You’ve found the people who exist to make your service hell for other people, and you’ve dealt with them en masse. You’ve gone to the trouble spots and dealt with the troublemakers. You haven’t had to go through and individually look at tickets for each one and individually look at all the tweets involved. Sure, you’ll still get tickets on smaller situations, but there will be a whole lot fewer of them.

No need to thank me or credit me for making your jobs easier or your service more user-friendly. Just take the advice and make it happen. Clean the place up.

Image: “Language Hotspots” by whiteafrican.


18 thoughts on “How Twitter Can Combat Harassment in Three Easy Steps”

  1.

    This is, of course, the obvious, logical, and most efficacious solution to the problem. It requires the least handwringing and extra noise from Twitter staff, and would have the effect of using the existing dynamics of Twitter to find the problem spots. And it would send a clear message to everyone that Twitter will not tolerate abusers.

    So, we can rest assured that it will not be implemented.

  2.

    It’s easier than you propose: simply run inputs through a Bayesian classifier trained with keyword codices for various forms of abuse. If it’s trained on 3-word sets, it’ll even accurately distinguish between “protest against rape” and “you should be raped”. Once the training sets are built, the only way to avoid matching them is to use whole new forms and vocabularies of invective. It’s not a hard problem, really. It’s what computers are good at. I always make a combined laughing/groaning sound when someone talks about the kind of genius programmers who are building today’s social media sites. Uh, yeah, buncha lame amateurs, really.
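
    A toy version of what I mean, assuming scikit-learn and a few made-up training examples (a real system would need a large curated corpus):

    ```python
    # Naive Bayes over word n-grams up to 3-word sets, so that
    # "protest against rape" and "you should be raped" separate cleanly.
    # The training data below is a placeholder, not a real codex.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "join the protest against rape culture",
        "thank you for speaking out",
        "you should be raped",
        "you are disgusting shut up",
    ]
    train_labels = ["ok", "ok", "abuse", "abuse"]

    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 3)),  # unigrams through trigrams
        MultinomialNB(),
    )
    clf.fit(train_texts, train_labels)
    print(clf.predict(["protest against rape"]))  # -> ['ok']
    ```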

  4.

    PS – the kind of system I described above is 99.9% accurate. When something gets through, it gets added to the training set. Over a very short period of time, the system becomes very difficult to fool. If all they did was give people a pop-up that read:
    “You appear to be about to post abuse/stalking/hate speech in your comment. Your comment will be queued for review. If you wish to proceed after this warning and an administrator later determines your comment was inappropriate, your account will be suspended. Otherwise, you can queue this comment for administrator approval and will suffer no consequences. Do you wish to continue? (post/queue/cancel)”
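
    Roughly, that gate would look like this (`classifier` is assumed to be something like the n-gram model I sketched above, and `ask_user` is a stand-in for the actual warning dialog):

    ```python
    # Sketch of the post/queue/cancel gate described above.
    def gate_submission(text, classifier, ask_user):
        if classifier.predict([text])[0] != "abuse":
            return {"action": "posted", "text": text}

        # The classifier flagged it: warn the user before accepting.
        choice = ask_user("post", "queue", "cancel")

        if choice == "post":
            # Posting past the warning is what carries consequences
            # if a human reviewer later agrees with the classifier.
            return {"action": "posted", "text": text, "flagged": True}
        if choice == "queue":
            return {"action": "queued_for_review", "text": text}
        return {"action": "cancelled"}
    ```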

  7.

    Or they don’t want to “make a statement”.

    Wow, that “is still receiving” link is… blech. I read through a few, then got a “3 new” flashy message at the top after like 30 seconds, clicked it and more bile came in (I don’t use Twitter so excuse my twitless lingo).

    What. In. The. Crap. Scrolled way down just seeing if it was from lots of different peeps or all the same few (lots of different), and realized I was only 5 hours back.

    Damn, internet. Fuuuuuuuuuuuuuuuuuck.

    [Miles suffers -10 faith in humanity.]

  8.

    I think the thing to consider is that if Twitter processes 400 million messages a day, any automated classifier needs an essentially zero false-positive rate; otherwise, in absolute numbers, the erroneous reports would still vastly outweigh the true detections.
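
    To put numbers on that (400 million per day as above; the abuse base rate is a pure assumption for illustration):

    ```python
    # Even a "99.9% accurate" classifier floods reviewers at this scale.
    daily_tweets = 400_000_000
    false_positive_rate = 0.001     # the 0.1% it gets wrong
    abuse_base_rate = 0.0001        # assumed: 1 abusive tweet in 10,000

    false_flags = daily_tweets * (1 - abuse_base_rate) * false_positive_rate
    true_hits = daily_tweets * abuse_base_rate   # even with perfect recall

    print(f"{false_flags:,.0f} false flags/day vs {true_hits:,.0f} real ones")
    # -> 399,960 false flags/day vs 40,000 real ones
    ```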

    Machine learning approaches could be used to prioritise report tickets, however, and the moderation process could be streamlined.

  9.

    While I don’t think a fully tech solution would be nearly as easy or as powerful as Marcus describes – Bayesian systems are powerful, but not THAT powerful, and you’d still need somebody to maintain the rules – I think it could easily identify the most abusive tweets and tweeters, and would still have some value.
    And as long as Twitter makes it faster to create a new account than it does to ban one, they’re going to have a problem with trolls.
    And, of course, Twitter has made it clear that they don’t care about fixing it.

  11.

    Such algorithms could also be improved by:

    Monitoring how the person is responding to the spike. If there’s lots of blocking, and little favouriting, that suggests something untoward’s going on.

    Using network analysis to detect how new people are being drawn into the conversation. If a large fraction of the spike is coming from people who have no apparent link to the target, that suggests that recruitment is happening elsewhere. Similarly, if blocked individuals are drawing previously unconnected individuals into the conversation, that suggests harassment.

    Looking for an unusually high fraction of accounts from the same IP address, especially if more than one gets blocked. In particular, look for patterns where a user gets blocked, and then a new account is created from the same IP, which immediately starts Tweeting to the target.
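
    In rough pseudo-code, with field names and thresholds invented for illustration (the real inputs would come from Twitter’s internal data):

    ```python
    # The three signals above, as flags over a window of activity
    # around a single target account.
    from dataclasses import dataclass

    @dataclass
    class MentionWindow:
        mentions: int                     # tweets mentioning the target
        blocks_by_target: int             # blocks the target issued
        favourites_by_target: int         # favourites the target issued
        unconnected_senders: int          # senders with no prior link to target
        new_accounts_on_blocked_ips: int  # fresh accounts from blocked users' IPs

    def harassment_signals(w: MentionWindow) -> dict:
        return {
            # Lots of blocking and little favouriting: something untoward.
            "defensive_reaction": w.blocks_by_target > 10
                and w.favourites_by_target < w.blocks_by_target / 5,
            # Most of the spike comes from strangers: recruitment elsewhere.
            "outside_recruitment": w.mentions > 0
                and w.unconnected_senders / w.mentions > 0.6,
            # A blocked user's IP immediately spawning a fresh account.
            "block_evasion": w.new_accounts_on_blocked_ips > 0,
        }
    ```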

  12.

    Unfortunately all it will take is someone to cry “But… calm, rational debate! Polite disagreement!” and Twitter won’t touch it with a 10 foot pole out of fear of alienating part of their user base. I think it’d take a huge amount of public pressure to see any meaningful change, a la Reddit.

  13.

    I think Marcus Ranum is once again making a vast overstatement. From reading down her Twitter feed, a vast amount of the harassment against Melody is person- and situation-specific: tweets that are often legitimate in vocabulary and not easily distinguishable by a word processor.

    But then MR is the uberprogrammer and probably produces several dozen such algorithms each day just sitting on the loo, so I guess he’s right. Meh.

    How do you computer-distinguish use of feminist as an insult?

  14.

    If I were Twitter, I’d be throwing a range of things at the problem, from bayesian classifiers for the more overt stuff, to some network analysis, various sorts of spikes in usage (hashtags, IPs, mentions, straight non-hashtag word usage), current complaint process — pretty much everything mentioned above. Then, a ruleset for nominating threads for review — the more categories you hit, the higher the spike, the more likely a trained, professional administrator is to review the particular case.
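
    As a sketch, that nomination ruleset could be a weighted score feeding a review queue (the weights and detector names here are illustrative, not a real policy):

    ```python
    # The more categories a thread hits, and the harder it hits them,
    # the sooner a trained reviewer sees it.
    import heapq

    DETECTOR_WEIGHTS = {
        "bayes_abuse_score": 3.0,   # overt abusive language
        "mention_spike": 2.0,       # sudden flood of mentions
        "network_anomaly": 2.0,     # unconnected accounts piling on
        "ip_clustering": 2.5,       # many accounts behind few IPs
        "user_reports": 4.0,        # the existing complaint process
    }

    def nominate_for_review(threads, top_n=100):
        """threads: iterable of (thread_id, {detector_name: score in 0..1})."""
        scored = (
            (sum(DETECTOR_WEIGHTS[d] * s for d, s in hits.items()), tid)
            for tid, hits in threads
        )
        return [tid for _score, tid in heapq.nlargest(top_n, scored)]
    ```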

    And that’s the other side, the training and professionalism, and I think that’s the true sticking point. Twitter would have to actually codify some sort of policy that would define harassment, and it would have to involve human judgment at some level — legitimate criticism and harassment can look similar to an uneducated/untrained eye.

    However, there are still issues that would need to be addressed — some harassment situations are as much due to pile-on effect as to the actual content, i.e., does the first tweeter have as much culpability as the fiftieth? (I’m not trying to say these are insurmountable issues — they’re actually easy to address if you’re willing to. They just require Twitter taking the issue seriously and developing community standards that are both enforceable and enforced.)

  15.

    I think Marcus Ranum is once again making a vast overstatement.

    Yeah, whatever. Hey, do you use Gmail? How do you like its spam blocking? Successfully processes a whole lotta messages, doesn’t it? Anyhow, the point I was trying to make is that it’s technologically quite feasible to take a huge chunk off the top, but Twitter isn’t going to want to do that, because they’d be throwing away a lot of traffic as “spam” and it would make everyone confront the fact that there’s spam on Twitter. No, really.

    If you have a classification system with a few humans in the feedback loop, it’s always more efficient than just a few humans. Spam classifiers run about 99.9% accurate, and they have the disadvantage of dealing with external data – Twitter’s data would be all internal, so they could also do cool stuff like tweak the weightings if an account was just created and/or it was the account’s first tweet. Twitter has never explored techniques for making accounts more valuable (and therefore more painful to lose if you break the TOS) or for determining whether accounts are likely sockpuppets based on IP address, creation time, posting profile, typing rate, and all kinds of other wonderful internal data that they have but appear not to be using.
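
    For instance (account fields and multipliers invented for illustration, not Twitter’s actual schema):

    ```python
    # Weight a tweet's abuse score by how suspicious the account looks.
    from datetime import datetime, timedelta

    def suspicion_multiplier(account, now=None):
        now = now or datetime.utcnow()
        multiplier = 1.0
        if now - account["created_at"] < timedelta(days=1):
            multiplier *= 2.0   # account created within the last day
        if account["tweet_count"] == 0:
            multiplier *= 1.5   # this would be its very first tweet
        if account["shares_ip_with_blocked_account"]:
            multiplier *= 3.0   # likely sockpuppet of a blocked user
        return multiplier
    ```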

    I’ve been working in IT security for 30 years and have consulted on some “big data” systems and fraud detection systems you probably use every week. But, sure, if it makes you happy to think I’m overreaching, be my guest.

  16.

    Marcus: My information’s pretty out of date, but the last time I heard news of the team dealing with the customer-facing part of Twitter… Let’s just say that a competition between their tech knowledge and that of MTGOX wouldn’t be an unfair fight.

  17.

    The trouble with humans, especially the humans already working for Twitter, is that review done by people steeped in rape culture, with little practice fighting it, is going to produce a lot of false negatives (and with even one or two monitors with bro-y sympathies, it’ll be like a few years ago when Amazon suddenly listed every sex-positive or LGBT-friendly book as porn).

  18.

    […] That’s way too much power for anybody to have, but particularly a corporation that can easily be influenced by money or political pressure. The bizarreness of their Terms of Service and content policies is a direct result of these kinds of influences. It’s why we end up with a social media universe where posting a dick picture will get your account banned… but conducting a long-running campaign of harassment and intimidation against a feminist activist is jus…. […]
