Why Peeple Won't Save Us From Jerks

I wrote about Peeple again for the Daily Dot, but from a slightly different angle than my other piece.

Peeple, a new app for rating people as if they were restaurants on Yelp, has provoked so much criticism and anger online that its creators, Julia Cordray and Nicole McCullough, have shut down their Twitter and Facebook pages. The app, which is flawed in more ways than it isn’t, is still supposed to be released in November—despite the death threats that the creators have reportedly received.

I wish I could say that I’m stating the obvious, but sending McCullough and Cordray death threats is not OK. It’s never OK. And although some are gloating over the fact that getting harassed might teach them what the Internet is really like, I still wish that were a lesson they could’ve avoided.

One potential upside is that the app may be getting some changes. Although the creators are making bold statements like “We will not be shamed into submission,” it seems they may have listened to their critics at least a little and made the app opt-in. However, this was not framed as a change. The creators never said that they were responding to criticism and updating the app. In a LinkedIn post, they simply stated that it’s an opt-in app, even though a week ago they explicitly said that it wasn’t. Are they hoping we don’t notice?

Even if Peeple undergoes some much-needed changes, I still haven’t seen anything from the creators about how specifically they intend to address abuse, harassment, and bullying on their app—because it will happen, opt-in or not. What creators of Peeple should learn is that you can’t engineer an asshole-free world. And if you try, the assholes will make sure that it hurts innocent folks much more.

Developers who believe that their apps will be free from abuse are laughably naive. Even apps that in theory have codes of conduct, moderators, and procedures for reporting abusive users struggle mightily with this problem. On Facebook, public pages intended to harass and bully others proliferate. On Twitter, harassers and stalkers use multiple sock puppet accounts to gang up on people they don’t like (especially women and people of color) and drive them off of the platform and sometimes out of public life.

Storify has been used to stalk users (including those who don’t use Storify themselves) by pinging them with notifications that someone they know to be unsafe and threatening is collecting and saving their tweets. On Ask.fm, a site for people to anonymously ask each other questions, teens flood their targets’ inboxes with bullying messages, in some cases leading to suicide. On Reddit, even subreddits dedicated to creating a supportive space get inundated with abusive trolls. YouTube comments… well, the less said about those, the better.

It might thus be tempting to throw one’s hands up and proclaim that there’s nothing wrong with Peeple because the Internet’s already full of abuse and stalking and harassment—so who cares, right?

But the difference between Peeple and all those other apps is that they all have a purpose besides judging and evaluating people. Those apps have facilitated social change and activism, helped people learn new things and stay informed, provided art and entertainment, and created friendships and relationships.

Peeple does, in theory, have a constructive purpose—complimenting people and making sure that you’re surrounding yourself with good ones—but there are already better ways to do that that don’t involve nearly so much potential harm (especially to children or marginalized people like abuse survivors). When creating new technology, it’s important to ask yourself if the benefits actually outweigh the costs. While Peeple probably has some pros, the cons are just too overwhelming.

Read the rest here.


7 thoughts on “Why Peeple Won't Save Us From Jerks”

  1. 1

    I wouldn’t say it’s impossible to set up a way for people to interact online that doesn’t have the potential for abuse. LinkedIn seems to avoid it fairly well. And my daughter used to belong to several sites like Disney’s Club Penguin that were designed to be safe spaces for kids.

    Whenever anybody makes a new app, they should bring in a reformed troll and ask “How would you abuse this?” (just like you bring in reformed hackers to test software), and bring in someone who has been victimized and ask “How could this have been used maliciously against you?”

  2. 2

With LinkedIn, I think one reason is that it’s pretty much for professional and employment-related matters alone, and it’s pretty limited in terms of the ways you can communicate. For instance, I’ve found that I have the option to LIKE someone’s new job or a comment, but I haven’t seen people leave much in the way of negative remarks.

Maybe the employment-related nature of LinkedIn is the reason why, because being a person who leaves nasty, negative remarks might not look good to potential employers? Not that potential employers can’t be nasty or looking for people with nasty traits, but being so open about it might be seen even by them as careless. And possibly people are more likely to harass Jane Doe the person (from Facebook) than they are Jane Doe, Accountant at Firm X? Bullies are probably more likely to want to hurt people who open up in a personal way, kind of like how trolls tend to invade spaces meant to be safe simply because they’re looking to inflict maximum damage.

Though now all this makes me feel kind of down: the pragmatic, professional, employment-focused social network is the one that’s better in terms of lower amounts of online abuse and harassment.

  3. 3

    I’m trying to figure out a way to fix this that doesn’t wreck the app’s core function, and I don’t think it’s possible.

    Require that any “review” be approved by the subject? Not only does that still allow “kill yourself”-type abuse, it makes the “reviews” worthless since people can reject anything negative.

    “Opt-in”? Bad actors are going to opt out, leading to pressure to opt in. Great from a business perspective, not so great from a “preventing abuse” perspective.

    Require “reviewers” to put some of their own reputation on the line when making a “review”? Solves individual abusers, but incentivizes harassment by cliques.

    Require “reviewers” to pay when making a “review”? Disincentivizes creating reviews.

    The only really useful thing I can see that this app can do is giving warnings about people, like “don’t be in a room alone with him” or “don’t lend money to her”, and you can’t do that while not allowing negativity.

    1. 3.1

On the warnings: part of me says that I really wish there were an app for that. But I worry that the potential for abuse would just be too high, and that (usually) when I’ve been warned about someone it’s by people in my social circle.

And how much stock would a person put in a “don’t be in a room alone with/lend money to/go drinking with [insert name here]” from someone they don’t know, posted online? Most people, however bad they may be, will probably seem normal if not polite or even charming in real life—enough that pretty serious warnings from friends people know in real life don’t always deter them.

      And you pretty much sum up all the problems with any attempt at regulating abuse. Even if you could reject all negative reviews you’d still have to read the negative ones. Then there’s the concern that the profit motive could start changing the model. What if, instead of having to pay to make a review, they decide there’s more money to be made in demanding that people pay money to *remove* a review about them?

  4. 4

I have this nagging feeling that this app will float like a block of concrete in Europe, where privacy laws and slander laws are much tighter than in the USA. I mean, in Europe you have the right to have results removed from Google, so it’s pretty easy to imagine that somebody could sue Peeple and make them remove things (and legal costs aren’t as prohibitive as in the USA).

  5. 5

I don’t really care about this app one way or another. I’m not signing up for it, so it doesn’t affect me. What’s someone going to do, create a page FOR me? I’ve had people do that on Facebook and it never did me any harm; I just ignore it. I’m not sure I want to know the sort of person who bases their opinion of me on some random internet reviews anyway, so if anything it works as a good self-operating filter before I even talk to someone.
