Online communities and abuse

A few weekends ago we met a friend for coffee in Palo Alto. As the discussion wandered, we ended up talking about some of the projects we’re involved in. Our friend mentioned she was working with a group building a community platform. We started talking about how hard it is these days to run online groups and communities, and one of the things I brought up was what needs to be built into platforms like this to prevent abuse and damage.

It’s a sad fact that trolls exist and have been part of online life since before Usenet. My perception is that this is getting worse. It’s not that there wasn’t harassment in the past; there was. Twenty years ago, back in ’96 or ’97, I managed to annoy some random woman on a newsgroup. That resulted in months of harassing phone calls to me at home and work, to my boss at home and work, and to the head of the rescue group I volunteered with. The police were involved, but there wasn’t much they could do. There’s still not much the police can do about online threats.

Now it seems worse. People are getting physically threatened. Women and activists are driven from their homes because someone online decided to attack, doxx, or frighten them. We have online platforms that allow hate speech and threats and don’t provide sufficient tools for users to protect themselves. For all the good that comes from the Internet, there’s an awful lot of bad.

A big part of the issue is anonymity. Real anonymity online is hard, as evidenced by how quickly CNN tracked down the real-life identity of a Reddit user. They did that in less than 24 hours, without the benefit of any private information. But partial anonymity is pretty easy. It’s trivial for anyone to register any number of Twitter or Reddit accounts. I recently heard the term “weaponized anonymity,” and it accurately describes the situation. (I don’t agree with all of the opinions in that article, but I think the definition is useful.)

Before my harasser, I was pretty open online about where I worked and volunteered. I think I even had my physical location (at least city and state) on my webpage. Afterwards, I stripped as much info as I could from the space I had control over. I thought about creating a new online identity, but decided that it was both a lot of work and wouldn’t be that effective. It’s nearly impossible to hide online now.

These are issues we have to address. Unfortunately, too many community platforms (Twitter, I’m looking at you) don’t have controls in place to allow users to block harassment. At the volume of users some online communities have, there is simply no way to put a human in the loop to deal with every complaint. There’s also an ‘x said, y said’ problem, where abusers claim they’re the victim when called on their behavior. The Mary Sue has an article on a recent example. In some cases, harassment goes back for years and the story is too complicated for an abuse desk worker to absorb in the short time they have to deal with an issue.
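Even the most basic user-side control, a per-user block list, goes a long way. Here’s a minimal sketch of the idea; it’s not any particular platform’s implementation, and the names are hypothetical. The point is simply that the recipient’s block list gets checked before anything is delivered or a notification goes out:

```python
# Minimal sketch of a per-user block list. Illustrative only; names and
# structure are hypothetical, not any real platform's API.
from dataclasses import dataclass, field


@dataclass
class BlockList:
    # Maps a user id to the set of user ids that user has blocked.
    blocks: dict[str, set[str]] = field(default_factory=dict)

    def block(self, user_id: str, target_id: str) -> None:
        """Record that user_id no longer wants to hear from target_id."""
        self.blocks.setdefault(user_id, set()).add(target_id)

    def is_blocked(self, recipient_id: str, sender_id: str) -> bool:
        return sender_id in self.blocks.get(recipient_id, set())


def deliver(blocklist: BlockList, sender_id: str, recipient_id: str, message: str) -> bool:
    """Deliver a message only if the recipient has not blocked the sender."""
    if blocklist.is_blocked(recipient_id, sender_id):
        return False  # dropped silently; the sender gets no confirmation
    # ...hand off to the normal delivery / notification path...
    return True
```

Dropping the message silently, rather than confirming to the sender that they’ve been blocked, is one common design choice; telling an abuser they’ve been blocked often just prompts them to register yet another account.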

I certainly don’t have the answers. But I know that when we’re building online software we have to start prioritizing user safety and privacy. Too many online spaces don’t have walls or fences or locks. That’s a good thing, because it lets people find communities. But it’s a bad thing, because there are folks out there who disrupt communities as a hobby. Anyone building community software needs to think about how they and their software will handle it if one of their users is targeted.

These are discussions that need to happen. Those of us with experience in the online abuse space need to be involved and contribute where we can.

Related Posts

Do you run spam filters?

Jan Schaumann is putting together a talk on ethics as it relates to folks managing internet operations. He has a survey and is looking for folks who wrangle the machines that run the internet. I’m copying his post, with permission, due to a slightly NSFW image on his announcement.

Read More

Do system administrators have too much power?

Yesterday, Laura brought a thread from last week to my attention, and the old-school ISP admin and mail geek in me felt the need to jump up and say something in response to Paul’s comment. My text here is all my own, and is based upon my own personal experience as well as that of my friends. That said, I’m not speaking on their behalf, either. 🙂
I found Paul’s use of the word ‘SysAdmin’ to be a mighty wide (and, in my experience, probably incorrect) brush to be painting with, particularly when referring to operations at ISPs with any significant number of mailboxes. My fundamental opposition to the use of the term comes down to this: it’s no longer 1998.
The sort of rogue (or perhaps ‘maverick’) behavior to which you refer absolutely used to be a thing, back when a clean 56k dial-up connection was the stuff of dreams and any ISP that had gone to the trouble of figuring out how to get past the 64k user limit in the UNIX password file was considered both large and technically competent. Outside of a few edge cases, I don’t know many system administrators these days who are able to make such unilateral deliverability decisions (whether by policy or by access controls), much less want to.
While specialization may be for insects, it’s also inevitable whenever a system grows past a certain point. When I started in the field, there were entire ISPs that were one-man shows (at least on the technical side). This simply doesn’t scale. Eventually, you start breaking things up into departments, then into services, then teams assigned to services, then parts of services assigned to teams, and back up the other side of the mountain, until you end up with a whole department whose job it is to run one component of one service.
For instance, let’s take inbound (just inbound) email. It’s not uncommon for a large ISP to have several technical teams responsible for the processing of mail being sent to their users:

Read More

Peeple, Security and why hiding reviews doesn't matter

There’s been a lot of discussion about the Peeple app, which lets random individuals post reviews of other people. The founders of the company seem to believe that no one is ever mean on the Internet and that all reviews are accurate. They’ve tried to assure us that no negative reviews will be published for unregistered users. They’re almost charming in their naivety, and it might be funny if this weren’t so serious.
The app is an invitation to online abuse and harassment. And based on the public comments I’ve seen from the founders, they have no idea what kind of pain their app is going to cause. They just don’t seem to grasp the amount of abuse that happens on the Internet. We work with and provide tools to abuse and security desks. The amount of abuse that happens as just background noise online is pretty bad. Even worse are the attacks that end up driving people, usually women, into hiding.
The Peeple solution to negative reviews is twofold.

Read More