Abuse, triage and data sharing

The recent subscription bombs have started me thinking about how online organizations handle abuse, or don't, as the case may be. Deciding what to address is all about severity: more severe incidents are handled first. Triage is critical, because there is never really enough time or resources to investigate every report of abuse.
What makes an event severe? The answer is more complicated than one might think. Some of the things that ISP folks look at while triaging incoming complaints include:

  • Type of incident (phishing, spam, hacking, DDoS, criminal activity, etc.)
  • Real world effects (spear phishing, child exploitation, theft, network instability, etc.)
  • Source of complaints (individual reports, trusted reporters, details provided, FBL messages, blocklist notices, etc.)
  • Legal issues (subpoenas, search warrants, DMCA complaints)
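To make the idea concrete, here is a minimal sketch of how factors like these might be folded into a severity score that orders a complaint queue. The categories, weights, and Complaint structure are all hypothetical, invented for illustration; real abuse desks use far richer signals and a great deal of human judgment.

```python
from dataclasses import dataclass

# Hypothetical weights for the factors listed above (not from any real tool).
INCIDENT_WEIGHT = {"child_exploitation": 100, "phishing": 80, "ddos": 70,
                   "hacking": 60, "spam": 30}
REPORTER_WEIGHT = {"law_enforcement": 50, "trusted_reporter": 30,
                   "blocklist_notice": 25, "fbl": 10, "individual": 5}

@dataclass
class Complaint:
    incident_type: str
    reporter: str
    legal_process: bool = False  # subpoena, search warrant, DMCA notice, etc.

def severity(c: Complaint) -> int:
    """Combine incident type, report source, and legal issues into one score."""
    score = INCIDENT_WEIGHT.get(c.incident_type, 10)
    score += REPORTER_WEIGHT.get(c.reporter, 5)
    if c.legal_process:
        score += 40
    return score

# Work the queue from most to least severe.
queue = [
    Complaint("spam", "fbl"),
    Complaint("phishing", "trusted_reporter"),
    Complaint("ddos", "individual", legal_process=True),
]
for c in sorted(queue, key=severity, reverse=True):
    print(f"{c.incident_type:10s} severity={severity(c)}")
```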

ISP abuse desks deal with a whole lot more than just spam complaints. Some of it is icky work that involves things most of us should be glad we never have to think about.
In the ESP space, though, triage is different. Typically, abuse desks at ESPs monitor for blocking and watch complaint volumes. There is a narrower range of problems that employees need to deal with.
For a while now I’ve been slightly concerned that so much of ESP abuse handling is about the volume of complaints and blocking. There is quite a bit of abuse that runs “under the radar” because the numbers just aren’t there. I mean, I get it. It’s almost the only way to handle the sheer volume of complaints that come into an average ESP abuse desk.
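As a toy illustration of how that kind of volume-based check can miss targeted abuse (the threshold and numbers here are made up, not any ESP's actual logic):

```python
# A campaign is only flagged when its complaint rate crosses a fixed threshold.
COMPLAINT_RATE_THRESHOLD = 0.001  # hypothetical 0.1% trigger

def flag_for_review(messages_sent: int, complaints: int) -> bool:
    return complaints / messages_sent >= COMPLAINT_RATE_THRESHOLD

# A large, mildly spammy send trips the alarm...
print(flag_for_review(messages_sent=1_000_000, complaints=1_500))  # True

# ...while mail fueling a subscription bomb aimed at a handful of victims
# does not: hundreds of unwanted signups are devastating for each victim,
# but the complaint counts barely register at any single ESP.
print(flag_for_review(messages_sent=1_000_000, complaints=12))     # False
```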
But I wonder if we're missing more subtle forms of abuse, ones that have a high personal impact. The recent subscription bomb has somewhat answered that question: the bomb went unnoticed by most ESPs until Spamhaus started blocking the IPs involved.
The number of victims is small, and most of them are not at mailbox providers that offer FBLs. This got attention because Spamhaus addresses were among the targets. But what if it happens again and Spamhaus addresses aren't involved? How many ESPs will notice their involvement?
I don't really have an answer. But the abuse is real and it is causing real harm. ESPs measure harm by volume, often without any weighting for the type of harm. Happily, many of the kinds of abuse that cause the most significant harm happen in the shadows, while ESPs operate out in the open. It's not the same.
Maybe better communication would help? There are multiple private groups where information about things like this is shared. MAAWG is one example, but there are also lots of ad hoc mailing lists and discussion channels. I'm on a few, and I know folks who are on a bunch that I'm not. There's a well-developed back channel for sharing information, and because we're in a security space some of it has to stay back channel.
I’m not sure what the answer is. I’m not sure there is one answer. Continuing to develop back channels and networks to share information is clearly part of the answer. But maybe there’s a place for more open sharing of information. The challenge, as always, is sharing with the right people.
Someone asked me on Twitter last week if there was a way to get information about mailbox providers having bad days. I didn't have a good answer, although for things like that I'm much happier to blog and tweet about them. It's the more complex issues that are harder to share publicly.
So what have I not thought of? What’s your solution?

Related Posts

Who pays for spam?

A couple of weeks ago, I published a blog post about monetizing the complaint stream. The premise was that ESPs could offer lower base rates for sending if the customer agreed to pay per complaint. The idea came to me while talking with a deliverability expert at a major ESP. One of their potential customers wanted the ESP to allow them to mail purchased lists. The customer even offered to indemnify the ESP and assume all legal risk for mailing purchased lists.
While on the surface this may seem like a generous offer, there aren’t many legal liabilities associated with sending email. Follow a few basic rules that most of us learn in Kindergarten (say your name, stop poking when asked, don’t lie) and there’s no chance you’ll be legally liable for your actions.
Legal liability is not really the concern for most ESPs. The bigger issues for ESPs include overall sending reputation and the costs associated with resolving a block. The idea behind monetizing the complaint stream was making the customer bear some of the risk for bad sends. ESP customers do a lot of bad things, up to and including spamming, without facing any financial consequences for the behavior. By sharing in the non-legal consequences of spamming, the customer may feel some of the effects of their bad decisions.
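As a rough back-of-the-envelope sketch of what that kind of pricing could look like (all rates and complaint counts below are invented for illustration, not figures from the original post):

```python
def flat_price(messages: int, cpm: float = 1.00) -> float:
    """Hypothetical flat rate: $1.00 per thousand messages sent."""
    return messages / 1000 * cpm

def complaint_price(messages: int, complaints: int,
                    cpm: float = 0.70, per_complaint: float = 1.00) -> float:
    """Hypothetical discounted rate plus a fee for each spam complaint."""
    return messages / 1000 * cpm + complaints * per_complaint

sends = 1_000_000
print(flat_price(sends))                         # 1000.0
print(complaint_price(sends, complaints=100))    # 800.0  -- clean, opt-in list
print(complaint_price(sends, complaints=5_000))  # 5700.0 -- purchased list
```

Under this hypothetical model, a customer with a clean, opted-in list comes out ahead, while a customer mailing a purchased list pays for the trouble they cause.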
Right now, ESPs really protect customers from consequences. The ESP pays for the compliance team. The ESP handles negotiations with ISPs and filtering companies. The cost of this is partially built into the sending pricing, but if there is a big problem, the ESP ends up shouldering the bulk of the resolution costs. In some cases, the ESP even loses revenue when it disconnects the sender.
ESPs hide the cost of bad decisions from customers and do not give customers any incentive to make good decisions. Maybe if they started making customers shoulder some of the financial liability for spamming, there'd be less spamming.

Read More

January 2016: The Month in Email

Happy 2016! We started off the year with a few different "predictions" posts. As always, I don't expect to be right about everything, but it's a useful exercise for us to look forward and think about where things are headed.
I joined nine other email experts for a Sparkpost webinar on 2016 predictions, which was a lot of fun (see my wrap up post here), and then I wrote a long post about security and authentication, which I think will be THE major topic in email this year, both in policy and in practice (see my post about an exploit involving Trend Micro and another about hijacked Verizon addresses). Expect to hear more about this as 2016 continues.
My other exciting January project was the launch of my “Ask Laura” column, which I hope will prove a great resource for people with questions about email. Please let me know if you have any questions you’d like to see me answer for your company or your clients — I’ll obscure any identifying information and generalize the answers to be most widely applicable for our readers.
In other industry news, it’s worth noting that Germany has ruled it illegal to harvest users’ address books (as Facebook and other services do). Why does that make sense? Because we’re seeing more and more phishing and scams that rely on social engineering.
In best practices, I wrote about triggered and transactional emails, how they differ, and what to consider when implementing them as part of your email program. Steve describes an easy-to-implement best practice that marketers often ignore: craft your mails so the most important information is shown as text.
I re-published an older post about SMTP rules that has a configuration checklist you might find useful as you troubleshoot any issues. And a newer issue you might be seeing is port 25 blocking, which is important if you are hosting your own email servers or using SMTP to send to your ESP.
Finally, I put together some thoughts about reporting abuse. We work closely with high-volume abuse desks who use our Abacus software, and we know that it’s often not worth the time for an individual to report an incident – but I still think it’s worthwhile to have the infrastructure in place, and I wrote about why that is.

Read More

Peeple, Security and why hiding reviews doesn't matter

There's been a lot of discussion about the Peeple app, which lets random individuals provide reviews of other people. The founders of the company seem to believe that no one is ever mean on the Internet and that all reviews are accurate. They've tried to assure us that no negative reviews will be published for unregistered users. They're almost charming in their naivety, and it might be funny if it weren't so serious.
The app is an invitation to online abuse and harassment. And based on the public comments I've seen from the founders, they have no idea what kind of pain their app is going to cause. They just don't seem to grasp the amount of abuse that happens on the Internet. We work with and provide tools to abuse and security desks. The amount of abuse that happens as just background noise online is pretty bad. Even worse are the attacks that end up driving people, usually women, into hiding.
The Peeple solution to negative reviews is twofold.

Read More