Delivery Metrics

Last week ReturnPath published a study showing that 20% of permission-based email fails to be delivered to the inbox. For this study, ReturnPath looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder, and the number of emails that were not delivered at all.
At US ISPs, 21% of the permission-based email sent to the ReturnPath probe network did not make it to the inbox. 3% of the email sent went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the worst ISPs to get mail to; each failed to deliver more than 20% of the mail sent to them. At Canadian ISPs, even less of the mail made it to the inbox, primarily because primus.ca makes up such a large portion of the Canadian market and uses Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
ReturnPath’s take-home message from the survey is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from the bounce rate, revenue, response rate, or open rate alone.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard blocked at the MTA and rejected by the ISP outright. Mail can be soft blocked at the MTA, where the ISP slows down sending; sometimes this is enough to cause the sending MTA to stop attempting delivery, so the mail never shows up. Both of these types of blocks are usually visible when looking at the bounce rate.
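To make the distinction concrete, here is a minimal sketch in Python (not any ISP’s or MTA’s actual logic) that sorts hypothetical SMTP responses into accepted mail, soft bounces, and hard bounces. All addresses and response codes below are invented for illustration.

# Minimal sketch, illustrative only: classify SMTP responses from delivery
# attempts into accepted mail, soft bounces (4xx deferrals), and hard
# bounces (5xx rejections).

def classify_response(smtp_code: int) -> str:
    if smtp_code >= 500:
        return "hard"        # permanent rejection: the ISP refused the mail
    if smtp_code >= 400:
        return "soft"        # temporary deferral: the ISP is slowing sending
    return "delivered"       # 2xx: accepted for delivery

attempts = [
    ("a@example.com", 250),  # accepted
    ("b@example.com", 250),  # accepted
    ("c@example.com", 421),  # deferred: too many connections
    ("d@example.com", 550),  # rejected outright
]

counts = {"delivered": 0, "soft": 0, "hard": 0}
for _, code in attempts:
    counts[classify_response(code)] += 1

bounce_rate = (counts["hard"] + counts["soft"]) / len(attempts)
print(counts, f"bounce rate: {bounce_rate:.0%}")

Mail the ISP accepts with a 2xx response and then silently drops never shows up in numbers like these, which is exactly the gap a mailbox monitoring program is meant to cover.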
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender’s end says the ISP accepted the mail for delivery, but the ISP simply drops it on the floor. This is the type of block a mailbox monitoring program is best able to identify.
Despite all the discussion of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI indicating that more than a third of marketers do no testing on their mailings.
Both of these studies were done in an attempt to sell products; however, the numbers they report should make smart senders think about what they are measuring in their email campaigns, how they are measuring those factors, and what the measurements mean.

Related Posts

The Weekend Effect

Sending mail only Monday through Friday can cause reputation and delivery problems at some ISPs, even when senders are doing everything right. This “weekend effect” is a consequence of how ISPs measure reputation over time.
Most ISPs use a simple calculation for complaint rate: they count how many “this is spam” clicks a source IP generates in a 24 hour period, then divide that number by how many emails were delivered to the inbox in the same 24 hour period.
The weekend effect happens when a sender mails on weekdays but not on the weekend, lowering the number of emails delivered to the inbox. Recipients, however, still read mail on the weekend, and they still hit the “this is spam” button. Even though the number of “this is spam” clicks is lower than on a normal weekday, with so little new mail arriving the complaint rate rises above ISP thresholds. Even a very well run mailing program may see spikes in complaint rate on the weekends.
When the ISPs measure complaint rates over time, they average the daily complaint rates. If the rates spike high enough on the weekend (and they can spike to the 1–3% range, even for a well run list), that average can hurt the sender’s reputation.
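For illustration only, with made-up volumes, here is a rough sketch of how weekday and weekend complaint rates can diverge under that calculation, and how averaging the daily rates drags the overall figure up.

# Illustrative, invented numbers: complaint rate per 24 hour window is
# "this is spam" clicks divided by messages delivered to the inbox in the
# same window. Weekday sends are large; on the weekend only a trickle of
# deferred or late mail reaches the inbox, but recipients keep complaining.

daily_stats = {
    # day: (messages delivered to inbox, "this is spam" clicks)
    "Thu": (100_000, 120),
    "Fri": (100_000, 130),
    "Sat": (  1_000,  25),   # little new mail, complaints on older mail
    "Sun": (  1_000,  18),
    "Mon": (100_000, 115),
}

rates = {day: clicks / delivered
         for day, (delivered, clicks) in daily_stats.items()}
for day, rate in rates.items():
    print(f"{day}: {rate:.2%}")   # weekdays roughly 0.12%, weekend 1.8-2.5%

# Averaging the daily rates pulls the whole-period figure up, even though
# the weekday rates on their own look perfectly healthy.
print(f"average of daily rates: {sum(rates.values()) / len(rates):.2%}")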
The good news is that ISPs are aware of the weekend effect and take it into account when manually looking at complaints. The bad news is that not all of the major ISPs take it into account when programmatically calculating reputation.
There isn’t much senders can do to combat the weekend effect, except be aware that it can happen and may be responsible for poor mailing performance on Mondays. If you are seeing delivery problems you think may be a result of the weekend effect, you can contact the ISPs and ask for a manual review of your reputation. Some ISPs can provide manual mitigation for senders with otherwise clean stats.

Read More

Reputation as measured by the ISPs

Part 3 in an ongoing series on campaign stats and measurements. In this installment, I will look a little closer at what other people are measuring about your email and how that affects your reputation at the ISPs.
Part 1: Campaign Stats and Measurements
Part 2: Measuring Open Rate
Reputation at the ISPs is an overall measure of how responsive recipients are to your email. ISPs also look at how much valid email you are sending. Anything the ISP can measure and use to distinguish good mail from bad is used in calculating reputation.
Some of the major metrics ISPs use include the following.
Invalid Address Rates
The ISPs count how much mail from any particular IP address is hitting non-existent addresses. If you are mailing a large number of email addresses that do not exist (550 user unknown), it suggests that your address collection techniques are not very good. Responsible mailers do have the occasional bad address, including typos and expired or abandoned addresses, but the percentage compared to the number of real email addresses is low. How low is low? Public numbers suggest problems start at 10% user unknowns, but conversations with ISP employees show they consider lower levels a hint there may be a problem.
To calculate bounce rate, ISPs take the total number of addresses that were invalid and divide it by the total number of addresses the sender attempted to mail. Rates above 10% may cause significant delivery issues on their own; rates lower than 10% may still contribute to poor delivery through poor reputation scores.
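As a quick sketch with invented counts: the 10% figure below comes from the public numbers mentioned above, while the lower warning level is just a placeholder standing in for the unspecified "lower levels" ISP employees mention.

# Invalid-address (bounce) rate as described above; counts are invented.
attempted = 250_000        # total addresses the sender tried to mail
user_unknown = 8_500       # 550 "user unknown" style rejections

invalid_rate = user_unknown / attempted
print(f"invalid address rate: {invalid_rate:.1%}")   # 3.4% in this example

if invalid_rate >= 0.10:          # public guidance: serious trouble
    print("likely blocking on bounce rate alone")
elif invalid_rate >= 0.01:        # placeholder lower threshold, hypothetical
    print("may be dragging down reputation")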
Spamtraps
ISPs pay a lot of attention to how much mail is hitting their “trap” or “bait” accounts. These trap accounts come from a number of different sources: old abandoned email addresses, addresses that never existed, or even role accounts. Hits to a trap account tell the ISP there are addresses on your list that did not opt in to receive mail. And if some addresses they know about did not opt in, it is likely that other addresses did not opt in either.
Spamtraps tend to be treated as an absolute number, not as a percentage of emails. Even a single spamtrap on a list can significantly harm delivery. According to the ReturnPath Benchmark report, lists with a single spamtrap had nearly 20% worse delivery than lists without spamtraps.
“This is spam” clicks (FBL complaints)
Complaints from users are heavily weighted by ISPs. They tell the ISP directly how many people are objecting to your email. In this case, permission is removed from the equation: even if a sender has permission to send email, the recipient can say “no, I don’t want this, it is spam.” The ISPs put more weight on what their users tell them than on what the senders tell them.

Read More

ReturnPath customers?

Someone posted the following question about ReturnPath in the comments:

Read More