Delivery Metrics

Last week ReturnPath published a study showing that 20% of permission-based email fails to be delivered to the inbox. For the study, ReturnPath looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder, and the number of emails that were not delivered at all.
At US ISPs, 21% of the permission-based email sent to the ReturnPath probe network did not make it to the inbox. 3% of the email sent went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the hardest ISPs to get mail into; each failed to deliver more than 20% of the mail sent to them. At Canadian ISPs, even less of the mail made it to the inbox, primarily because primus.ca is such a large portion of the Canadian market and uses Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
ReturnPath’s take-home message from the study is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from the bounce rate, revenue, response rate, or open rate alone.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard blocked at the MTA, with the ISP rejecting it outright. Mail can be soft blocked at the MTA, with the ISP slowing down sending; sometimes this is enough to cause the sending MTA to stop attempting delivery, so the mail never shows up. Both of these types of blocks are usually visible in the bounce rate.
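As a rough illustration of how these two kinds of blocks surface during the SMTP conversation, here is a minimal sketch in Python. The reply codes are typical conventions (permanent 5xx failures versus temporary 4xx deferrals), not the responses of any particular ISP:

```python
# Sketch: classifying SMTP responses into hard and soft blocks.
# The reply codes shown are typical examples; real ISP responses vary.

def classify_block(smtp_code: int) -> str:
    """Return a rough block category for an SMTP reply code."""
    if 500 <= smtp_code < 600:
        return "hard block"   # permanent failure, mail rejected outright
    if 400 <= smtp_code < 500:
        return "soft block"   # temporary failure, sender should retry
    return "accepted"

print(classify_block(550))  # hard block
print(classify_block(421))  # soft block (ISP slowing down sending)
print(classify_block(250))  # accepted
```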
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender end says the ISP accepted it for delivery but the ISP just drops it on the floor. This is the type of block that a mailbox monitoring program is best able to identify.
Despite all this discussion of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI indicating that more than a third of marketers do no testing at all on their mailings.
Both of these studies were done in an attempt to sell products. However, the numbers they discuss should make smart senders think about what they are measuring in their email campaigns, how they are measuring those factors, and what the measurements mean.

Related Posts

Measuring open rate

In this part of my series on Campaign Stats and Measurements I will be examining open rates: how they are used, where they fail, and how they can be used effectively.
There has been a lot written about open rates recently, but two posts stand out to me: the EEC’s post on renaming open rate to render rate and Mark Brownlow’s excellent post on what open rate does and does not measure. I’ve also weighed in on the subject.
Overall, I find open rate a very frustrating metric. Some senders, particularly those relatively new to email marketing, are so sure they know what open rate is and what it means that they don’t take any time to actually understand the number. While the name “open rate” seems self-explanatory, it isn’t: open rate does not measure how many recipients open an email. There are, however, times when open rate is a useful metric for measuring a marketing program over time.
What is an open?
If asked, most people will tell you that open rate counts the number of emails opened by recipients. The problem is that this isn’t actually true. An open is counted when a tagged image in an email is rendered by the recipient’s email client. Not all mail clients render images by default, but the emails are still available for the recipient to read. If a user clicks on a link in an email whose images were never rendered, some ESPs count that as an open as well as a click; at others it is recorded as just a click, with no open.
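To make the mechanics concrete, here is a minimal sketch of an open-tracking endpoint. The port, path, and query parameter are all hypothetical, but the core idea is accurate: an open is recorded only when the tagged image is actually fetched.

```python
# Minimal sketch of an open-tracking endpoint: an "open" is recorded
# only when the tagged image is fetched by the mail client.
# All names (port, path, "mid" parameter) are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A 1x1 transparent GIF, the classic tracking pixel
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

opens = set()  # message IDs whose pixel has been fetched at least once

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        msg_id = query.get("mid", ["unknown"])[0]
        opens.add(msg_id)  # this fetch is what gets counted as an "open"
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8080), PixelHandler).serve_forever()
```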
What is the open rate?
Open rate is generally the number of opens divided by some measure of the number of emails sent, expressed as a percentage. Many senders use the number of emails sent minus the number that bounced; others use the raw number of emails sent without factoring in bounces.
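A small sketch, with made-up numbers, of how much the choice of denominator matters:

```python
# Sketch: the same mailing produces two different "open rates"
# depending on which denominator the sender chooses.

def open_rate(opens: int, sent: int, bounced: int = 0) -> float:
    """Opens as a percentage of (sent minus bounced) mail."""
    return 100.0 * opens / (sent - bounced)

sent, bounced, opens = 100_000, 8_000, 18_400

print(f"{open_rate(opens, sent):.1f}%")           # 18.4% of all mail sent
print(f"{open_rate(opens, sent, bounced):.1f}%")  # 20.0% of delivered mail
```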
Open rate is a secondary metric. While it does not measure the success, or failure, of a campaign directly, it can be used as an indicator for campaigns. Many people use open rate as a metric because it’s easy to measure. Direct metrics, such as clicks or average purchase or total purchase, may take days or even weeks to collect and analyze. Open rates can be calculated quickly and easily.
What the open rate isn’t
Open rate is not a measure of how many people opened an email. It is not a measure of how many people read an email. It only records that an image in a particular email was loaded and, sometimes, that a link was clicked. Open rates can be wildly different depending on how the sender measures opens and how the sender measures sends.
What senders use open rates for
To compare their open rates with industry averages
As I discussed above, this use of open rates is problematic at best. You cannot compare numbers, even when they have the same name, if they were arrived at using different calculations. One sender’s open rate is not another’s, and unless you know the underlying calculation you cannot compare two open rates. This is a poor use of the metric.
As a metric for advertising rates
Since a sender can manipulate the open rate by choosing among calculation methods, this metric favors the seller. It is not so great for the purchaser, who is at the mercy of the sender’s numbers. There are contractual ways a purchaser can protect herself from an unscrupulous marketer, but only if she understands how open rate can be manipulated and takes steps to define which open rate is in use.
To judge the success of campaigns over time
A single open data point doesn’t mean very much. Using consistently measured open rates, however, a sender can track trends, and trends over time are one area where open rates can help senders judge the success, or failure, of a marketing program.
As one metric in A/B testing
Comparing open rates in A/B testing gives some indication of which campaigns recipients may be more interested in. As with trends over time, the lone measurement isn’t useful, but as a comparative metric, it may provide senders with insight into a particular mailing.
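A sketch of such a comparison, with made-up numbers and a standard two-proportion z-test bolted on to avoid reading too much into small differences (the significance test is my addition, not something from this post):

```python
# Sketch: comparing open rates between two variants of a mailing.
# The z-test is a standard two-proportion test; all numbers are made up.
from math import sqrt

def compare_variants(opens_a, sent_a, opens_b, sent_b):
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    return rate_a, rate_b, (rate_a - rate_b) / se

rate_a, rate_b, z = compare_variants(2_100, 10_000, 1_850, 10_000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}")
# |z| > 1.96 is the usual rough cutoff for a real difference at 95%
```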
To judge the engagement of recipients
Over the long term, recipients who do not interact with a mailing become dead weight on the list. Too many non-responders can hurt a sender’s reputation at an ISP. List hygiene, in the form of removing people who never open or click on an email, is an important part of reputation management.
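A minimal sketch of that kind of hygiene pass, assuming the sender keeps a last-activity date per subscriber; the record layout and the 180-day window are hypothetical:

```python
# Sketch: dropping subscribers with no recorded open or click in the
# last N days. The record layout and the 180-day window are hypothetical.
from datetime import date, timedelta

subscribers = [
    {"email": "active@example.com",
     "last_activity": date.today() - timedelta(days=30)},
    {"email": "dormant@example.com",
     "last_activity": date.today() - timedelta(days=400)},
    {"email": "never@example.com", "last_activity": None},
]

cutoff = date.today() - timedelta(days=180)

keep = [s for s in subscribers
        if s["last_activity"] and s["last_activity"] >= cutoff]
print([s["email"] for s in keep])  # only active@example.com survives
```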
As metrics for email campaigns go, open rate is limited in what it tells you about a campaign. However, as a quick way to measure trends or do head-to-head comparisons, it is a useful metric.

Modifying RP managed FBLs

I was recently pointed to the FBL support pages for the feedback loops hosted by ReturnPath. Clicking around, I found they have the framework, and the beginnings, of a good source of information for their services. You can also open support tickets for questions and services not covered in their knowledge base.

Reputation as measured by the ISPs

Part 3 in an ongoing series on campaign stats and measurements. In this installment, I will look a little more closely at what other people are measuring about your email and how that affects your reputation at the ISPs.
Part 1: Campaign Stats and Measurements
Part 2: Measuring Open Rate
Reputation at the ISPs is an overall measure of how responsive recipients are to your email. ISPs also look at how much valid email you are sending. Anything the ISP can measure and use to distinguish good mail from bad is used in calculating reputation.
Some of the major metrics ISPs use include the following.
Invalid Address Rates
The ISPs count how much mail from any particular IP address is hitting nonexistent addresses. If you are mailing a large number of addresses that do not exist (550 user unknown), that suggests your address collection techniques are not very good. Responsible mailers do have the occasional bad address, including typos and expired or abandoned addresses, but the percentage in comparison to real addresses is low. How low is low? Public numbers suggest problems start at 10% user unknowns, but conversations with ISP employees suggest they treat even lower levels as a hint of a problem.
To calculate the bounce rate, ISPs take the total number of addresses that were invalid and divide by the total number of addresses the sender attempted to mail. Rates above 10% may cause significant delivery issues on their own; rates lower than 10% may still contribute to poor delivery through poor reputation scores.
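In code form, that calculation looks roughly like this (the sample numbers are made up):

```python
# Sketch of the invalid-address (bounce) rate calculation described above.
def invalid_address_rate(user_unknowns: int, attempted: int) -> float:
    """Addresses rejected as nonexistent, as a percentage of all attempts."""
    return 100.0 * user_unknowns / attempted

rate = invalid_address_rate(user_unknowns=6_500, attempted=50_000)
print(f"{rate:.1f}%")  # 13.0% -- above the 10% line where delivery suffers
```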
Spamtraps
ISPs pay a lot of attention to how much mail is hitting their “trap” or “bait” accounts. These trap accounts come from a number of sources: old abandoned email addresses, addresses that never existed, and even role accounts. A hit to a trap account tells the ISP there are addresses on your list that did not opt in to receive mail. And if some addresses they know about did not opt in, it is likely that other addresses did not opt in either.
Spamtraps tend to be treated as an absolute number, not as a percentage of emails. Even a single spamtrap on a list can significantly harm delivery. According to the ReturnPath Benchmark report, lists with a single spamtrap had nearly 20% worse delivery than lists without spamtraps.
“This is spam” clicks (FBL complaints)
Complaints from users are heavily used by ISPs. They tell the ISP directly how many people object to your email. In this case, permission is removed from the equation: even if a sender has permission to send email, the recipient can say “no, I don’t want this, it is spam.” The ISPs put more weight on what their users tell them than on what the senders tell them.
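The post doesn’t give a formula, but a common way to express this signal is a complaint rate: FBL complaints divided by mail accepted at that ISP. A sketch with made-up numbers; the 0.1% reference point is an oft-cited industry rule of thumb, not a figure from this post:

```python
# Sketch: complaint rate as FBL complaints over accepted mail.
# The calculation and the 0.1% reference point are common industry
# rules of thumb, not figures from this post.
def complaint_rate(complaints: int, accepted: int) -> float:
    return 100.0 * complaints / accepted

rate = complaint_rate(complaints=120, accepted=80_000)
print(f"{rate:.2f}%")  # 0.15% -- above the oft-cited 0.1% comfort level
```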
