Standard Email Metrics

The Email Experience Council (EEC) has been working on standardizing the metrics used in email marketing, and has published a set of definitions for terms many email marketers use. Their Support the Adoption of Email Metrics (S.A.M.E.) guide was published in June.
Under the new EEC definitions, an open is measured when either a tracking pixel is displayed or a user clicks on any link in the email, including the unsubscribe link. Open rate is the number of opens (either unique or total) divided by the number of accepted emails, where accepted emails equals the number of emails sent minus the number rejected by the ISP for any reason.
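Put as arithmetic, the S.A.M.E. open rate works out as below. This is a minimal sketch; the counts are hypothetical, purely for illustration.

```python
def open_rate(opens, sent, rejected):
    """Open rate per the S.A.M.E. definitions: opens divided by
    accepted emails, where accepted = sent - rejected."""
    accepted = sent - rejected
    if accepted <= 0:
        raise ValueError("no accepted emails")
    return opens / accepted

# Hypothetical campaign: 100,000 sent, 5,000 rejected by the ISP,
# 19,000 unique opens recorded (pixel loads or link clicks).
rate = open_rate(opens=19_000, sent=100_000, rejected=5_000)
print(f"{rate:.1%}")  # 19,000 / 95,000 = 20.0%
```

Note that "opens" here counts pixel loads and link clicks, per the definition above, so the denominator change (accepted rather than sent) is what distinguishes this from older open-rate calculations.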
The authors do caution, however, that even their measurements may undercount the number of subscribers who actually open or read an email. Some readers don’t load images or click on links but happily read and digest the content being sent. Others may never click a link but still visit a website or brick-and-mortar store to purchase something based on the email.
Overall, I think the definitions created by the S.A.M.E. group accurately reflect the things they want to measure within the limits of what is actually measurable. Their definitions won’t affect conversations in the short term, but are likely to drive change to standard terminology over the longer term. I do strongly encourage people to grab a copy of the document and see how its definitions compare with their current measurements.

Related Posts

This is why the ISPs throw up their hands at senders

I recently saw a question from an ESP rep asking if anyone had a personal contact at a particular ISP. The problem was that they had a rejection from the ISP saying: 571 5.7.1 too many recipients this session. The ESP was looking for someone at the ISP in order to ask what the problem was.
This is exactly the kind of behaviour that drives ISPs bonkers about senders. The ISP has sent a perfectly understandable rejection: “5.7.1: too many recipients this session.” But instead of spending some time and energy on sender-side troubleshooting, instead of spending some of their own money to work out what’s going on, the sender falls back on asking the ISP to explain what they should do differently.
What, exactly, should you do differently? Stop sending so many recipients in a single session. This is not rocket science. The ISP tells you exactly what you need to do differently, and your first reaction is to attempt to mail postmaster@ the ISP and then, when that bounces, your next step is to look for a personal contact?
No. No. No.
Look, connections and addresses per connection is one of the absolute easiest things to troubleshoot. Fire up a shell, telnet to port 25 on the recipient server, and do a hand SMTP session, counting the number of recipients the server accepts. Sure, in some corporate situations it can be a PITA to do; sometimes you’re going to need to do it from a particular IP, which may be an interface on an appliance that doesn’t have telnet, or whatever. But, y’know what? That Is Your Job. If your company isn’t able to do it, well, please tell me so I can stop recommending you as an ESP. Companies have to be able to test and troubleshoot their own networks.
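And the fix itself is just as simple: cap the number of recipients per SMTP session on the sending side. A minimal sketch of the idea (the cap of 50 is an arbitrary example; the right limit is whatever the receiving ISP actually enforces):

```python
def batch_recipients(recipients, max_per_session=50):
    """Split a recipient list into batches, one batch per SMTP
    session, so no single session exceeds the receiving server's
    recipient limit."""
    return [recipients[i:i + max_per_session]
            for i in range(0, len(recipients), max_per_session)]

# 120 addresses with a 50-recipient cap -> 3 sessions (50, 50, 20).
batches = batch_recipients([f"user{n}@example.com" for n in range(120)])
print([len(b) for b in batches])  # [50, 50, 20]
```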
Senders have been begging ISPs for years “just tell us what you want and we’ll bother you less.” In this case the ISP was extremely clear about what they want: they want fewer recipients per connection. But the ESP delivery person is still looking for a contact so they can talk to the ISP to understand it better.
This is why the ISPs get so annoyed with senders. They’re tired of having to do the sender’s job.


Standardizing email metrics

“Slogging towards e-mail metrics standardization” is a report by Direct Mag on the efforts of the Email Experience Council to standardize definitions related to email marketing.


Delivery Metrics

Last week ReturnPath published a study showing that 20% of permission-based email fails to be delivered to the inbox. For this study, ReturnPath looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder, and the number of emails that were not delivered at all.
At US ISPs, 21% of the permission-based emails sent to the ReturnPath probe network did not make it to the inbox. 3% of the emails went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the worst ISPs to get mail to; each failed to deliver more than 20% of the mail sent to them. At Canadian ISPs, even less of the mail made it to the inbox, primarily because primus.ca is such a large portion of the Canadian market and uses Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
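The breakdown is just shares of total mail sent to the probe network. A quick sketch with hypothetical probe counts, chosen only to land near the US figures above:

```python
def placement(inboxed, bulked, missing):
    """Break delivery out into inbox / bulk / missing shares,
    relative to the total mail sent to the probe network."""
    total = inboxed + bulked + missing
    return {label: count / total
            for label, count in [("inbox", inboxed),
                                 ("bulk", bulked),
                                 ("missing", missing)]}

# Hypothetical: of 1,000 probe messages, 790 inboxed, 30 bulked,
# 180 never delivered anywhere.
for label, share in placement(790, 30, 180).items():
    print(f"{label}: {share:.0%}")
```

The point of the three-way split is that "missing" mail never generates a bounce or a complaint, which is why it only shows up in probe data.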
ReturnPath’s take-home message from the survey is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from the bounce rate, revenue, response rate, or open rate alone.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard blocked at the MTA and rejected by the ISP outright. Mail can be soft blocked at the MTA, with the ISP slowing down sending; sometimes this is enough to cause the sending MTA to stop attempting delivery, so the mail never shows up. Both of these types of blocks are usually visible in the bounce rate.
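The hard/soft distinction maps onto SMTP reply codes in the usual way; this is the common bounce-handling convention, not anything specific to the ReturnPath study:

```python
def classify_reply(code):
    """Classify an SMTP reply code the way most bounce handling
    does: 4xx is a soft (transient) block, 5xx a hard (permanent)
    one."""
    if 200 <= code < 300:
        return "accepted"
    if 400 <= code < 500:
        return "soft"
    if 500 <= code < 600:
        return "hard"
    return "other"

# The "571 ... too many recipients" rejection quoted earlier is a
# hard block: the ISP refused that mail outright.
for code in (250, 451, 571):
    print(code, classify_reply(code))
```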
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender end says the ISP accepted it for delivery but the ISP just drops it on the floor. This is the type of block that a mailbox monitoring program is best able to identify.
Despite all the discussions of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI that indicates more than a third of marketers are not doing any testing on their mailings.
Now, both of these studies were done in an attempt to sell products; even so, the numbers discussed should make smart senders think about what they are measuring in their email campaigns, how they are measuring those factors, and what the measurements mean.
