Measurements

One of the things I’ve been spending a lot of time thinking about lately is how we measure deliverability. Standard deliverability measurements include opens, bounces, complaints, and clicks. There are also other tools, like probe accounts, panel data, and public blocklists. Taken together, these measurements and metrics give us an overall view of how our mail is doing.
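As a rough illustration of what those standard measurements boil down to, here is a minimal sketch that turns raw campaign counts into the usual rates. The field names and numbers are my own assumptions, not any particular ESP’s reporting format.

```python
from dataclasses import dataclass


@dataclass
class CampaignStats:
    """Hypothetical per-campaign counts; real ESP reports vary in naming."""
    sent: int
    delivered: int
    opens: int
    clicks: int
    bounces: int
    complaints: int


def standard_metrics(c: CampaignStats) -> dict:
    """Compute the usual rate-based deliverability metrics from raw counts."""
    return {
        "bounce_rate": c.bounces / c.sent if c.sent else 0.0,
        "complaint_rate": c.complaints / c.delivered if c.delivered else 0.0,
        "open_rate": c.opens / c.delivered if c.delivered else 0.0,
        "click_rate": c.clicks / c.delivered if c.delivered else 0.0,
    }


if __name__ == "__main__":
    stats = CampaignStats(sent=100_000, delivered=97_500, opens=21_000,
                          clicks=3_200, bounces=2_500, complaints=45)
    for name, value in standard_metrics(stats).items():
        print(f"{name}: {value:.2%}")
```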



More and more, though, I see senders hitting all the standard benchmarks on these measurements, yet still struggling with deliverability. In many ways this isn’t surprising. There is a whole host of tools out there that let senders manipulate the metrics without changing their underlying practices. To complicate matters even more, some tools inflate open and click rates by following every link in an email. Finally, we know that some ISPs don’t send 100% of the “this is spam” messages to their FBL. Other metrics, like probe accounts, are inaccurate in an era of personalised delivery based on activity.

All in all, these metrics were built to tell us things about a mail system that no longer exists. Our next challenge is to figure out what metrics to use in the future. How do we monitor the effectiveness of our address collection processes and our deliverability?

One thing I’ve started having customers look at, especially my ESP clients, is how the consumer ISPs are accepting their mail. Are they seeing temp failures, and if so, which specific mailstreams are those temp failures tied to? It’s a little early to tell whether this is an effective measurement for ESP compliance purposes, but it’s definitely helping identify problematic mailstreams for my brand clients and allowing us to make adjustments to get to the inbox.
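A minimal sketch of that kind of monitoring might look like the following: aggregate 4xx temporary failures per mailstream and receiving domain from a delivery log. The CSV layout and column names here are assumptions for illustration; a real MTA or ESP log will need its own parsing.

```python
import csv
from collections import defaultdict


def tempfail_rates(log_path: str) -> dict:
    """Aggregate temp-failure (4xx) rates per (mailstream, receiving domain).

    Assumes a CSV delivery log with columns: mailstream, rcpt_domain, smtp_code.
    Those column names are illustrative; adapt them to your actual log format.
    """
    attempts = defaultdict(int)
    tempfails = defaultdict(int)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            key = (row["mailstream"], row["rcpt_domain"])
            attempts[key] += 1
            if row["smtp_code"].startswith("4"):
                tempfails[key] += 1
    return {key: tempfails[key] / attempts[key] for key in attempts}


if __name__ == "__main__":
    rates = tempfail_rates("delivery_log.csv")
    # Surface the mailstream / ISP pairs with the highest temp-failure rates first.
    for (stream, domain), rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        print(f"{stream:20s} {domain:20s} {rate:.1%}")
```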

What I do know is that we in the deliverability space need to continue innovating and thinking about how to measure our deliverability. Mail filters are evolving, and we must evolve as well.

Related Posts

How accurate are reports?

One of the big topics of discussion in various deliverability circles is the problems many places are seeing with delivery to Microsoft properties. One of the challenges is that Microsoft seems to be happy with how their filters are working, while senders are seeing vastly different data. That got me thinking about reporting: how we generate reports and how we know those reports are correct.

Read More

What email metrics do you use?

Vertical Response talks about email metrics that are useful on a dashboard.
Metrics are an ongoing challenge for all marketers. The underlying need for metrics is to evaluate how effective a particular marketing program is. Picking metrics involves understanding what the goal is for a particular program. If your goal is brand recognition, then sales and click-through figures probably aren’t good metrics. If your goal is sales, then opens are not as good a metric as average order value or revenue per email.
Measuring email success is important. But how you choose to measure it is a critical decision. Too many marketers just use canned metrics and don’t think about what they really want to know.
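To make that concrete, here is a small, hypothetical sketch of the sales-oriented metrics mentioned above; the function name and figures are illustrative only.

```python
def revenue_metrics(revenue: float, orders: int, emails_delivered: int) -> dict:
    """Sales-oriented metrics: revenue per email delivered and average order value."""
    return {
        "revenue_per_email": revenue / emails_delivered if emails_delivered else 0.0,
        "average_order_value": revenue / orders if orders else 0.0,
    }


# Example: a campaign to 50,000 delivered addresses driving 420 orders and $31,500.
print(revenue_metrics(31_500.0, 420, 50_000))
# -> {'revenue_per_email': 0.63, 'average_order_value': 75.0}
```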

Read More

Improving Gmail Delivery

Lately I’m hearing a lot of people talk about delivery problems at Gmail. I’ve written quite a bit about Gmail (Another way Gmail is different, Gmail filtering in a nutshell, Poor delivery at Gmail but nowhere else, Insight into Gmail filtering) over the last year and a half or so. But those articles all focus on different parts of Gmail delivery, and it’s probably time for a summary-type post.

Read More