Are seed lists still relevant?

Those of you who have seen some of my talks have seen this model of email delivery before. The concept is that there are a host of factors that contribute to the reputation of a particular email, but that at many ISPs the email reputation is only one factor in email delivery. Recipient preferences drive whether an email ends up in the bulk folder or the inbox.

The individual recipient preferences can be explicit or implicit. Users who add a sender to their address book, block a sender, or create a specific filter for an email are stating an explicit preference. Additionally, ISPs monitor some user behavior to determine how wanted an email is. A recipient who moves an email from the bulk folder to the inbox is stating a preference. A person who hits “this is spam” is stating a preference. Other actions are also measured to give an email a user-specific reputation.
Seed accounts aren’t like normal accounts. They never send mail; they only download it. They never dig anything out of the junk folder, and they never hit “this is spam.” They behave differently from real user accounts – and ISPs can track this.
This tells us to take inbox monitoring tools with a grain of salt. I believe, though, that they’re still valuable tools in the deliverability arsenal. The best use of these tools is monitoring for changes. If seed lists show less than 100% inbox placement but response rates are good, then the seed boxes probably aren’t accurately reflecting delivery to actual recipients. But if seed lists show 100% inbox and that number then drops, that’s the time to start looking harder at the overall program.
The other time seed lists are useful is when troubleshooting delivery. It’s nice to be able to see whether changes are making a difference in delivery. Again, the results aren’t 100% accurate, but they’re the best we have right now.
 

Related Posts

Delivery and engagement

Tomorrow is the webinar Mythbusters: Deliverability vs. Engagement. This webinar brings together the ISP speakers from EEC15, plus Matt from Comcast, to expand on their comments. There’s been some confusion about the impact of engagement on delivery and whether or not senders should care about recipient engagement.
My opinion on the matter is well known: recipient engagement drives delivery to the inbox at some providers. I expect tomorrow we’ll hear a couple of things from the ISPs.


Updated M3AAWG Best Practices for Senders

M3AAWG has published a new version of the Senders Best Common Practices document, and it contains a lot of new information since the original publication in 2008. The new document covers how to vet ESP customers and the considerations when selecting a dedicated or shared IP to send mail, and includes best practices for a number of technical processes.
The Senders Best Common Practices document is targeted at deliverability teams and email marketers. For any company that sends marketing email, uses an Email Service Provider, or provides an email-enabled platform, it’s good to periodically review your systems to ensure nothing was missed and to stay up to date on new recommendations.
A few of the recommendations include the use of the List-Unsubscribe header, publishing a clear WHOIS for domains used for sending mail, and how to process non-delivery report messages.
The List-Unsubscribe header provides an additional way for users to opt out of email messages. Gmail and Outlook.com both use the presence of the List-Unsubscribe header to provide a one-click button that lets the user unsubscribe from the mailing list. Often enough, if users cannot find an opt-out link, they mark the message as spam. Allowing a recipient to unsubscribe easily is critical to maintaining a good delivery reputation.
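To show what this looks like in practice, here is a minimal sketch of adding a List-Unsubscribe header (per RFC 2369) to a message using Python’s standard library. The addresses and the unsubscribe URL are hypothetical, purely for illustration.

```python
from email.message import EmailMessage

# Hypothetical sender, recipient, and unsubscribe endpoints.
msg = EmailMessage()
msg["From"] = "news@example.com"
msg["To"] = "recipient@example.net"
msg["Subject"] = "March newsletter"

# RFC 2369 allows multiple mechanisms; offering both a mailto: and an
# https: option lets the mailbox provider pick whichever it supports.
msg["List-Unsubscribe"] = (
    "<mailto:unsubscribe@example.com?subject=unsubscribe>, "
    "<https://example.com/unsubscribe?token=abc123>"
)

msg.set_content("Hello! Here is this month's news.")
print(msg["List-Unsubscribe"])
```

The mailbox provider parses this header and renders its own unsubscribe button, so the link works even when the recipient never opens the message body.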
A WHOIS query determines who is the registered user or assignee of a domain name. During a session at the most recent M3AAWG meeting, it was announced that spammers throw away 19 million domains per year. When a postmaster or abuse desk receives a complaint, they’ll often query to see who owns the domain the email was sent from or who owns the domains used in the hyperlinks. If the WHOIS record is out of date or set to private, it limits the ability of the postmaster or abuse desk to reach the owner of the domain.
Processing non-delivery reports is critical to maintaining a high delivery reputation. Many ESPs have an acceptable use policy that includes a maximum bounce rate: Mailjet recommends a bounce rate of less than 8% and Mandrill recommends less than 5%. If a system is not in place to remove hard bounces from your mailing list, the sender’s reputation will quickly deteriorate.
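A minimal sketch of that bounce-processing loop, in Python. The record format, the `process_bounces` helper, and the 5% threshold constant are illustrative assumptions, not any ESP’s actual API; the point is only that hard bounces get removed and the rate gets measured against the policy ceiling.

```python
# Illustrative ceiling, echoing Mandrill's recommended 5% maximum.
HARD_BOUNCE_THRESHOLD = 0.05

def process_bounces(subscribers, bounce_reports):
    """Remove hard-bounced addresses and report the hard-bounce rate."""
    hard_bounces = {
        r["address"] for r in bounce_reports if r["type"] == "hard"
    }
    cleaned = [s for s in subscribers if s not in hard_bounces]
    rate = len(hard_bounces) / len(subscribers) if subscribers else 0.0
    return cleaned, rate

subscribers = ["a@example.com", "b@example.com",
               "c@example.com", "d@example.com"]
reports = [
    {"address": "b@example.com", "type": "hard"},  # mailbox does not exist
    {"address": "c@example.com", "type": "soft"},  # mailbox full; retry later
]
cleaned, rate = process_bounces(subscribers, reports)
print(cleaned, rate)  # 1 hard bounce out of 4 -> rate 0.25, over threshold
```

Note that soft bounces (temporary failures) are retried rather than removed; only hard bounces come off the list immediately.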
The Senders Best Common Practices document can be downloaded at M3AAWG.org.

Public reputation data

IP based reputation is a measure of the quality of the mail coming from a particular IP address. Because of how reputation data is collected and evaluated it is difficult for third parties to provide a reputation score for a particular IP address. The data has to be collected in real time, or as close to real time as possible. Reputation is also very specific to the source of the data. I have seen cases where a client has a high reputation at one ISP and a low reputation at another.
All this means that there are a limited number of public sources of reputation data. Some ISPs provide ways for senders to check their reputation at that ISP. But if a sender wants to check a broader reputation across multiple ISPs, where can they go?
There are multiple public sources of data that I use to check reputation of client IP addresses.
Blocklists provide negative reputation data for IP addresses and domain names. There is a wide range of blocklists with differing listing criteria and different levels of trust in the industry. Generally, the more widely used a list is, the more accurate and relevant it is. I usually check the Spamhaus lists and URIBL/SURBL when investigating a client; I find these lists are good sources for discovering real issues or problems.
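Mechanically, most IP blocklists are queried over DNS: the IP’s octets are reversed and prepended to the list’s zone, and a DNS A lookup on that name returns a 127.0.0.x answer if the address is listed (or no answer if it isn’t). A small sketch of constructing that query name, using Spamhaus’s public zen zone and a documentation-range example IP:

```python
def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNS name used to check an IPv4 address against a
    blocklist zone: octets reversed, then the zone appended."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

# 192.0.2.99 is a reserved documentation address, used here as a stand-in.
print(dnsbl_query_name("192.0.2.99"))  # 99.2.0.192.zen.spamhaus.org
```

Resolving that name (for example with a stub resolver) completes the lookup; the sketch stops at name construction so it runs without network access.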
For an overall view into the reputation of an IP address, both positive and negative, I check senderbase.org, provided by Cisco IronPort, and senderscore.org, provided by Return Path.
All reputation sources have limitations. The primary limitation is they are only as good as their source data, and their source data is kept confidential. Another major limitation is reputation sources are only as good as the reputation of the maintainer. If the maintainer doesn’t behave with integrity then there is no reason for me to trust their data.
I use a number of criteria to evaluate reputation providers.