Technology does not trump policy when it comes to delivery

Recently Ken Magill wrote an article looking at how an ESP was attempting to sell him services based on the ESP’s ‘high deliverability rates.’ I commented that Ken was right, and I still think he is.
Ken has a follow-up article today. In the first part he thanks Matt Blumberg from Return Path for posting a thoughtful blog post on the piece. Matt points out that the vast majority of things affecting delivery are under the control of the list owner, not under the control of the ESP. As they are both right, I clearly agree with them. I’ve also posted about reputation and delivery regularly.

While some of us agree wholeheartedly with Ken, he did receive comments from a few delivery people who thought ESPs should talk about how their technology can improve delivery for senders. Having had experience with customers of most (if not all) of the major ESPs, I would argue that most ESPs have roughly equivalent technology. Some may have slightly different bells and whistles, but those bells and whistles are not going to improve delivery on their own.
One commenter says, “ESP technology completely varies, and as ISPs increase ‘throttling’, the ESPs that can optimize throughput will have dramatically better deliverability than others.” What’s wrong with this statement? Nothing on the surface; it makes sense if you don’t know much about delivery and ISP rate limiting. However, in the last 12–18 months ISPs have moved from one rate limit for all senders to dynamic limits based on the reputation and type of mail coming from a particular source. Throttling at the major ISPs is mostly controlled by reputation: they dynamically assign rate limits based on a sender’s short-term and medium-term reputation. If your ESP has to implement technology to cope with those limits on your behalf, then your delivery through that ESP, by definition, has a problem.
Moving to an ESP that can dynamically “adjust” to ISP-imposed limits may improve delivery over the short term, but it will not do anything to fix the underlying reputation issues that prompted the ISPs to throttle the mail in the first place.
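To make that concrete, here is a minimal sketch, in Python, of the kind of deferral-driven backoff an ESP might layer on when an ISP starts throttling. The `send_one` callable and the delay values are hypothetical, not any particular ESP’s implementation; the point is that code like this copes with the symptom (4xx deferrals) while leaving the reputation that triggered the throttling untouched.

```python
import time

# SMTP reply codes that typically mean "slow down" rather than "go away":
# 421 (too many connections / service unavailable) and the 45x temporary
# failures many ISPs return when a sender hits its dynamic rate limit.
DEFERRAL_CODES = {421, 450, 451, 452}

def send_with_backoff(send_one, messages, base_delay=1.0, max_delay=300.0):
    """Attempt each message, slowing down whenever the ISP defers.

    `send_one(msg)` is a hypothetical callable returning the SMTP reply
    code for one delivery attempt. This only copes with throttling; it
    cannot repair the sender reputation that caused it.
    """
    delay = base_delay
    queue = list(messages)
    while queue:
        code = send_one(queue[0])
        if code in DEFERRAL_CODES:
            # Deferral: keep the message queued and back off exponentially.
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
        else:
            # Accepted (2xx) or hard-failed (5xx): move on and ease back up.
            queue.pop(0)
            delay = max(base_delay, delay / 2)
```

Every ESP of any size has something along these lines already, which is part of why the technology itself is not much of a differentiator.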
Another commenter says, “Some ESPs have better support structure in place than others, whether it’s technology, staff, or approach, to make marketers more successful.” I agree with some of this. Some ESPs do have better technology and staff, and they will hold marketers’ hands and help them improve delivery. In most cases, this revolves around actually making the marketers into better senders: teaching them about best practices and even forcing the sender to make changes or find another ESP. Rarely does the actual SMTP technology factor into this improvement.
There are a lot of technical things that ESPs could do to improve delivery, but that many (most?) of them don’t do. Two of the more obvious things ESPs could do technically to facilitate delivery improvements are:

  1. Send valid, W3C-compliant HTML mail. This is pretty easy to do with off-the-shelf or open source technology, but most ESPs don’t do any cleanup of the email format. Invalid HTML will hurt delivery. HTML from an MS Word document pasted into an email creates ugly, noncompliant, messy markup that looks a whole lot like spam to ISP filters. (A minimal example of a pre-send check follows this list.)
  2. Use data mining techniques to identify potential problem customers before mail is sent. I know of one ESP doing this very successfully, but most ESPs deal with problems reactively instead of proactively. It is better for everyone concerned if bad mail is caught before it goes out, not after. (A second sketch below illustrates the idea.)
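To illustrate the first point, here is a minimal, standard-library sketch of a pre-send check that flags mismatched or unclosed tags, the kind of breakage Word-pasted HTML tends to produce. It is only meant to show how little machinery a basic sanity check needs; a production pipeline would reach for an off-the-shelf cleaner such as HTML Tidy or a full validator instead.

```python
from html.parser import HTMLParser

# Elements that never take a closing tag in HTML.
VOID_ELEMENTS = {"br", "hr", "img", "meta", "link", "input", "area", "base", "col"}

class TagBalanceChecker(HTMLParser):
    """Collects mismatched or unclosed tags instead of silently ignoring them."""

    def __init__(self):
        super().__init__()
        self.stack = []
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_ELEMENTS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID_ELEMENTS:
            return
        if not self.stack or self.stack[-1] != tag:
            self.problems.append(f"unexpected </{tag}>")
        else:
            self.stack.pop()

def check_campaign_html(html: str) -> list:
    """Return a list of problems found in the message body, empty if none."""
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    return checker.problems + [f"unclosed <{tag}>" for tag in checker.stack]

# The kind of fragment that pasting from Word tends to produce.
print(check_campaign_html("<div><b>BUY NOW<p>limited time</b></div>"))
```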

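The second point is as much about process as algorithms, but the core idea can be sketched simply: score each customer’s history against a few thresholds before a campaign is released, rather than reacting after the complaints arrive. The field names and thresholds below are entirely hypothetical; a real version would be tuned against an ESP’s own compliance data.

```python
# Hypothetical per-customer history an ESP already collects.
HISTORY = {
    "customer_a": {"complaint_rate": 0.0004, "bounce_rate": 0.01, "list_growth_per_day": 200},
    "customer_b": {"complaint_rate": 0.0060, "bounce_rate": 0.14, "list_growth_per_day": 50000},
}

# Illustrative thresholds; real values would come from the ESP's own data.
RULES = [
    ("complaint_rate", 0.001, "complaint rate above 0.1%"),
    ("bounce_rate", 0.05, "bounce rate above 5%"),
    ("list_growth_per_day", 10000, "implausibly fast list growth"),
]

def flag_before_send(customer: str) -> list:
    """Return reasons to hold a campaign for review, or an empty list."""
    stats = HISTORY[customer]
    return [reason for field, limit, reason in RULES if stats[field] > limit]

for name in HISTORY:
    reasons = flag_before_send(name)
    print(name, "->", reasons or "ok to send")
```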
Overall, I am a big supporter of ESPs. I think their technology and their policy expertise make them a good vendor for the average company wanting to use email marketing. I think, though, that delivery and deliverability are under the control of the sender, not the ESP. An ESP that attempts to sell the idea that its technology is more important than practices and policies is misleading both itself and its customers.

Related Posts

Delivery Metrics

Last week Return Path published a study showing that 20% of permission-based email fails to be delivered to the inbox. For this study, Return Path looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder and the number of emails that were not delivered at all.
At US ISPs, 21% of the permission-based email sent to the Return Path probe network did not make it to the inbox. 3% of the emails went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the hardest ISPs to get mail into; they each failed to deliver more than 20% of the mail sent to them. At Canadian ISPs even less of the mail made it to the inbox, primarily because primus.ca makes up such a large portion of the Canadian market and they use Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
Return Path’s take-home message from the study is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from just the bounce rate, the revenue rate, the response rate or the open rate.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard blocked at the MTA and rejected by the ISP outright. Mail can be soft blocked at the MTA, where the ISP slows down sending; sometimes this is enough to make the sending MTA give up on delivering the mail, so the mail never shows up. Both of these types of blocks are usually visible when looking at the bounce rate.
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender’s end says the ISP accepted it for delivery, but the ISP just drops it on the floor. This is the type of block that a mailbox monitoring program is best able to identify.
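A rough way to see why: permanent rejections and deferrals both surface in a sender’s own logs, while accepted-then-dropped mail produces no error at all, so only a probe or seed-list account ever sees it. The sketch below maps standard SMTP reply-code classes onto the block types described above; the labels are mine, for illustration only.

```python
def classify_delivery_attempt(smtp_code):
    """Map an SMTP reply code onto the block types described above."""
    if smtp_code is None:
        return "no response (connection-level throttling or timeout)"
    if 200 <= smtp_code < 300:
        # A 2xx acceptance is deliberately ambiguous: the ISP may still file
        # the message in the bulk folder or drop it entirely, and nothing in
        # the sender's logs will distinguish those outcomes.
        return "accepted: inbox, bulk folder, or dropped on the floor"
    if 400 <= smtp_code < 500:
        return "soft block: deferral; retries may eventually give up"
    if 500 <= smtp_code < 600:
        return "hard block: rejection, visible in the bounce rate"
    return "unexpected reply"

for code in (250, 421, 550, None):
    print(code, "->", classify_delivery_attempt(code))
```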
Despite all the discussions of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI that indicates more than a third of marketers are not doing any testing on their mailings.
Both of these studies were done in an attempt to sell products. However, the numbers should make smart senders think about what they are measuring in their email campaigns, how they are measuring those factors and what the measurements mean.

Read More

Hidden cost of email blasts

Seth Godin has a post up today talking about how friction, that is, the cost of sending marketing, is good for marketing. With more friction, marketers make choices about what to send instead of sending to everyone.
The post touches on a point I’ve certainly tried to explain to clients and senders in general.

Read More

Compliance vs. Deliverability

Most people I know handling delivery issues for senders have some version of delivery or deliverability in their job title. But as I talk to them about what they do on a daily basis, their role is as much policy enforcement and compliance as it is delivery. Sure, what they’re telling customers and clients is how to improve delivery, but that is often in the context of making customers comply with relevant terms and conditions.
Some delivery folks also work the abuse desk, handling complaints and FBLs and actually putting blocks on customer sends.
I think the compliance part of the delivery job description is often overlooked and severely downplayed. No one likes to be the bad guy. None of us like handling the angry customer on the phone who has had their vital email marketing program shut down by their vendor. None of us like the internal political battles to convince management to adopt stricter customer policies. All of these things, however, are vital to delivery.
Despite the lack of emphasis on compliance and enforcement, they are a vital and critical part of the deliverability equation.

Read More