Testing and data driven decisions

A lot of my education in the sciences focused on how to get a statistically valid sample. There’s a lot of math involved in picking the right sample size, and an equal amount in choosing the right statistical tests to analyse the data. One of the lessons of grad school was: the university has statistics experts; use them when designing studies.
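To give a flavour of that sample-size math: here’s a minimal sketch of the standard normal-approximation formula for comparing two proportions (say, click rates on two email variants). The baseline rate, lift, significance level, and power below are made-up illustration values, not numbers from this post.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-proportion z-test.

    Uses the textbook normal-approximation formula:
    n = (z_{alpha/2} + z_{power})^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# Illustration: detecting a lift from a 5% click rate to a 6% click rate
# needs roughly 8,000+ recipients per variant.
n = sample_size_per_group(0.05, 0.06)
```

Note how quickly the required size grows as the difference you want to detect shrinks, which is exactly why small ad-hoc tests so often produce noise rather than answers.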


Even in science, not everything we test has to be statistically rigorous. Sometimes we just want to get an idea of whether there is something here. Is there a difference between doing X and doing Y? Let’s run a couple of pilot tests and see what happens. Is this a line of inquiry worth pursuing?
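The “is there a difference between X and Y” question has a standard answer in the two-proportion z-test. As a hedged sketch (the counts below are invented illustration values, and a real analysis would also worry about test design, not just the arithmetic):

```python
import math
from statistics import NormalDist

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test.

    Answers: if variants A and B really had the same underlying click rate,
    how likely is a gap at least this large by chance?
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)                 # shared rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustration: 250/5000 clicks for variant A vs 300/5000 for variant B.
p = two_proportion_p_value(250, 5000, 300, 5000)
```

With these invented numbers the p-value lands just under 0.05, which is exactly the kind of borderline result that gets over-interpreted: it suggests a line of inquiry worth pursuing, not a settled fact.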

Much of my statistical knowledge comes from practice, not theory. Most of my advanced classes did include some stats, but I never actually took a statistics class. That leaves me in a strange position when listening to people talk about the testing they do. I know enough statistics to question whether their results are valid and meaningful, but not enough theory to dig down into the numbers and explain why.

In marketing, we do a lot of testing. We use the results of this testing to drive decisions. We call this data driven marketing. I know a lot of marketing departments and agencies do have statisticians and data scientists on hand.

I am sure, though, that some tests are poorly designed and incorrectly analysed. That bad data leads to poor decisions, which in turn lead to inconsistent or unexpected results. The biggest problem is that people fail to go back and question whether the data used to make a decision means what they think it does.

Email, and spam filtering in particular, has a lot of non-repeatable elements. Gmail filters, for instance, adapt constantly. Without carefully constructed, controlled and repeated tests, we’re never going to be able to tease out the specifics. The even bigger challenge is that the process of testing will, in and of itself, change the results. Run the same series of tests over and over again and the filters may adapt, acting differently for test 11 than they did for test 2.

Another piece that leads to poor decision making is thinking our preferences are representative of our audience. Even unconsciously, many of us design marketing programs that fit the way we like to be marketed to. In order to make good decisions, we need to question our own biases and think about what our audience wants.

Finally, there is a lot of value in looking at how people behave. One thing I’ve heard a lot from marketers over the years is that what people say they want is different from how they actually act.

Overall, to make good marketing decisions we can’t just collect random bits of data and use them to justify what we wanted to do anyway. The data always reflects the question we asked, but not always the question we wanted the answer to. Blindly using data, without thinking about our own biases, leads to poor outcomes.
