I know. The title of my article this month sounds rather academic. Don't worry; it isn't. Not that I lack that kind of ambition: I have always been attracted to academia, but I most certainly do not claim to have the essential qualities for that life. No. Regardless of how "highbrowish" my title sounds, I will be asking very simple questions.
Let us start with one little question we rarely (if ever) ask in Digital Analytics circles: what do we know about customer behavior? I mean, what have we really learned after 15 years of analyzing what people do on Web sites? Obviously, human behavior is always pretty hard to predict, and it is no mystery that Digital Analytics is not a science. There is no body of knowledge, no treatise full of theorems and principles, no immutable laws from which we could deduce causality in what we observe, so that we could make accurate predictions.
It is pretty much left to each analyst to build his/her knowledge from observations, experiments, validated (or invalidated) hypotheses, and so on. Somehow, this knowledge should offer some predictive capability, I mean simple statements such as "If we do X, we should get Y". I will come back to that. So, if we took 20 people who have been analyzing online projects, what would be their common knowledge of what works and what doesn't in Online Marketing? Frankly, I am not even sure the answer would be interesting beyond one's own situation, with one's own product set and one's own specific market.
What part does randomness play in what we observe in Google Analytics, or Webtrends, or SAS? How specific are the results we get to that period, that campaign, that group of people who happened to be on the site, and how well do they predict, heck, even describe, what we will see next? True, it is in the very nature of the object of our observation to respond to what we do, you know, what we used to call interactivity. There can be an infinite number of reasons why indicators go up or down, as Gary Angel underlined in this newsletter. I once read on an agency's blog that they had identified over one thousand factors that can influence conversion. To me, that is like saying nothing does, since it becomes almost impossible to control for each of those factors, let alone for their combinations.
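To give a sense of scale, here is a purely illustrative back-of-the-envelope sketch (nothing from that agency's post; I am simply imagining each factor as a binary, present-or-absent condition):

```python
import math

# Hypothetical framing: treat each of the "one thousand factors" as a
# simple binary (present/absent) condition you would have to control for.
n_factors = 1000

# Number of distinct factor combinations: 2^1000.
# Far too large for a float, so count its digits instead of printing it.
digits = int(n_factors * math.log10(2)) + 1
print(f"2^{n_factors} has about {digits} digits")  # ~302 digits

# Even restricting ourselves to pairwise interactions, one-at-a-time
# testing would require an absurd number of experiments:
pairs = math.comb(n_factors, 2)
print(f"Pairwise factor interactions: {pairs:,}")  # 499,500
```

Even if only a handful of those factors matter, the space of combinations dwarfs any realistic amount of site traffic, which is precisely the point: a list that long explains everything and therefore nothing.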
It is my understanding, from years of observing the digital analytics scene, that a lot of what is said about what works and what doesn't comes from opinions that circulate freely, often lacking any reference to specific research or experiment results. I often come across accepted ideas that never seem to be contradicted (I touched on that question here), even though nobody can remember who first came up with them, or what particular results led them to draw those conclusions.
I am afraid that such a set of ideas too often presents the appearance of ideology. There are many definitions of it, so allow me to offer one which is as good as any, I believe:
“Ideology provides a simplified model of the world that reflects our values, biases, and experiences. It helps people make decisions in the face of imperfect knowledge.”(1)
One certainly wishes analytics would be the antithesis of such simplified models; it is not supposed to have an agenda. However, in many situations, analytics is used for that exact purpose. But analytics practitioners too can guard against creating their own biases, their own ideology dare I say, by not rushing to judgement.
Perfecting our knowledge is a sure path to fighting all those blinding half-truths out there. I am not saying that there are Immutable Laws of Customer Behavior hidden in Nature, waiting to be brought to light and applied for all eternity. If there were one universal way to market products to people, it would have been discovered long ago, and everybody would be rich. However, I believe much more could be pooled and shared, so that we could all start to see more commonalities and patterns, which would help us increase our capacity to beat the coin.
At the end of the day, shouldn't analytics be able to be right at least 80% of the time? 70%? 60%? You know, below 50%, you would be better off flipping a coin to make your decisions. I recently had lunch with a manager who complained that his analysts could never make straightforward recommendations, always nuancing, never able to choose a side. That is basically tantamount to not knowing anything! In many circumstances, being right 60% of the time is enough (provided being wrong is not catastrophic, obviously).
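To make that concrete, here is a minimal simulation; the payoffs and the number of decisions are assumptions of mine, chosen only to illustrate the symmetric, non-catastrophic case:

```python
import random

random.seed(42)  # reproducible illustration

def net_gain(accuracy, n_decisions=200, win=1.0, loss=-1.0):
    """Total payoff of n_decisions calls, each right with probability `accuracy`.

    Assumes symmetric, non-catastrophic stakes: +1 when right, -1 when wrong.
    """
    return sum(win if random.random() < accuracy else loss
               for _ in range(n_decisions))

for acc in (0.50, 0.60, 0.70, 0.80):
    # Average over 1,000 simulated "careers" of 200 recommendations each.
    avg = sum(net_gain(acc) for _ in range(1000)) / 1000
    print(f"right {acc:.0%} of the time -> average net gain: {avg:+.1f}")
```

At 50%, the expected gain is zero, the coin; at 60%, it is already clearly positive (about +40 over 200 calls under these assumptions), which is why a firm recommendation that is merely right 60% of the time still beats endless hedging.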
Even though I do not advocate that we seek perfect knowledge, I do think we need to aim for it while asking ourselves how we know what we know. And I don't think the answer resides in more perfect tools.
(1) J. G. Koomey, Turning Numbers into Knowledge, 2008, p. 31.