Marketing decisions will always depend on some level of Gut Feel – an understanding of the audience and the behaviors specific segments tend to exhibit. It follows that any useful, actionable analysis of marketing data must involve a degree of Gut Feel. So I don’t quite understand any argument pitting these two ideas against each other, do you?
Perhaps we should start with a definition of Gut Feel to make sure we’re all on the same page:
Gut Feel – past problem-solving experience, including previous outcomes or patterns, giving rise to a “feeling” or prediction about the correct action to take on a current problem.
In other words, Gut Feel is not a guess, it’s a projection based on history and experience. Agreed?
Seems to me the more important question in this debate is this: to what degree should Gut Feel be applied, given any particular situation? What are the important dimensions and variables?
Dimensions: I’d say Confidence in the Measurement, and Nature of the Gut in question. Confidence in Measurement probably covers areas like:
1. Data quality – how error-prone is the data collection?
2. Nature of results – are we looking at raw “counts” or the results of a controlled test?
3. Structure of inquiry – was the measurement or test set up properly? Is it unbiased?
How should a Marketer apply Gut Feel to these issues? Become a bit more familiar with data collection and testing methodology, so you can challenge the data and the conclusions drawn from it. Ask your analyst to explain the ideas above to you at a fundamental level. If your analyst cannot explain these basic sources of analytical error, or you suspect a problem, apply more Gut Feel to the results.
Here’s an example: in much of web analytics, the data collected is accurate but can suffer from information being “dropped” or never collected. For this reason, it’s a good idea to question a conclusion that doesn’t really move the needle. If Campaign 1 beats Campaign 2 by only 1%, Gut Feel tells me the “beat” could be false – an artifact of a specific situation rather than a true difference. I want to see at least a 5% – 10% beat to be really confident there is a difference between Campaign 1 and Campaign 2; the bigger the beat, the more confident I am.
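That rule of thumb can be sanity-checked with a standard two-proportion z-test. A minimal sketch, with entirely made-up campaign numbers for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """How surprising is the observed beat if there is really no difference?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

n = 100_000  # hypothetical visitors per campaign
z1, p1 = two_proportion_z(5_050, n, 5_000, n)  # 1% relative beat
z2, p2 = two_proportion_z(5_500, n, 5_000, n)  # 10% relative beat
print(f"1% beat:  z={z1:.2f}, p={p1:.2f}")   # well within noise
print(f"10% beat: z={z2:.2f}, p={p2:.4f}")   # convincingly real
```

Even at 100,000 visitors per campaign, the 1% beat is indistinguishable from noise, while the 10% beat is overwhelming evidence of a real difference – which is exactly what the Gut was saying.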
For analysts, how do you handle Gut Feel? Perhaps these factors matter most, for the Gut-er in question:
1. Years of experience in subject area – how many times have situations similar to the current one been encountered, tested, resolved?
2. Quality of Experience – we have all seen two people with “5 years’ experience” hold dramatically different opinions. In Online Marketing, for example, compare a person with 5 years of experience in Display Ads to one with 5 years in Pay per Click. If you are trying to solve a PPC problem, the PPC Gut is the one to listen to, all else being equal.
How should an analyst react to Gut Feel pressures? Don’t be offended by, or resistant to, a Gut Feel inquiry; it’s part of the process and essential to building credibility. Lay out the facts. If “the Gut” won’t budge, the best solution is simply to repeat the experiment, perhaps with slight changes to appease the Gut. If the experiment is repeated 3x and consistently gives the same answer, guess what? That’s confidence, which is all the Gut should need.
Now that we understand the nature of the Measurement and of the Gut Feel in play, we can decide how much weight each will receive in the decision. The question is really this: for a given decision, what mix is applied – what is the weighting of Measurement versus Gut Feel?
Generally, these factors come into play:
1. Decision risk – how much does the correct answer really matter? Is the problem isolated, or is it interconnected? Could there be cascading negative effects throughout the company if the wrong decision is made?
2. Financial risk – how much money is at stake here? 1% of budget? 40% of budget? Is the decision really worth the resources to reach 99% certainty?
3. Scale risk – is it possible the correct decision is different given volume changes? What works as a small test may not work across the enterprise.
Then one chooses an intelligent balance of Gut Feel and Measurement, based on Confidence in the Measurement, the Nature of the Gut, and the Risk attached to getting the decision wrong. Ideally, the Analyst and the Marketer make this decision jointly.
I have seen both ends of this decision spectrum: cases where proper Measurement consumed an incredible amount of resources just to prove that Gut Feel was incorrect on a very small problem. In the end, Measurement was correct, but the opportunity cost of not solving other significant problems in a more timely way was huge. The cost of “real proof” exceeded the value by 10X.
On the other hand, I’ve seen cases where highly sophisticated Measurement techniques were rightly used to solve very large, extremely risky problems. The Measurement answer did not match Gut Feel, and ultimately Gut Feel was proven to be correct – there was a fault in the test set-up and analysis. This takes an enormous Gut to pull off, but it happened, and averted a catastrophe.
The end game is this: increased confidence that the outcome of the decision will be “as expected.” Both Measurement and Gut Feel play a role.
Gut Feel is simply a probability that the present situation is like others encountered before, and should be treated as such. If the Measurement side produces results contrary to Gut Feel that are highly unlikely to be faulty, the Gut is probably wrong in this particular circumstance – not wrong all the time.
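Treating Gut Feel as a probability can be made literal with Bayes’ rule: take the Gut’s confidence as a prior and update it each time a test result comes in. A small sketch with illustrative numbers, showing why the “repeat it 3x” approach works on even a very confident Gut:

```python
def update_gut(prior, p_result_if_gut_right, p_result_if_gut_wrong):
    """Bayes' rule: revise confidence in the Gut after seeing one test result."""
    num = prior * p_result_if_gut_right
    return num / (num + (1 - prior) * p_result_if_gut_wrong)

gut = 0.80  # the Gut starts out 80% sure
# each repeated test contradicts the Gut: such a result is unlikely (10%)
# if the Gut is right, but likely (90%) if the Gut is wrong
for trial in range(3):
    gut = update_gut(gut, 0.10, 0.90)
    print(f"after test {trial + 1}: confidence in Gut = {gut:.2f}")
# an 80% prior drops below 1% after three consistent, contradicting tests
```

And the update runs the other way, too: if the tests keep agreeing with the Gut, its confidence is earned, not stubborn.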
If there is legitimate doubt on the Gut Feel side, Measurement should design tests specifically to remove the doubt. If the tests do not produce strong evidence Gut Feel is correct, then at the very least, Gut Feel should be less confident.
When more testing is not possible, is inconclusive, or does not generate a significant “beat,” then there’s really no choice left, is there? If Measurement has done the best it can do, Gut Feel rules the day.
So, next time you want to have an argument about the respective values of Gut Feel and Measurement, how about putting some context around the players and decision to be made? The best decisions are not where one wins over the other, but where they meet in a reasonable way to build confidence in the decision.