If we can’t measure it, we can’t manage it, we’re told. But when it comes to customer experience, the bigger problem is mis-managing what we do measure, argues our guest blogger, Matt Watkinson.
I’m pretty sure we could structure an entire management training program around popular English proverbs.
Beauty is in the eye of the beholder — value is a matter of perception.
Don’t put all your eggs in one basket — diversify your portfolio.
Look before you leap — you’d better user test that thing.
Alas, management lore has its own proverbs which are not quite as appealing, perhaps the most popular being if you can’t measure it, you can’t manage it. A catchy phrase that would make an excellent tagline for a company that sells an opener specifically designed for cans of worms.
My inner pedant can’t help pointing out, for example, that we routinely manage things without measuring them, because measuring them would be impractical, short of creating ludicrous systems like the third-floor-staff-kitchenette-refrigerator malodorousness index, or the satisfying-car-door-closing thunkometer.
The bigger challenge, however, is that between measurement and management lies the thorny issue of interpretation, where two significant challenges are at play.
First, we have to know whether the measurements themselves are accurate.
In the world of customer experience — or marketing more broadly — this is by no means a given. Gathering accurate data is extremely difficult, not least because as soon as a metric is used to assess performance, people start cheating.
Hence, we have Campbell’s Law, which states that as soon as a metric is used as a success indicator, its accuracy is compromised.
Harder still is knowing what the measurements mean, and if we aren’t aware of certain patterns and principles, we can very easily pursue the wrong goals. Here are two such examples.
A law-like principle in marketing, known as double jeopardy, shows that larger brands have more loyal customers. Or, rather, smaller brands have fewer customers (jeopardy one) who are also less loyal (jeopardy two). The reason is that bigger brands tend to be easier to recognize and buy, so the huge number of light buyers who constitute most of any brand’s customer base buy them more often.
Imagine we look at our customer data for market share and loyalty and we see that both are lower than our much bigger competitor. We might conclude that low loyalty is the reason for our small market share and pour money into loyalty initiatives.
In reality, however, the causality runs the other way: we have low loyalty because we have a small market share. This pattern even holds for Apple and Samsung smartphones, where, contrary to popular belief, iPhone customers are no more loyal than Samsung’s once the relative sizes of their customer bases are taken into account.
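The double jeopardy pattern falls out of quite ordinary buying behaviour, which a toy simulation can make concrete. The sketch below is purely illustrative — my own crude model, not the Dirichlet models marketing scientists actually use, and the two brand weights are invented. Shoppers get idiosyncratic preferences centred on those weights, and we then count how many people buy each brand at all (penetration) and how often its buyers come back (a rough proxy for loyalty).

```python
import random

random.seed(42)

# Hypothetical market: a big brand with four times the "presence" of a
# small one. These weights are assumptions chosen for illustration only.
BRAND_WEIGHTS = {"Big": 0.8, "Small": 0.2}
SHOPPERS, OCCASIONS = 10_000, 10

stats = {brand: {"buyers": 0, "purchases": 0} for brand in BRAND_WEIGHTS}

for _ in range(SHOPPERS):
    # Each shopper has an idiosyncratic preference for the big brand,
    # centred on its market weight and clipped to a valid probability.
    p_big = min(max(random.gauss(BRAND_WEIGHTS["Big"], 0.25), 0.0), 1.0)
    basket = {"Big": 0, "Small": 0}
    for _ in range(OCCASIONS):
        choice = "Big" if random.random() < p_big else "Small"
        basket[choice] += 1
    for brand, bought in basket.items():
        if bought:
            stats[brand]["buyers"] += 1       # counted once as a buyer
            stats[brand]["purchases"] += bought

for brand, s in stats.items():
    penetration = s["buyers"] / SHOPPERS
    buy_rate = s["purchases"] / s["buyers"]  # purchases per buyer
    print(f"{brand}: penetration={penetration:.0%}, buy rate={buy_rate:.2f}")
```

Run it and the small brand comes out behind on both counts at once — fewer buyers, who also buy less often — even though no shopper in the model dislikes it. That is double jeopardy emerging from brand size alone, with no "loyalty problem" anywhere in sight.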
The second example, related to the first, is that the size of our customer base also skews satisfaction scores because the more customers we have, the more their needs will diverge and the harder it will be to keep them all happy.
As Tim Keiningham explains in The Wallet Allocation Rule, “Market share is a strong negative predictor of future customer satisfaction. So for firms with high market share levels (or goals of attaining high levels), a focus on high satisfaction is not compatible…Customer satisfaction ratings can increase as a result of a decline in market share.”
If we’re not aware of this, we might look at our increasing satisfaction scores and pat ourselves on the back when our commercial performance is actually declining, or benchmark our satisfaction scores against a smaller rival and not realize that our satisfaction is lower because we’re crushing them.
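Keiningham’s point can also be made mechanical with a deliberately crude sketch — my own toy model, not anything drawn from The Wallet Allocation Rule. If satisfaction falls with the distance between a customer’s needs and what the product delivers, then a larger, more diverse customer base scores lower on average, with no change in the product at all. Both the spread values and the 0–10 scoring rule below are invented for illustration.

```python
import random

random.seed(1)

def avg_satisfaction(n_customers: int, need_spread: float) -> float:
    """Average satisfaction for a base whose needs vary by need_spread.

    The product sits at position 0 on a single "needs" axis; each
    customer's satisfaction (0-10) falls with their distance from it.
    """
    scores = []
    for _ in range(n_customers):
        need = random.gauss(0, need_spread)
        scores.append(max(0.0, 10 - abs(need)))
    return sum(scores) / len(scores)

niche = avg_satisfaction(1_000, need_spread=1.0)    # small, homogeneous base
mass = avg_satisfaction(20_000, need_spread=3.0)    # large, diverse base
print(f"niche brand: {niche:.2f}, mass brand: {mass:.2f}")
```

The niche brand scores higher purely because its customers’ needs cluster tightly around what it offers — which is exactly why benchmarking satisfaction against a smaller rival can mislead.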
Again, the risk is allocating resources to pursuing exactly the wrong strategy.
The message is a simple one. Customer experience practitioners, by and large, have devoted far more effort to gathering data than to understanding how to interpret it. Redressing this balance is crucial if we’re to make evidence-based decisions that help our businesses prosper.