What is the Net Promoter Score?
The Net Promoter Score (NPS) is a management tool that can be used to gauge the loyalty of a company's customers. NPS was first introduced by Fred Reichheld in his 2003 Harvard Business Review article "The One Number You Need to Grow".
The NPS is based on a single question: "How likely are you to recommend [product or company name] to a friend or colleague?" Customers are asked to rate their answers on an 11-point scale, from 0 (not at all likely to recommend) to 10 (extremely likely to recommend).
These scores are then split into three categories: "Detractors," "Passives," and "Promoters."
- Score of 0–6: Detractors
- Score of 7–8: Passives
- Score of 9–10: Promoters
The "Net" in
Net Promoter Score comes from subtracting the percentage of detractors from the
percentage of promoters. A negative score means you have more detractors than
promoters and a positive score means there are more promoters (that is, more
positive word of mouth than negative word of mouth). An NPS can be as low as −100 (everybody is a detractor) or as high as +100
(everybody is a promoter). An NPS that is positive (i.e., higher than zero) is
felt to be good, and an NPS of +50 is considered excellent.
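To make the arithmetic concrete, here is a minimal sketch (in Python, using made-up ratings) of how the score is calculated from raw 0–10 responses:

```python
# Minimal sketch of the NPS calculation from raw 0-10 ratings.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)   # scores of 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # scores of 0-6
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical example: 5 promoters, 3 passives, 2 detractors -> NPS of +30
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0
```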
Criticisms of NPS
A key benefit of NPS is that it provides a single score that is easy to understand. However, in our view it has only limited practical application for companies that want to monitor, understand and improve their overall customer experience, as it measures only one aspect of the overall experience: loyalty. Below you will find some of the most common criticisms of the NPS.
1. NPS correlates with customer satisfaction but should not replace it
Customer loyalty and customer satisfaction are related concepts and they are correlated (people usually provide similar ratings for satisfaction and loyalty). At the same time, customer experience questions and satisfaction questions are also correlated. However, the fact that one measure is correlated with another does not mean that one can replace the other, which is what NPS attempts to do.
You are unlikely to have customers who are loyal but unsatisfied, but you can have satisfied customers who aren't loyal. However, a key concept behind NPS, and loyalty measures in general, is that customer satisfaction and experience questions are deliberately excluded in order to focus only on loyalty.
In his book "The Loyalty Effect", Fred Reichheld, the creator of NPS, makes the case that 60% to 80% of customers who defected or didn't repurchase a product were in fact satisfied or very satisfied. In the auto industry, for example, he cited figures which showed a 90% satisfaction rate and yet, on average, only 40% of customers repurchased the same brand of car. This is an important issue, as satisfaction does not mean loyalty (see our previous blog post about satisfaction, "Do you really need satisfied customers?").
So, while satisfaction relates to loyalty, they are not the same thing. Satisfaction measures the customer's experience at a given time, whilst NPS measures long-term attachment (loyalty). It is therefore best to measure both and not rely solely on one. Having multiple measures requires minimal additional effort to record and analyse and is significantly more valuable and actionable for you to monitor and manage – this is our approach.
2. NPS is not actionable
NPS is also not actionable. What if your company has a good score? What does that mean? How can you change the score? We simply don't know the "So what?". Similarly, what if you have a bad NPS? What do you do? Is it linked to a group of underperforming or unprofitable gyms, or a general company-wide issue with the customer experience?
If a metric is just an
“it is what it is” number, as NPS is, it has no context and no predictive power, can’t be used
alone, and doesn’t give you clues about what to do. Its usefulness must therefore
be questioned, especially if used in isolation.
At CJM Research we believe that you should measure not only overall satisfaction and loyalty but also Key Performance Indicators (KPIs) of the customer experience. These are then reported as a composite experience index score made up of all the KPI scores. This approach has been reported as best practice in other research.
Combining the experience scores provides a
more robust and stable overall experience score that is directly linked to its
component KPI scores. This not only provides a truer overview of the customer
experience but also makes it easier to identify and act upon areas for
improvement and measure the impact of any changes or improvements to service.
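As an illustration only, a composite experience index of this kind might be built as a simple average of KPI scores rescaled to a common range. The KPI names, scale maxima and equal weighting below are hypothetical assumptions, not a description of our actual scoring model:

```python
# Illustrative sketch: composite experience index as an unweighted mean of
# KPI scores, each rescaled to a common 0-100 range. KPI names and scales
# are hypothetical; a real programme defines its own KPIs and weights.
kpi_scores = {                       # (mean score, scale maximum)
    "staff_friendliness": (4.2, 5),
    "cleanliness": (3.8, 5),
    "value_for_money": (7.1, 10),
}

index = sum(100 * score / scale for score, scale in kpi_scores.values()) / len(kpi_scores)
print(round(index, 1))  # composite experience index, e.g. 77.0
```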
For example, if you have a low overall score for a part of your business you can directly link this to a particular KPI and act to improve this specific aspect of service. With NPS you would know that a part of your business has a lower score, but not necessarily why. Our scoring is designed to help management identify and act on the information to improve service. This in turn makes our scoring easier to use: it is not only a single experience score but one that is linked to the actual customer experience, and this helps you improve overall satisfaction and loyalty for customers in a faster and more targeted way.
Furthermore, tracking specific aspects of the customer experience allows more detailed analysis that can tell you which parts of the experience are most important in driving satisfaction and loyalty. NPS on its own cannot do this.
3. NPS only works in the context of your wider industry
The Net Promoter Score has no real meaning without the context of competitor scores. Whilst a score of +50 is considered excellent, this actually differs significantly between industries. For example, a utility company with an NPS of +1 may be excellent in the context of an industry where the average NPS could be -20.
We have an example of
this where one of our other clients had an NPS of +69 which they thought was an
excellent score…until they conducted some competitor customer research and
found they were only average compared to their main competitors.
This is because you have to interpret NPS scores relative to other NPS scores in the same industry. However, most NPS-centred research does not include measures of competitors, leaving the results without context, i.e. you do not know if your score is good, average or poor compared to your competitors.
You also cannot
directly compare NPS across different industries or product categories because
some products simply don’t lend themselves to word-of-mouth (e.g., toilet paper
or car oil), while others can evoke passionate positive or negative views (e.g.,
retailers or restaurants).
4. The same NPS score can come from very different customer experiences
NPS is also problematic because there are many possible combinations of detractors, passives, and promoters that produce the same number; you could achieve an NPS of +30, for example, in dozens of ways.
A company with a +30 NPS could have 30% promoters, 70% passives, and 0% detractors, while another company with a +30 NPS could have 60% promoters, 10% passives, and 30% detractors. It is likely that a company with 30% promoters and 0% detractors is very different from one with a polarised customer base of 60% promoters vs 30% detractors, even though their NPS is seemingly the same.
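A quick sketch of the arithmetic, using the two hypothetical mixes above, shows how the headline number hides the difference:

```python
# Two very different customer bases that produce the same headline NPS.
mix_a = {"promoters": 30, "passives": 70, "detractors": 0}   # % of respondents
mix_b = {"promoters": 60, "passives": 10, "detractors": 30}

for name, mix in (("A", mix_a), ("B", mix_b)):
    score = mix["promoters"] - mix["detractors"]
    print(f"Company {name}: NPS = +{score}")  # both print +30
```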
NPS also ignores the fact that the voices and reach of promoters or detractors can be drastically different depending on their motivation and the digital channels they have at their disposal. There may be a small number of very vocal detractors who go online and write negative reviews, and they can outweigh a large non-vocal group of promoters, even if the company has a positive and high NPS.
5. 11-point scales are not as good as others
Research by Stanford University and others found that the 11-point scale advocated by NPS has the lowest predictive validity of the scales tested. We typically recommend a 10-point scale for overall satisfaction and often mirror NPS scoring for recommendation (for consistency with NPS rather than by preference). This provides more detailed scoring, which is useful for tracking research.
For experience KPIs we suggest either a five- or 10-point scale depending on our clients' legacy research and/or objectives. Five-point scales are simpler to interpret and easier for respondents to complete, especially on smartphones (50%+ of online surveys are completed using phones), but smaller changes in average ratings are more difficult to identify.
6. The scoring inflates the margin of error
By converting an 11-point scale into essentially a 2-point scale made up of detractors and promoters (NPS ignores passives in the scoring), information is lost. Furthermore, this new categorisation increases the margin of error around the net score (promoters minus detractors). Unfortunately, this means that if you want to show an improvement in Net Promoter Scores over time, it can require a much larger sample size; otherwise the difference won't be distinguishable from the margin of error.
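As a rough sketch of why the margin is wide, treat each response as +1 (promoter), 0 (passive) or -1 (detractor) and apply a standard 95% normal approximation; the proportions and sample size below are hypothetical:

```python
import math

def nps_margin_of_error(p_promoters, p_detractors, n, z=1.96):
    """Approximate 95% margin of error for an NPS, in NPS points.

    With responses coded +1 / 0 / -1, the variance of a single response is
    (p_promoters + p_detractors) - (p_promoters - p_detractors)**2.
    """
    variance = (p_promoters + p_detractors) - (p_promoters - p_detractors) ** 2
    return 100 * z * math.sqrt(variance / n)

# Hypothetical example: 40% promoters, 20% detractors, 400 respondents
print(round(nps_margin_of_error(0.40, 0.20, 400), 1))  # roughly +/- 7.3 NPS points
```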
7. The relationship between NPS and growth remains unproven
One final, but important, criticism of NPS calls into question the strength and reliability of the link between NPS and measures of business growth and profitability. Research that has attempted to replicate the link between business performance and NPS has not found statistically significant relationships between the two. Similarly, the relationship between growth and NPS is reported to be less consistent or strong when using tracking data.
Even Fred Reichheld, the inventor of NPS, admits that the findings of his initial research for NPS were flawed, stating:
"A number of perspicacious readers have noted that the statistical evidence provided in my book The Ultimate Question is imperfect. It does not provide proof of a causal connection between NPS and growth. Nor are some of the timeframes ideal."
We are not suggesting that the NPS is a useless score, but it is not the only score you should measure. Instead, it has its place alongside satisfaction and more detailed experience questions in a balanced customer experience research programme.