10 Things to Know About Net Promoter Scores and the User Experience

Jeff Sauro, PhD

Increasingly, companies are adopting the Net Promoter Score as their corporate metric.

In many companies, all metrics, including user experience metrics, are expected to roll up to the Net Promoter Score.

Here are 10 things to know about the Net Promoter Score if you’re concerned about improving the user experience.

  1. The Net Promoter Score is a measure of customer loyalty and is based on a single question: How likely is it that you’ll recommend this product to a friend or colleague? The response options range from 0 (Not at all likely) to 10 (Extremely likely). Responses are then bucketed into the following segments.

    Promoters: Responses of 9-10
    Passives: Responses of 7-8
    Detractors: Responses of 0-6

    Subtracting the proportion of detractors from the proportion of promoters and converting it to a percent gets you the Net Promoter Score. For example, 100 promoters, 30 passives, and 80 detractors gets you a Net Promoter Score (NPS) of 9.5% (20 divided by 210); a short calculation sketch follows this list. This means there are 9.5% more promoters than detractors. An NPS of -10% means you have 10% more detractors than promoters.

    Our friends at Satmetrix want us to remind you that Net Promoter, NPS, and Net Promoter Score are trademarks of Satmetrix Systems, Inc., Bain & Company, and Fred Reichheld.

  2. The Net Promoter Score is appealing because of its simplicity (a single question that’s easy to score) and because it’s expressed as a percentage, which can be more digestible to executives and non-math types than a mean (e.g., 70% Net Promoters vs. 7.9 out of 10). A negative percentage can be confusing, so some companies prefer to call it simply a “score” rather than a percentage. Think of it like net income (which we all know can be negative). It’s no different from subtracting two dependent proportions, as we explain in Chapter 5 of our book Quantifying the User Experience.
  3. The main advantage of the Net Promoter Score is that it gets companies thinking about metrics that come from the customer. Yes, revenue is the ultimate metric, but revenue is both a lagging indicator and not necessarily a good indicator of future growth, especially when you’re pissing off customers to get short-term revenue (think of the latest fee from your phone company, cable company, or rental-car company). What’s more, you can’t do anything about last quarter’s numbers. If you have a reasonable proxy for measuring future growth and revenue, then you might be able to improve next year’s revenue. In the process, you’ll also likely make your customers happier and more loyal!
  4. The main disadvantage of the Net Promoter Score is that it reduces an 11-point scale into a 3-point scale (Detractors, Passives, and Promoters). This has two major consequences. First, it increases the sample size you need to achieve the same level of precision as using the mean; the margin of error is usually around twice as wide as it is with the more conventional statistics (mean and standard deviation). Second, it is harder to detect differences between scores, either over time or compared to a competitor. For this reason, I use the raw responses and analyze means and standard deviations in t-tests and regression analysis (see the sketch after this list).
  5. Despite the popularity and enthusiasm for it being the “Ultimate” question, there might be better questions for your company or industry: many measures of customer satisfaction and customer loyalty correlate. Reichheld, in his 2006 book “The Ultimate Question” (p. 28), points out that the likelihood-to-recommend question was the best or second-best predictor of repeat purchases or referrals in 11 out of 14 industries (79%). Likelihood to revisit, repurchase, or reuse might be a better indicator of customer loyalty for your product or industry. I often saw this with business-to-business products I worked on. How likely is it that you’d recommend this non-profit accounting software to a friend? Despite the question’s seeming irrelevance, it still correlated highly with other questions, and we were still able to focus on changes over time. So don’t throw the baby out with the bathwater.
  6. Don’t just collect NPS: The Net Promoter Score might be a good number to track, but it’s usually the symptom of high or low customer loyalty, not the cause. People are or are not recommending the product, website, or service because of something, so you need a few good candidate questions in your short surveys to identify the root causes and improve. Usually questions about value, quality, usability, and a few key features will get you on the right track. You can then conduct a key-driver analysis (sketched after this list) to determine statistically which features or attitudes have the biggest impact on Net Promoter Scores. In one key-driver analysis I conducted for a client, I found the biggest driver of detractors was that emails were being sent to customers too often!
  7. Compare to Benchmarks: The NPS by itself might seem more intuitive than an average score because it is expressed as a percentage, but what counts as a good, average, or poor score varies a lot by industry (think cable companies versus luxury hotel chains). For example, the average NPS for consumer software products is 21%, compared with about 6% for cable providers.
  8. Ask “why” for detractors: If I could ask only one open-ended question on a survey, it would be for detractors to briefly explain why they gave a 0-6 response. You can usually categorize these responses pretty quickly into major groupings. Often, many of the detractors will say things you can’t do much about, like “I just don’t recommend products to friends” or “I really like the product,” but there are almost always some quick fixes and patterns in what you can fix.
  9. Ease of use explains between 30% and 50% of users’ likelihood to recommend for software and websites. A large analysis of System Usability Scale (SUS) scores collected along with Net Promoter Scores found that a good chunk of why people recommend is based on their perception of ease of use. Improving ease of use, then, should improve loyalty. How do you improve ease of use? A quick usability test with just a handful of participants will often reveal the most obvious issues.
  10. Not all promoters are created equal. Just because a respondent gives a 9 or 10 on the likelihood-to-recommend question doesn’t mean they will actually recommend. To measure what I call promoter efficiency, you’d ideally track customers over time to see whether they actually recommended the product to a friend. As an alternative, ask respondents in the same survey whether they have actually recommended it to anyone in the last year and use that as a proxy for their future behavior (a small calculation sketch follows the list). I’ve included this figure in the NPS benchmark report; on average, 68% of promoters report having recommended in the last year (ranging from 43% to 96%).
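
To make the arithmetic in item 1 concrete, here’s a minimal sketch (in Python) of the Net Promoter Score calculation. The responses are hypothetical, constructed to match the 100/30/80 example above.

```python
# Minimal sketch of the NPS calculation from item 1.
# The responses list is made up to match the 100 promoters / 30 passives / 80 detractors example.

def net_promoter_score(responses):
    """Return the NPS (as a percentage) from 0-10 likelihood-to-recommend ratings."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

responses = [9] * 100 + [7] * 30 + [3] * 80
print(round(net_promoter_score(responses), 1))  # 9.5
```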
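
Item 4 mentions analyzing the raw 0-10 responses with t-tests rather than comparing Net Promoter Scores. Here’s a rough sketch of that approach using scipy; both samples are made up for illustration.

```python
# Sketch of the item 4 approach: compare two sets of raw likelihood-to-recommend
# responses with a two-sample t-test instead of comparing Net Promoter Scores.
# Both samples below are hypothetical.
from scipy import stats

this_quarter = [10, 9, 7, 8, 6, 9, 10, 8, 7, 9, 5, 8]
last_quarter = [7, 6, 8, 5, 9, 6, 7, 8, 4, 7, 6, 5]

t_stat, p_value = stats.ttest_ind(this_quarter, last_quarter)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```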
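
Item 6 refers to a key-driver analysis. One common way to run one is a multiple regression of likelihood to recommend on a handful of attribute ratings, comparing the standardized coefficients. The sketch below assumes the pandas and statsmodels libraries; the data and the driver names (ease_of_use, value, quality) are hypothetical.

```python
# Rough sketch of a key-driver analysis (item 6): regress likelihood to recommend
# on a few attribute ratings and compare the standardized coefficients.
# The data and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "recommend":   [9, 7, 4, 10, 6, 8, 3, 9, 7, 5],
    "ease_of_use": [8, 7, 5, 9, 6, 8, 4, 9, 6, 5],
    "value":       [9, 6, 4, 9, 5, 7, 3, 8, 7, 4],
    "quality":     [8, 8, 5, 10, 6, 7, 4, 9, 6, 6],
})

z = (df - df.mean()) / df.std()            # standardize so drivers are comparable
X = sm.add_constant(z[["ease_of_use", "value", "quality"]])
model = sm.OLS(z["recommend"], X).fit()
print(model.params.drop("const").sort_values(ascending=False))  # biggest drivers first
```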
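
The promoter-efficiency proxy in item 10 is just the share of promoters who report having actually recommended in the last year. A toy sketch, with made-up (score, recommended) pairs:

```python
# Toy sketch of "promoter efficiency" (item 10): among respondents scoring 9-10,
# what share report actually having recommended in the last year?
# The (score, recommended) pairs are hypothetical survey responses.
survey = [(10, True), (9, True), (9, False), (10, True), (9, False), (10, True)]

promoter_recs = [recommended for score, recommended in survey if score >= 9]
efficiency = 100 * sum(promoter_recs) / len(promoter_recs)
print(f"Promoter efficiency: {efficiency:.0f}%")  # 67% for this toy sample
```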
