Is Net Promoter Score overrated?

Find out why net promoter score (NPS) is not the best metric to help you improve customer experience (CX)



The Net Promoter Score (NPS) is probably the most overrated, overused and misunderstood number in customer experience (CX). And it has an absolute chokehold on the industry.

For a metric so synonymous with the CX industry, it is completely misaligned with the actual experience it claims to measure. Sometimes it can feel like pointing out the emperor’s new clothes: everyone is praising it, but in reality it’s telling us absolutely nothing.

What is Net Promoter Score?

NPS is the most commonly used metric among organizations looking to measure CX. It’s a single question answered on a scale of zero to 10: simple to ask, simple to score. It’s executive-friendly and everyone uses it. It has become the “everything” number that supposedly explains CX.

But is that really what it does? Is it doing its job? Is it right for your organization?

Unlikely. In fact, it’s more likely that you’re using it completely incorrectly. That might be why you haven’t been able to move the dial on customer growth since implementing it.

But where did it come from? Why is everyone using it? How did we get here?

All the way back in 2003, Bain & Company’s Fred Reichheld introduced Net Promoter Score with a bold claim: it was “the one number you need to grow.” (Marketing genius!) It caught fire instantly.

Executives loved it because it was simple; consultants loved it because executives loved it. It seemed like we finally had the commercial metric that measured CX as a financial number. Suddenly every boardroom wanted to know their NPS.

The problem is that popularity does not equal accuracy. NPS became a corporate comfort blanket: easy to explain in a slide deck, and words like “promoter” sounded like dollar signs. But it was never designed to capture the complexity of real experiences, and we are paying for it now, as organizations and as customers.


What about the customer? 

Is it possible to attribute the general decline in everyday CX to the prevalence of NPS? We are using a question and a scale designed to measure brand loyalty, not the everyday experience, and that’s where we’ve gone wrong.

For starters, the very name "Net Promoter Score" omits the crucial element it aims to measure: the customer. The metric reduces customers to mere classifications, with no mention of the word "customer" anywhere in its framework. That might seem insignificant, but putting the customer at the core of CX means keeping them central to the focus, not removing them from it.

The question

Semantics aside, let’s look at the flawed question: “How likely are you to recommend brand/product/service X to a friend or family member?” Never mind the countless internet memes making fun of this survey question:

  • “I can’t tell you how unlikely I am to talk about printer toner cartridges to anyone.”
  • “This is not a conversation I would ever have.”
  • “0, why do you think anyone is talking about this to each other?”

This question aims to identify brand ambassadors, focusing on how many of your customers would hypothetically advocate for your brand. Recommendations are crucial for growth, but they are only one indicator of CX. At its core, the question gauges potential advocacy; the responses offer little insight into the actual experience.

And that’s the problem. It’s not about the actual experience your customer just had with you. It’s not asking, “Was the website easy to navigate?” or “Did the delivery arrive on time in good condition?” or “Did you find what you came for?” Instead, it’s jumping straight to “Would you recommend us?” which is like skipping to dessert before you’ve served the main meal.

Customers also know what’s going on. They’ve seen the NPS question a million times, and they know it’s not really about them. It’s about the business. It’s performative. They’re being asked: “Would you market us for free?” That’s not a measure of satisfaction; it’s a vanity metric. If you’re serious about improving CX with actionable insights, you need to dig into the why behind customers' feelings, not rely on whether they’d theoretically talk about you at a barbecue. That’s what you need to be asking after you’ve asked about their actual experience.

The scoring system

In addition to the flawed question, the scoring system is also problematic. The zero-to-10 scale groups your customers into three overly simple, and arguably incorrect, categories: detractors, passives and promoters.

Here’s how it works:

  • A score of zero to six makes a respondent a "detractor," counted as -100.
  • A score of seven or eight makes a respondent a "passive," counted as zero.
  • A score of nine or 10 makes a respondent a "promoter," counted as +100.

From there, you almost always end up Googling the calculation (surely I’m not the only one!): subtract the percentage of detractors from the percentage of promoters. Passives are not considered in the equation at all. They mean NOTHING. We’re getting into silly math territory that makes no sense. Why are we ignoring the experience of an entire group of customers? Did their kind-of-ok experience not matter?
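To make the arithmetic concrete, here is a minimal Python sketch of the standard calculation. The function name and the sample responses are illustrative assumptions, not part of any official NPS tooling.

    # A minimal sketch of the standard NPS calculation, assuming a plain
    # list of 0-10 survey responses. Names and sample data are illustrative.
    def net_promoter_score(scores):
        """Return NPS: percent promoters (9-10) minus percent detractors (0-6)."""
        if not scores:
            raise ValueError("need at least one response")
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        # Passives (7-8) count toward the total but add nothing to the score.
        return 100 * (promoters - detractors) / len(scores)

    # Ten responses: four promoters, four passives, two detractors.
    responses = [10, 9, 9, 10, 8, 7, 8, 7, 5, 6]
    print(net_promoter_score(responses))  # 40% - 20% = an NPS of 20.0

Notice how the four passives vanish entirely: a survey made up of nothing but sevens and eights would score exactly zero.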

The numbers-to-feelings ratio

The calculation is only the second wildest part; the logic applied to the groupings is even more puzzling. True promoters, the people who are brand loyal, love your business and go out of their way to recommend you and act as brand ambassadors, will not answer that question with anything but a 10.

That’s it. If you want a true brand ambassador, a real promoter, a 10 is the only thing they will give you. A nine is not on their radar.

Since most customers are not answering the question as brand ambassadors, though, let’s look at someone who had a generally good experience. Things went smoothly, with no hiccups or friction; they got what they needed, and the experience was overall fairly pleasant. Maybe the service even went above and beyond their expectations. Any combination of these experiences could lead someone to choose an eight, nine or 10. To the average customer, an eight probably equates to “pretty good.”

In some cultures, people never give nines or tens. Countries like Japan, Germany and France are naturally low scorers. They may be satisfied with their experience, but they will not mark you high on the scale, dragging your rating down.

On the other end of the scale, a six is considered a negative score, “a detractor.” Ask any person: on a scale of zero to 10, a five or a six is never considered bad. It’s “fine,” “in the middle,” “mediocre,” “neither good nor bad.” The person who answers a five or a six is not actively demoting you at a barbecue.

When someone is a REAL detractor, similar to the 10 concept, they will never give you anything but a zero or a one. When they are mad enough to be talking poorly about you, a zero is practically guaranteed.

The scale itself is arbitrary and incorrect. The low end, in particular, allows no nuance in experience: truly neutral numbers get treated as negatives. Scores of two, three and four suggest a poor experience, but not one bad enough to earn an angry red zero. It’s unlikely those customers will recommend you anytime soon, but it’s entirely possible they’ll return, albeit with much lower expectations.

So far, NPS is:

  • Forgetting to put the customer at the center of the metric.
  • Skipping a step and asking the wrong question.
  • Classifying customers’ scores incorrectly.
  • Excluding a large customer segment because of a wacky calculation.

The kicker

NPS provides no insight into the experience or the reasons behind customer sentiment, even after you’ve diligently gathered all those zero-to-10 scores. A five-point drop in your score won’t clarify whether customers are unhappy with something in the experience (pricing, persistent website crashes, delayed deliveries) or simply cooling on your brand and losing interest. You’re left with a strange number, attached to a question that never asked about the experience itself, and one that might mean different things to different people.

The way people interpret the question and the score varies; when the customer answers it one way and the business reads it another, something gets lost in translation.

That’s why so many businesses end up “chasing the score” instead of fixing the experience. They celebrate a bump without knowing what drove it or panic at a drop without a clear action plan. NPS is a temperature check when the patient has a broken arm. Yet, we still ask, “But would they promote us?”

When you boil a whole experience down to a single digit and the wrong question, you flatten out the nuance that actually matters to customers. Real CX lives in asking the right question: What went wrong, what went right, and where can you act? NPS skips all of that and leaves you with a vanity metric that feels like insight but rarely drives meaningful action.

What is it good for? 

So is NPS good for anything? Is it ever appropriate? Despite my scathing review, yes, albeit with a big adjustment to the scoring.

Notably, everyone uses NPS, so it is an obvious choice for benchmarking against competitors. What do your customers think about you comparatively? In theory, asking customers the same question using the same scale and grading would give an accurate representation of brand loyalty. If you can’t beat ‘em, join ‘em - and figure out where you stand against them.

It is also a useful metric for startups and new brands, arguably using it for what it was made for. Understanding how likely your customers are to promote you to others is genuinely helpful for a new brand that needs word of mouth to grow. Knowing whether customers would recommend you to friends and family is a useful way to judge whether the product or service you offer is worth the effort.

Finally, it is useful as a loyalty indicator. Are your customers loyal? Would they recommend you? Intent to recommend can loosely correlate with growth; just note that it doesn’t measure satisfaction or CX quality.

Ultimately, NPS is an ineffective measure of CX. It doesn’t ask customers the right question, the scoring is wrong and the calculation is skewed. On top of that, it removes the very focus of the measurement, the customer, from its own name.

NPS probably isn’t going anywhere anytime soon. By now it’s deeply entrenched in our dashboards, KPIs and board reports. However, we can use it wisely by applying it to the right brands or products. For example, it might work for a car brand but not a funeral home. You can introduce more comprehensive metrics to CX (more on those another day) and build a balanced perspective of what NPS is really measuring.

It may have been “the one number you needed to grow” in 2003, but looking into 2026, the real challenge will be turning feedback into action. What’s the best question you should be asking your customers?
