
10 Benchmarks for User Experience Metrics

Jeff Sauro • October 16, 2012

Quantifying the user experience is the first step to making measured improvements.

One of the first questions asked of any metric is "What's a good score?" As in sports, a good score depends on the metric and the context.

Here are 10 benchmarks with some context to help make your metrics more manageable.

1.  Average Task Completion Rate is 78%: The fundamental usability metric is task completion. If users cannot complete what they came to do on a website or in software, then not much else matters. While a "good" completion rate always depends on context, across more than 1,100 tasks we've found the average task completion rate is 78%.
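Because completion is a binary outcome per user, a completion rate from a small usability sample is best reported with a confidence interval. Here is a minimal sketch of the adjusted-Wald interval; the sample data and the 95% z-value are illustrative, not from the article:

```python
# Completion rate with an adjusted-Wald 95% confidence interval.
# Illustrative sketch: the 14/18 sample below is made up.
import math

def completion_rate_ci(successes, n, z=1.96):
    # Adjusted-Wald: add z^2/2 successes and z^2 trials before computing.
    n_adj = n + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 14 of 18 users completed the task (~78% observed).
p, lo, hi = completion_rate_ci(14, 18)
print(f"adjusted rate: {p:.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")
```

The adjustment pulls the point estimate slightly toward 50% and keeps the interval sensible at small sample sizes, which is typical of usability tests.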

2.  Consumer Software Average Net Promoter Score (NPS) is 21%: The Net Promoter Score has become the default metric for many companies for measuring word-of-mouth (positive and negative). In examining 1,000 users across several popular consumer software products, we found the average NPS was 21%.

3.  Website Average Net Promoter Score is -14%: We also maintain a large database of Net Promoter Scores for websites. The negative net promoter score shows that there are more detractors than promoters. This suggests that users are less loyal to websites and, therefore, less likely to recommend them. It could be that the bulk of users on any one website are new and are therefore less inclined to recommend things they are unfamiliar with.
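NPS is computed from 0-10 "likelihood to recommend" ratings: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch with made-up ratings:

```python
# Net Promoter Score from 0-10 "likelihood to recommend" ratings:
# % promoters (9-10) minus % detractors (0-6). Ratings are made up.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 6, 10, 9, 3, 8, 5]
print(f"NPS: {nps(ratings):+.0f}")  # 4 promoters, 3 detractors -> +10
```

Passives (7-8) count in the denominator but neither add nor subtract, which is why a site with many lukewarm visitors can easily go negative, as in #3.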

4.  Average System Usability Scale (SUS) Score is 68: SUS is the most popular questionnaire for measuring the perception of usability. Its 10 items have been administered thousands of times. SUS scores range from 0 to 100. Across the 500 datasets we examined, the average score was 68. The table below shows the percentile ranks for a range of scores, how to associate a letter grade with a SUS score, and the typical completion rates we see (also see #5).

Grade   SUS   Percentile Rank   Est. Completion Rate
A       82    93%               100%
B       75    73%               86%
C       68    50%               67%
D       55    20%               30%
F       50    13%               16%
F       44    8%                0%
Table 1: Raw System Usability Scale (SUS) scores with associated percentile ranks, estimated completion rates, and letter grades. Adapted from A Practical Guide to SUS; updated by Jim Lewis, 2012.
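The standard SUS scoring recipe behind these raw scores: each of the 10 items is rated 1-5; odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A sketch with made-up responses:

```python
# Standard SUS scoring: 10 items rated 1-5. Odd-numbered items
# contribute (response - 1), even-numbered items (5 - response);
# the sum is scaled by 2.5 to a 0-100 score. Responses are made up.
def sus_score(responses):
    assert len(responses) == 10, "SUS has exactly 10 items"
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0 (a "B" above)
```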

5.  High task completion is associated with SUS scores above 80: While task completion is the fundamental metric, high or even perfect task completion doesn't mean you have perfect usability. Across the 122 studies behind the table of SUS scores above, average task completion rates of 100% can be associated with good SUS scores (around 80) or great SUS scores (90+). Associating completion rates with SUS scores is another way of making them more meaningful to stakeholders who are less familiar with the questionnaire.

6.  Average Task Difficulty using the Single Ease Question (SEQ) is 4.8: The SEQ is a single question that asks users to rate how difficult they found a task on a 7-point scale, where 1 = very difficult and 7 = very easy. Across 200 tasks we've found the average rating is 4.8, higher than the nominal midpoint of 4 but consistent with other 7-point scales.

7.  Average Single Usability Metric (SUM) score is 65%: The SUM is the average of task metrics: completion rates, task times and task-difficulty ratings. As such, it is affected by completion rates, which are context-dependent (see #1 above), and by task times, which fluctuate with the complexity of the task. Despite this context sensitivity, across 100 tasks on websites and consumer software I've seen an average SUM score of 65%. This is for 3-metric SUM scores; it will be higher for 4-metric scores, which include errors, but most of the datasets I have used are 3-metric SUM scores. The table below shows SUM scores and their percentile rankings from the 100 tasks. For example, a SUM score above 87% puts the task in the 95th percentile.
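As a rough illustration of how three task metrics can be blended into one percentage, here is a simplified sketch. Note that the published SUM method standardizes each metric against specification limits; the time scaling against a target time below is a stand-in assumption, not the actual procedure:

```python
# Simplified 3-metric SUM sketch: convert completion, SEQ ease, and
# task time each to 0-100% and average them. The target-time scaling
# is an illustrative assumption; the published SUM method standardizes
# metrics against specification limits instead.
def sum_score(completion_rate, mean_seq, mean_time, target_time):
    ease_pct = 100 * (mean_seq - 1) / 6                 # SEQ 1-7 -> 0-100%
    time_pct = 100 * min(1.0, target_time / mean_time)  # at or under target caps at 100%
    comp_pct = 100 * completion_rate
    return (comp_pct + ease_pct + time_pct) / 3

# 78% completion, SEQ 4.8, 120s mean time vs. a 90s target (all made up).
print(round(sum_score(0.78, 4.8, 120, 90), 1))
```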

SUM %   Percentile Rank
Table 2: SUM percent scores from 100 website and consumer software tasks and their percentile ranks. For example, a SUM % score (from averaging completion rates, task time and task difficulty) of 55 was at the 25th percentile, meaning it was worse than 75% of all tasks.

8.  The average SUPR-Q score is 50%: The Standardized Universal Percentile Rank Questionnaire (SUPR-Q) comprises 13 items and is backed by a rolling database of 200 websites. It measures perceptions of usability, credibility and trust, loyalty, and appearance. A score of 50% means half the websites in the database score higher and half score lower than your site's SUPR-Q score.

9.  Usability problems in business software impact about 37% of users: In examining both published and private datasets, we found that the average usability problem in things like enterprise accounting and HR software impacts more than one out of three users. While that's bad for the user experience, it means a small sample of just five users will uncover most usability issues that occur this frequently.
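The "five users find most problems" claim follows from the binomial formula: the chance of observing a problem at least once among n users, when it affects a proportion p of all users, is 1 - (1 - p)^n. For p = 0.37 and n = 5, that works out to about 90%:

```python
# Probability of seeing a problem at least once in a sample of n users,
# given that it affects a proportion p of all users: 1 - (1 - p)^n.
def discovery_prob(p, n):
    return 1 - (1 - p) ** n

# A problem affecting 37% of users is ~90% likely to show up
# at least once with just five users.
print(round(discovery_prob(0.37, 5), 2))  # -> 0.9
```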

10.  The average number of errors per task is 0.7: Across 719 tasks of mostly consumer and business software, counting slips and mistakes, we found that about two out of every three users had at least one error. Only 10% of the tasks we've observed were error-free; in other words, to err is human.

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.


Posted Comments

There are 3 Comments

November 12, 2015 | Nick wrote:

Where did you get the data on these benchmarks, specifically the SUM score average of 65%? 

April 17, 2015 | Mark Richman wrote:

Thanks, Jeff! Very helpful 

October 24, 2012 | Ted wrote:

Great post Jeff. Do you offer access to the NPS benchmarks you mentioned in point #3 by industry? 
