
5 Examples of Quantifying Qualitative Data

Jeff Sauro • January 24, 2012

There is an erroneous perception in the UX community that if your method is qualitative, then numbers somehow cannot or should not be used.

This perception comes from an informal practice dating back to the beginning of the usability profession, and it persists through training programs and some UX experts.

This misperception can prevent perfectly good data from being used to gain an accurate view of the user experience.

Qualitative data can in fact be converted into quantitative measures even if it doesn't come from an experiment or from a large sample size.

The distinction between a qualitative study and a quantitative study is a false dichotomy. It doesn't cost more money to quantify or to use statistics. It just takes some training and confidence--like any method or skill.

Here are five examples of how you can take common qualitative approaches to assessing the user experience and convert them into numbers, which can then be analyzed with a range of statistical procedures.

  1. Converting a usability problem into a frequency: The quintessential usability activity is watching users attempt realistic tasks and identifying what in the interface is causing problems.

    Simply categorize the problems, count the frequency, then use confidence intervals to estimate how common the problems likely are in the entire user population. For example, if 3 out of 11 users had a problem downloading the correct software product from a website, then we can be 95% confident that at least 9% of all users would also have the problem (use the free web calculator or download the problem frequency calculator). It doesn't cost more money to generate those confidence intervals. This process also allows you to generate more accurate sample size estimates.

  2. What problems are customers having?: It is sometimes difficult for customers to identify what they need in a product and where the shortfalls are. One effective approach is ethnographic research (a qualitative method), observing customers in their own setting encountering and solving problems.

    Observe the problems customers encounter, then categorize and count them. Next, estimate the percentage of all customers who likely share this behavior or problem to help prioritize product features. You can then estimate how many customers you need to visit based on the frequency of these issues.

  3. Why is the product not being recommended?: When using the Net Promoter Score, it's valuable to ask open-ended follow-up questions, especially of Detractors, such as, "Briefly describe why you gave the rating." Take the list of open-ended comments and group them into categories (content analysis). Count the occurrences, calculate each category's percentage of all comments, graph them, and throw in some confidence intervals for good measure.

  4. Why was that task so difficult?: I recommend asking just a single question after users attempt a task in an informal, Steve Krug-style usability test. If a user provides a low rating (below 5), ask them to briefly explain why. Take these open-ended comments, categorize them, and add up the frequency in each group. This process can help you and your stakeholders make more informed decisions about the likely causes of the trouble. Figure 1 below shows an example of the comments from a recent usability test.

    Figure 1: Categorized comments about why the task was difficult. Error bars are 90% confidence intervals, N = 106.

  5. Combining Net Promoter Scores and comments: A powerful way of making qualitative, open-ended comments more actionable is to combine them with a closed-ended question, like the Net Promoter Score. For example, quantify what users say they would improve on a website, then show what those customers' Net Promoter Scores are.

    An example is shown in Figure 2 below. There were 110 comments in total, but to quickly identify what to focus on, we can see that comments related to website navigation and product filters are both high in frequency and come from users who are likely generating negative word of mouth (notice the negative NPS). In contrast, design/layout and advertisement comments, while high in frequency, appear to be minor issues for users.

    Figure 2: Combining open-ended comments about what to fix on a website with those users' Net Promoter Scores. Comments related to Navigation and Product filters are both high in frequency and come from the more dissatisfied users. Error bars are 90% confidence intervals.
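The 3-of-11 calculation in example 1 can be reproduced with an adjusted-Wald binomial confidence interval, a common choice for small usability samples. This is a minimal sketch; the helper name is mine, not from any particular library or calculator:

```python
# Adjusted-Wald confidence interval for a problem-occurrence rate.
# Reproduces the article's example: 3 of 11 users hit a problem, so we
# can be ~95% confident at least ~9% of all users would hit it too.
import math

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Return (lower, upper) bounds for a binomial proportion."""
    # z for a two-sided interval; 1.959964 for 95%
    z = {0.90: 1.644854, 0.95: 1.959964, 0.99: 2.575829}[confidence]
    # Add z^2/2 successes and z^2 trials, then apply the Wald formula
    p_adj = (successes + z * z / 2) / (n + z * z)
    n_adj = n + z * z
    se = math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - z * se), min(1.0, p_adj + z * se)

low, high = adjusted_wald_ci(3, 11)
print(f"95% CI: {low:.1%} to {high:.1%}")  # lower bound is about 9.2%
```

The adjustment (adding roughly two successes and four trials before computing the interval) keeps the bounds sensible even with a handful of users, which is exactly the small-sample situation usability tests produce.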

I'm not advocating quantifying data as a mere exercise in counting. There are of course many software applications and websites that have never been exposed to any input from users. In such situations there will likely be many obvious problems that just need to be fixed, regardless of how many users encounter them.

But once you've picked the low-hanging fruit of a neglected interface, structuring your activities and results so they lend themselves to quantification lets you derive more meaning from your methods.
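Examples 1 and 2 both mention estimating how many users or customers you need based on problem frequency. One common formulation of that calculation (assumed here; the article doesn't spell out its formula) solves the discovery probability 1 - (1 - p)^n for n:

```python
# Sample size needed to observe a problem at least once, given the
# proportion p of users it affects: P(seen) = 1 - (1 - p)^n.
# The frequencies and probability goal below are illustrative.
import math

def sample_size_for_discovery(p, goal=0.85):
    """Smallest n giving at least `goal` probability of seeing a
    problem that affects proportion `p` of users at least once."""
    return math.ceil(math.log(1 - goal) / math.log(1 - p))

# A problem affecting 1 in 10 users takes 19 users for an 85% chance
# of appearing at least once; a 1-in-3 problem takes only 5.
print(sample_size_for_discovery(0.10))
print(sample_size_for_discovery(0.33))
```

The takeaway matches the article's point: rarer problems demand far larger samples to surface, so counting observed frequencies tells you how much further observation is worthwhile.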

The advantage of converting qualitative data into quantitative data is that the source of the qualitative data--a direct encounter with the user's experience--can reveal nuances in usability that might otherwise be missed in more formal quantitative experiments and surveys.

Not only can qualitative data be categorized into quantities, but it can prompt further questions and discovery for usability improvement.
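Example 5's pairing of comment categories with Net Promoter Scores can be sketched in a few lines of Python. The categories and ratings below are invented for illustration and are not the article's data:

```python
# Group open-ended comments by category and compute each category's
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
# on the 0-10 likelihood-to-recommend scale.
from collections import defaultdict

def nps(scores):
    """NPS for a list of 0-10 ratings, as a percentage."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# (category, likelihood-to-recommend) pairs from a hypothetical study
comments = [
    ("navigation", 3), ("navigation", 5), ("navigation", 9),
    ("product filters", 2), ("product filters", 6),
    ("design/layout", 9), ("design/layout", 10), ("design/layout", 7),
]

by_category = defaultdict(list)
for category, ltr in comments:
    by_category[category].append(ltr)

for category, scores in sorted(by_category.items()):
    print(f"{category}: {len(scores)} comments, NPS {nps(scores):+.0f}")
```

Sorting categories by frequency and then inspecting each one's NPS reproduces the Figure 2 logic: a category that is both frequent and attached to negative scores is the one to fix first.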

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.

Posted Comments


September 24, 2016 | Helga wrote:

Thanks for the detailed explanation 




September 27, 2013 | Mike wrote:

A lot of commentary has been generated by the IPCC AR5 report suggesting "95% certainty that global warming is man made". I take it that this is qualitative rather than quantitative? Do you have any thoughts on the validity of such a statement?
Kind regards, Mike

February 7, 2012 | Anthony wrote:

Is it valid to ask the NPS question at the end of a usability session? Especially if the usability session has just focused on one specific task on the website? Is it reliable to ask if they'd "recommend this site to a friend or colleague" based on a tiny view of the site, and then try to compare that NPS to another company's site? (BTW, I'm hoping your answer is "Yes" b/c I'd love to have a quick way to get quantitative feedback.)

February 20, 2012 | Jeff Sauro wrote:

Yes, absolutely valid--in the same way it's valid to ask the System Usability Scale at the end of the test even if you've focused on a narrow portion. Of course, as you've alluded to, if this is the user's only exposure to the product, then be careful about extrapolating those scores to the entire product. For existing users of a website or product, I've found the NPS is more stable at the end of a usability test compared to users who gave their likelihood-to-recommend (LTR) rating in a survey. The degree to which usability tests impact both usability questionnaires and Net Promoter Scores is still the subject of ongoing research.
