Usability, Customer Experience & Statistics

How to Summarize & Display Survey Data

Jeff Sauro • July 2, 2013

Surveys are one of the most cost-effective ways of collecting data from current or prospective users.

Gathering meaningful insights starts with summarizing raw responses, but how to summarize and interpret those responses isn't always immediately obvious.

There are many approaches to summarizing and visually displaying quantitative data, and it seems people always have a strong opinion about the "right" way.

Here are some of the most common survey questions and response options and some ways we've summarized them. We'll cover many of these approaches at the Denver UX Bootcamp.

Binary Responses

If a question has only two possible response options (e.g., Male/Female, Yes/No, Agree/Disagree), it is a binary (also called dichotomous) response option. The two percentages always sum to 100%. When summarizing just the sample of respondents, such as the percentage of women who responded, you can use the ubiquitous pie graph.


 Or you could go with something a bit more USA Today:

However, when you want to estimate the percentage of your entire user population (or at least of those who are likely to participate in your survey) who would agree with a statement, you'll want to use confidence intervals around the percentage. The graph below shows the percentage of the 100 respondents who agreed with a statement.


The black error bars show how much we can expect this percentage to fluctuate if we sample more participants, or even the entire population.
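The article doesn't say which interval formula produced these error bars; the sketch below uses the adjusted Wald (Agresti-Coull) interval, which behaves well for proportions even at small sample sizes. The z value of 1.96 corresponds to 95% confidence.

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted Wald (Agresti-Coull) 95% confidence interval for a proportion.

    Adds z^2/2 successes and z^2 trials before applying the usual Wald
    formula, which avoids the plain Wald interval's poor coverage at small n.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 57 of 100 respondents agreed with the statement
low, high = adjusted_wald_ci(57, 100)
print(f"57% agree, 95% CI: {low:.1%} to {high:.1%}")
```

The interval here runs from roughly 47% to 66%: with only 100 respondents, the sample percentage could plausibly move quite a bit in a larger sample.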

Rating Scales

Rating scale questions explicitly ask participants to rate their level of agreement or satisfaction from 1 to 5, 1 to 7, or on any bounded number range. You can also assign numbers to questions with ordered categories; for example, strongly disagree to strongly agree becomes 1 to 5.

The average score for an item can then be reported as, for example, a 4.2.

To make comparisons between questions (or between different websites on the same question), find the mean, standard deviation, and number of responses for each question, then compute a confidence interval around each mean. Display them side by side along with error bars (shown below).
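A confidence interval around a mean rating is typically built with the t-distribution. A minimal sketch, using hypothetical 1-to-5 ratings and a hard-coded t critical value (2.262 is the two-sided 95% value for 9 degrees of freedom; in practice you'd look this up in a table or with `scipy.stats.t.ppf`):

```python
import math
from statistics import mean, stdev

def mean_ci(scores, t_crit):
    """t-based confidence interval around the mean of a list of ratings."""
    m = mean(scores)
    se = stdev(scores) / math.sqrt(len(scores))  # standard error of the mean
    return m - t_crit * se, m + t_crit * se

# hypothetical responses on a 1-5 agreement scale (n=10, so df=9)
ratings = [4, 5, 3, 4, 5, 4, 2, 5, 4, 4]
low, high = mean_ci(ratings, t_crit=2.262)
print(f"mean {mean(ratings):.1f}, 95% CI: {low:.2f} to {high:.2f}")
```

Plotting each question's mean with these intervals as error bars gives the side-by-side comparison described above.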


Single Select

If participants are asked to pick one choice out of a number of alternatives, then this is a single-select response option. We summarize the proportion that chose each category. If you just want to summarize the responses in the survey, such as with income, a pie graph again will often suffice.

If, however, you want to estimate the prevalence of one response for the entire population (which is more common) to determine if it is statistically higher than another, then using bar graphs with confidence intervals will be more helpful.

For example, if respondents were asked the primary method by which they pay their online bill (laptop, smartphone, tablet or desktop), we can summarize this single-select question below.

The percentages across all categories add up to 100%. Each percentage tells us the share of respondents who selected that option, and the confidence intervals (black error bars) show how much we could expect the percentages to fluctuate if we were to sample a much larger group, or even all users.

When error bars do not overlap, the difference is statistically significant. (The reverse doesn't hold: intervals can overlap slightly and the difference can still be significant, so overlapping bars call for a direct test.) For example, even if thousands more participants responded to the survey in this example, it's highly improbable that more participants pay their bill using a tablet than using a smartphone.
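When error bars do overlap, a direct two-proportion test settles the question. A minimal sketch with hypothetical numbers for the bill-paying example (the 30%/12% figures and n=200 are illustrative, not from the article's data):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for comparing two independent proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# hypothetical: 30% chose smartphone, 12% chose tablet, n=200 each
z = two_proportion_z(0.30, 200, 0.12, 200)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

Strictly speaking, responses to a single-select question are paired rather than independent (each respondent picks exactly one option), so this independent-samples test is an approximation; it's shown here for its simplicity.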

Multiple Select

If participants are allowed to select "all that apply," the percentages will typically add up to more than 100%. We can still summarize the proportion selecting each option using the same binary confidence intervals as in the single-select method.

For example, the following graph shows the percent of respondents who selected each attribute they found important when looking for a computer to purchase.

We can see that features was the most selected attribute, with 44% of respondents selecting it. The confidence interval tells us that even with a much larger sample, it's extremely unlikely features would become less popular than the other attributes presented: the lower boundary of its error bar is still above the next choice (price).

Net Promoter Questions

The Net Promoter Question (How likely is it that you'll recommend a product to a friend?) is a rating scale (0 to 10), but it is usually reported as the difference between two paired proportions: the proportion of promoters (9s and 10s) minus the proportion of detractors (0 through 6). An example NPS of 46% is shown below.
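The promoter-minus-detractor calculation is easy to sketch from raw 0-10 ratings (the sample ratings below are hypothetical):

```python
def net_promoter_score(ratings):
    """NPS from 0-10 likelihood-to-recommend ratings:
    percent promoters (9-10) minus percent detractors (0-6).
    Passives (7-8) count toward n but not toward either group."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# hypothetical sample: 6 promoters, 3 passives, 1 detractor
sample = [10, 9, 9, 10, 9, 10, 8, 7, 8, 3]
print(net_promoter_score(sample))  # 50.0
```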

You can also compare the proportions of promoters, passives, and detractors separately to, say, an earlier year, using confidence intervals again.

Forced Rank Questions

If you ask participants to rank options, such as the aspects they find most important when purchasing a computer, it is a forced rank question. These look like rating scales but have different properties: the data are ipsative, meaning each respondent's ranks add up to a fixed number (e.g., with six options to rank, each respondent's ranks sum to 6+5+4+3+2+1 = 21). Typically, you want to know which option has the statistically lowest average rank (lower numbers mean higher ranks), and you can display the average rank with confidence intervals.
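The ipsative property and the average-rank summary can be sketched as follows (the respondents, attributes, and ranks are hypothetical):

```python
from statistics import mean

# hypothetical forced ranks (1 = most important) from four respondents
ranks = {
    "processor": [1, 2, 1, 1],
    "price":     [2, 1, 2, 3],
    "features":  [3, 3, 3, 2],
}

# ipsative check: with three options, each respondent's ranks sum to 1+2+3 = 6
for i in range(4):
    assert sum(r[i] for r in ranks.values()) == 6

# average rank per attribute; lower is better
avg = {attr: mean(r) for attr, r in ranks.items()}
print(avg)
```

A confidence interval around each average rank (computed as for any mean, e.g. with a t-interval) then shows which attributes are statistically distinguishable.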

Processor in this case is the most important attribute, although not statistically distinguishable from price.

Open-Ended Comments

Most surveys will include at least one open-ended question, for example: what's one thing you would improve on this website? While there are automatic ways of summarizing comments, such as matching algorithms or word clouds, we find taking the time to sort them with one or more analysts generates the best insights.

Once you have categories, you can then find the percentage of comments that fall into each group and even put confidence intervals around these. The graph below shows the most common categories derived from 110 participants' open comments on what they would improve on their health provider's website along with confidence intervals around the frequency.


About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user-experience.
