Jeff Sauro • July 2, 2013

Surveys are one of the most cost-effective ways of collecting data from current or prospective users. Gathering meaningful insights starts with summarizing raw responses, but how to summarize and interpret those responses isn't always immediately obvious.

There are many approaches to summarizing and visually displaying quantitative data, and people often have strong opinions about the "right" way.

Here are some of the most common survey questions and response options and some ways we've summarized them. We'll cover many of these approaches at the Denver UX Bootcamp.

Or you could go with something a bit more USA Today:

However, when you want to estimate the percentage of users in your entire user population (or at least of those who are likely to participate in your survey) who would agree with a statement, you'll want to use confidence intervals around the percentage. The graph below shows the percentage of the 100 respondents who agreed with a statement.

The black error bars show us how much we can expect this percentage to fluctuate if we were to sample a larger number of participants, or even the entire population.
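As a sketch of how such an interval can be computed, here is a minimal Python example using the adjusted-Wald method, one common choice for a confidence interval around a proportion. The specific numbers (70 of 100 respondents agreeing) are hypothetical:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Approximate confidence interval for a proportion using the
    adjusted-Wald method (z = 1.96 gives ~95% confidence)."""
    p_adj = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical data: 70 of 100 respondents agreed with the statement
low, high = adjusted_wald_ci(70, 100)
print(f"{low:.1%} to {high:.1%}")
```

The width of this interval is what the black error bars depict: the range within which the population percentage would plausibly fall.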

The average score for an item might then be, for example, 4.2.

For making comparisons between questions (such as between different websites), find the mean, standard deviation, and total number of responses for each question and compute the confidence interval. Display them side-by-side along with error bars (shown below).
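A rough Python sketch of that calculation for one question, using the normal z-value as a large-sample stand-in for the t critical value (the rating data here is hypothetical):

```python
import math
from statistics import mean, stdev

def mean_ci(scores, z=1.96):
    """Approximate 95% confidence interval around a mean rating.
    For larger samples, z = 1.96 is close to the exact t critical value."""
    m = mean(scores)
    se = stdev(scores) / math.sqrt(len(scores))  # standard error of the mean
    return m - z * se, m + z * se

# Hypothetical responses on a 5-point rating scale
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 4]
low, high = mean_ci(ratings)
print(f"mean = {mean(ratings):.1f}, CI = {low:.2f} to {high:.2f}")
```

Computing this interval for each question gives you the error bars to display side-by-side.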

If, however, you want to estimate the prevalence of one response for the entire population (which is more common) to determine if it is statistically higher than another, then using bar graphs with confidence intervals will be more helpful.

For example, if respondents were asked for the primary method by which they pay their online bill (laptop, smartphone, tablet, or desktop), we can summarize this single-select question as shown below.

The percentages across all categories add up to 100%. Each percentage tells us the proportion of respondents who selected that option, and the confidence intervals (black error bars) show us how much we could expect the percentages to fluctuate if we were to sample all users (or even a much larger sample).

When error bars do not overlap, the difference is statistically significant (though overlapping bars don't necessarily mean there is no difference). For example, even if thousands more participants responded to the survey in this example, it's highly improbable that more participants pay their bill using a tablet than using a smartphone.
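For a more direct check than eyeballing error-bar overlap, a two-proportion z-test compares the two percentages explicitly. A minimal Python sketch with hypothetical counts (45 of 100 paying by smartphone vs. 15 of 100 by tablet):

```python
import math
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two independent proportions,
    a more precise alternative to checking error-bar overlap."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)           # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 45/100 pay by smartphone vs. 15/100 by tablet
z, p = two_proportion_z(45, 100, 15, 100)
```

A p-value below .05 here corresponds to the non-overlapping error bars described above.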

For example, the following graph shows the percent of respondents who selected each attribute they found important when looking for a computer to purchase.

We can see that features were the most selected item, with 44% of the respondents selecting it. The confidence interval tells us that even if we had a much larger sample size, it's extremely unlikely that features would become a less popular attribute than the others presented. The lower boundary of the error bar is still above the next choice (price).

You can also compare the proportions of promoters, passives, and detractors separately to, say, an earlier year, again using confidence intervals.

Processor in this case is the most important attribute, although it is not statistically distinguishable from price.

Once you have categories, you can then find the percentage of comments that fall into each group and even put confidence intervals around these. The graph below shows the most common categories derived from 110 participants' open comments on what they would improve on their health provider's website, along with confidence intervals around the frequency of each category.

