Usability, Customer Experience & Statistics

Blogs & Articles

Can You Take the Mean of Ordinal Data?

Jeff Sauro • May 24, 2016

Yes, of course you can. But it depends on who you ask! It's a common question and point of contention when measuring human behavior using multi-point rating scales. Whether someone tells you it's permissible to take the average of ordinal data depends on their view of measurement theory—and not all people agree. Using the mean of ordinal data is fine; just be careful not to make interval or ratio statements about your data. Even researchers who take a more relaxed view of averaging ordinal data would disagree with that practice.[Read More]
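
As a quick illustration of that caution (my sketch, not from the article), here is a minimal Python example with made-up 5-point ratings: taking the mean is a reasonable summary, but claims about the distances between scale points go beyond what ordinal data supports.

```python
# Minimal sketch: averaging ordinal (e.g., 5-point) ratings.
# The ratings below are made-up illustration data.
ratings = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean_rating = sum(ratings) / len(ratings)
print(f"Mean rating: {mean_rating:.2f}")  # fine as a summary statistic

# What to avoid: interval/ratio claims such as "a 4 is twice as
# satisfied as a 2"; the gaps between ordinal points need not be equal.
```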

Combining UX Research with Market Research

Jeff Sauro • May 17, 2016

The successful researcher, regardless of title, should understand how to combine traditional market research and UX research activities for the best results. If you're in either role, you should understand the tools and techniques that help define what customers think and what they do—and that means blending methods and mindsets.[Read More]

The Pros and Cons of a Branded Survey

Jeff Sauro • May 10, 2016

When creating a survey, one important variable to consider is whether to brand the survey as coming from your organization or having a third-party research firm host and send an anonymous survey. While a branded survey is likely to increase response rates, it comes with some drawbacks too.[Read More]

Visualizing Data: Raw vs Difference Scores

Jeff Sauro • May 3, 2016

Graphs of difference scores are a helpful visualization technique: they highlight differences more clearly than displaying raw scores does. Their main advantage can also be a disadvantage, as even small differences can look important. In customer research, and in analysis in general, the graph maker can help, but it's ultimately up to the viewer to judge whether the differences depicted have a meaningful impact or a more modest one.[Read More]
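
A minimal matplotlib sketch of that contrast, using hypothetical scores for two designs: the left panel plots the raw scores, where the bars look nearly identical, and the right panel plots the differences, where the same small gaps fill the whole axis.

```python
# Sketch: raw scores vs. difference scores (illustration data only).
import matplotlib.pyplot as plt

labels = ["Task 1", "Task 2", "Task 3"]
design_a = [78, 82, 75]  # hypothetical raw scores
design_b = [80, 85, 74]

fig, (ax_raw, ax_diff) = plt.subplots(1, 2, figsize=(8, 3))

# Raw scores: the two designs look nearly identical.
x = range(len(labels))
ax_raw.bar([i - 0.2 for i in x], design_a, width=0.4, label="Design A")
ax_raw.bar([i + 0.2 for i in x], design_b, width=0.4, label="Design B")
ax_raw.set_xticks(list(x))
ax_raw.set_xticklabels(labels)
ax_raw.set_title("Raw scores")
ax_raw.legend()

# Difference scores: the same small gaps now fill the whole axis.
diffs = [b - a for a, b in zip(design_a, design_b)]
ax_diff.bar(list(x), diffs)
ax_diff.set_xticks(list(x))
ax_diff.set_xticklabels(labels)
ax_diff.axhline(0, color="black", linewidth=0.8)
ax_diff.set_title("Difference (B minus A)")

plt.tight_layout()
plt.show()
```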

Create a UX Measurement Plan

Jeff Sauro • April 26, 2016

Whether you're introducing how to measure user experience to an organization or trying to advance the maturity of your UX practice, you need a plan for measuring and improving the user experience. With a good idea about who your users are and how to collect data from them, here's a high-level plan to start measuring and then improving the user experience in 9 steps.[Read More]

A Checklist For Planning a UX Benchmark Study

Jeff Sauro • April 19, 2016

Benchmarking is part of maintaining a healthy user experience for your website, app or product. Here is a list of common points to discuss and decide when embarking on a benchmarking project.[Read More]

Better to be Approximately Right than Exactly Wrong

Jeff Sauro • April 12, 2016

All too often efforts get stymied in a quest for perfect data, the perfect metric, or the perfect method—what a lot of people call planning paralysis. Don't let a quest for perfect data prevent you from collecting any data! Look for sound approximations that get you to a "good enough" place that accomplishes the job and answers your research questions.[Read More]

5 Ways to Find Out More About Your Customers

Jeff Sauro • April 5, 2016

It's fundamental to creating both a usable customer experience and a better business: you need to know who your customers are. It can, however, be surprisingly difficult for organizations to connect with their customers to collect this information. This article describes five methods (each with its strengths and weaknesses) for collecting key demographic and psychographic information about your customers.[Read More]

6 Best Practices for Using Numbers to Inform Design

Jeff Sauro • March 22, 2016

Your job title doesn't have to be "researcher" or "statistician" to use data to drive design decisions. You can apply some best practices even when numbers aren't your best friend. Here are six best practices for using numbers to inform your design efforts that don't require a career change or an advanced degree in math.[Read More]

Is the Net Promoter Score a Percentage?

Jeff Sauro • March 15, 2016

As if the Net Promoter Score didn't already stir up enough strong opinions about whether it's the "right" metric for organizations, now there's a new controversy: how to display it. Is it an NPS of 25% or 25? Do you add or omit the percentage sign? Here are my thoughts on this % tempest in a teapot.[Read More]
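
For reference, the Net Promoter Score is the percentage of promoters (responses of 9-10) minus the percentage of detractors (0-6); the display question is whether the resulting number carries a % sign. A minimal sketch with made-up responses:

```python
# Sketch: computing NPS from 0-10 likelihood-to-recommend responses.
# The responses below are made-up illustration data.
responses = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10, 9, 7]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f}")  # written with or without a % sign? That's the debate.
```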

10 Essentials of Measuring Usability

Jeff Sauro • March 8, 2016

To loosely quote Lord Kelvin, when we can measure something and express it in numbers, we understand and manage it better. Measuring usability allows us to better understand how changes in usability affect customer satisfaction and loyalty. Usability can and should be measured on mobile apps, enterprise accounting software, early stage prototypes, or mature websites. While devices and users will differ, here are ten core concepts to understand when measuring usability that are likely to remain constant.[Read More]

How to Handle Multiple Comparisons

Jeff Sauro • March 1, 2016

With the proliferation of big data, the number of statistical tests we can perform seems endless. But the number of fluke discoveries we're likely to detect has increased as well. One of the better-known methods for managing this false-positive rate is the Bonferroni correction; however, it tends to be too conservative and introduces too many false negatives. A better approach that balances false positives and false negatives is the Benjamini-Hochberg method, which is explained with examples in this article.[Read More]
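
As a rough sketch of the procedure (my illustration, not the article's code, and the p-values are hypothetical): the Benjamini-Hochberg method sorts the p-values and rejects every hypothesis up to the largest rank k whose p-value falls at or below (k/m) * alpha.

```python
# Sketch: Benjamini-Hochberg procedure to control the false discovery rate.
# The p-values below are made-up illustration data.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array: True where the hypothesis is rejected."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)                 # indices of p-values, smallest first
    ranked = p[order]
    thresholds = (np.arange(1, m + 1) / m) * alpha   # (k/m) * alpha for k = 1..m
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last rank meeting its threshold
        reject[order[:k + 1]] = True      # reject everything up to that rank
    return reject

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(p_vals))
# Bonferroni, by contrast, would only reject p-values below alpha/m = 0.00625.
```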

The Benefits of Aggregating Judgment

Jeff Sauro • February 23, 2016

In many cases, the judgment of multiple people, collected independently and then aggregated, is better than even the best individual judgment. The idea of aggregating results is a powerful methodological tool that can smooth out unusual forecasts, scientific conclusions, and judgments from experts and novices alike. It's the power behind meta-analysis and behind using the average of several polls to predict the winner of an election. It can be applied to user and customer research as well.[Read More]
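
A tiny simulation can make the idea concrete; this sketch (with assumed noise parameters, not data from the article) draws independent noisy estimates of a known true value and compares a typical individual's error with the error of the aggregated average.

```python
# Sketch: the average of many independent, noisy estimates tends to land
# closer to the truth than a typical individual estimate.
import random

random.seed(1)
true_value = 100
estimates = [random.gauss(true_value, 20) for _ in range(25)]  # 25 independent judges

individual_errors = sorted(abs(e - true_value) for e in estimates)
median_individual_error = individual_errors[len(individual_errors) // 2]
aggregate_error = abs(sum(estimates) / len(estimates) - true_value)

print(f"Median individual error: {median_individual_error:.1f}")
print(f"Error of the aggregated (mean) estimate: {aggregate_error:.1f}")
```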

5 Steps for Better Customer Sampling

Jeff Sauro • February 16, 2016

For most customer research, you're rarely able to measure the attitudes or behaviors of everyone. Instead, you take a sample of your customers and use this sample to make inferences about the rest of your customers. While sampling is efficient and statistically sound, it comes with some risks. Here are five steps to help reduce some of the risks and make sampling your customers more effective.[Read More]

5 Steps to Conducting an Effective Expert Review

Jeff Sauro • February 9, 2016

Expert reviews aren't a substitute for usability testing and don't provide metrics for benchmarking. But they are an effective and relatively inexpensive way to uncover the more obvious pain points in the user experience. Expert reviews are best used when you can't conduct a usability test, or in conjunction with insights collected from observing even just a handful of users attempting realistic tasks on a website or application. The following five steps for conducting an effective expert review won't make you an "expert" in interface evaluation immediately, but apply them with enough practice and eventually they might![Read More]

Managing False Positives in UX Research

Jeff Sauro • February 2, 2016

False positives are a fact of life when trying to separate the signal from the noise in UX research. As the amount of data we use to make decisions increases, the reality of dealing with false positives does too. Two common types of false positives are phantom usability issues and illusory differences. While we can never completely eliminate false positives, we can minimize them. In UX research this is best done by managing the false positive rate, replicating studies, and triangulating data with complementary methods and multiple evaluators.[Read More]

7 Survey Types to Measure the Customer Experience

Jeff Sauro • January 26, 2016

Surveys are a relatively quick and effective way to measure customers' attitudes and experiences along their journey. While customer experience surveys can take on any form, it can be helpful to think of them as falling into these seven categories: relationship/branding, segmentation, loyalty, usability (perceptions), customer satisfaction, feature prioritization, and true intent.[Read More]

What are the Odds?

Jeff Sauro • January 19, 2016

Percentages are popular because even when people know little about the underlying measure, they can more easily interpret a percentage: percentages work for any sample size and are generally bounded between 0 and 100%. The relative risk (the ratio of two percentages) is an effective way to compare the magnitude of differences in percentages. While the term odds is in general use, odds are not the same thing as the relative risk. The odds ratio tells you the relative difference in the odds and can sometimes generate a ratio similar to the relative risk. When in doubt, use the relative risk to communicate differences in percentages.[Read More]
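
A small worked example of the distinction (the completion rates are hypothetical): with success rates of 80% and 60%, the relative risk is a modest 1.33, while the odds ratio is a less intuitive 2.67.

```python
# Sketch: relative risk vs. odds ratio for two hypothetical completion rates.
p1, p2 = 0.80, 0.60  # e.g., completion rates for Design A and Design B

relative_risk = p1 / p2                          # ratio of the two percentages
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))   # ratio of the two odds

print(f"Relative risk: {relative_risk:.2f}")  # 1.33
print(f"Odds ratio:    {odds_ratio:.2f}")     # 2.67
```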

How to Measure Customer Satisfaction

Jeff Sauro • January 12, 2016

Customer satisfaction is a measure of how well a product or service experience meets customer expectations. It includes two common levels: general (or relational) satisfaction and more specific attribute (or transactional) satisfaction. Measuring customer satisfaction is merely the first step in understanding and improving a customer's experience. It also helps identify the key drivers of loyalty and growth, and how satisfaction differs by touchpoint and at key stages of the customer journey.[Read More]

7 Methods for Discovering Usability Problems

Jeff Sauro • January 5, 2016

We often think of usability testing as the only method for evaluating the usability of a website or application. There are, however, other methods that can help uncover usability problems. These methods can be broken down into empirical methods (usability testing, surveys, and analytics) and inspection methods (expert review, heuristic evaluation, cognitive walkthrough, and guideline review). These methods can, and should, be used together. Think AND instead of OR when finding usability problems.[Read More]

Top 10 UX Metrics, Methods & Measurement Articles from 2015

Jeff Sauro • December 28, 2015

It was another busy year on MeasuringU.com with 49 new articles, a new book and UX Bootcamp. In 2015, our articles were served up 2.3 million times to 900,000 visitors. Thank You! We covered topics including the essentials of usability testing, finding the right sample size, and better ways of measuring the customer experience.[Read More]

Are you conducting a Heuristic Evaluation or an Expert Review?

Jeff Sauro • December 15, 2015

It's been 25 years since the development of the Heuristic Evaluation method by Molich and Nielsen. Strictly speaking, in a Heuristic Evaluation an evaluator only identifies problems when viewed through the heuristics (aka rules). There's nothing wrong with inspecting an interface without heuristics; just don't call it a Heuristic Evaluation. Call it an expert review. Rolf Molich still sees the method as valuable, but 99% of what he sees called a Heuristic Evaluation is actually an expert review. While it may sound better to call your expert review a Heuristic Evaluation, that title implies a more narrowly defined method; applying it loosely is like a judge writing the law.[Read More]

How Reliable Are Self-Reported Task Completion Rates?

Jeff Sauro • December 8, 2015

An analysis of self-reported versus verified task-completion rates across four studies and 838 participants found that self-reported completion rates were almost three times the verified completion rates. The correlation was also low (r = .24), and the relative rank of competitors fared slightly better but was still only a modest predictor. Self-reported task-completion rates are better than nothing, but mostly as a crude indicator of very difficult tasks.[Read More]

5 Advanced Stats Techniques & When to Use Them

Jeff Sauro • December 1, 2015

To answer most user-research questions, fundamental statistical techniques like confidence intervals, t-tests, and two-proportion tests will do the trick. There are times, however, when you need more advanced techniques to best answer the question. This article discusses five advanced techniques (regression, ANOVA, factor analysis, cluster analysis, and logistic regression), when to use them, and some gotchas to look out for.[Read More]
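
As one example of the "fundamental techniques" mentioned above, here is a minimal two-proportion z-test sketch (the counts are hypothetical); the article's more advanced methods come into play when questions involve many variables or groups at once.

```python
# Sketch: two-proportion z-test on hypothetical task-completion counts.
from math import sqrt
from scipy.stats import norm

x1, n1 = 34, 40  # completions / attempts for Design A (made-up numbers)
x2, n2 = 24, 40  # completions / attempts for Design B

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))

z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # two-tailed p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
```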

Picking the Right Methods to Improve Navigation

Jeff Sauro • November 17, 2015

There are a number of methods and metrics to measure and improve navigation. We consistently find that it's rarely a single method, or even a single phase, that addresses the research questions. Instead, it's a combination of methods, usually applied iteratively and over several phases, that best answers questions about how to improve navigation. Here are some of the common navigation research questions we answer and ways they can be addressed using these methods.[Read More]

Using Surveys to Measure the User Experience

Jeff Sauro • November 10, 2015

Usability tests are the best method for uncovering what to fix in an application, but they aren't always feasible to conduct—especially when you need to measure a lot of products. A usability survey is a quick way to get a standardized measure of the user experience and gives you insight into what needs to be fixed. Keep the surveys short, but collect a standardized measure of the experience, details on who the users are, the tasks they perform, and the most common problems they encounter.[Read More]
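
One widely used standardized measure (not named in the excerpt, so this is only an example) is the System Usability Scale, which rescales ten 1-5 items to a 0-100 score; a minimal scoring sketch with made-up responses:

```python
# Sketch: scoring one respondent's System Usability Scale (SUS) answers.
# Responses are 1-5 for the ten standard items (made-up example data).
responses = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]  # item 1 through item 10

contributions = []
for item, r in enumerate(responses, start=1):
    if item % 2 == 1:                # odd-numbered (positively worded) items
        contributions.append(r - 1)
    else:                            # even-numbered (negatively worded) items
        contributions.append(5 - r)

sus_score = sum(contributions) * 2.5  # rescale the 0-40 sum to 0-100
print(f"SUS score: {sus_score}")      # 85.0 for this example
```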

The Importance of Evaluating UX

Jeff Sauro • November 3, 2015

Every product, website, or design has a user interface. If it's used, it has a user experience. What differentiates a good user experience from a bad one is not how many awards and accolades it gets; instead, it's superior, measurable outcomes. Using the framework of defining metrics, users, and tasks, and measuring before and after changes, helps ensure the user experience is quantifiably better.[Read More]

Picking the Right Data Collection Method

Jeff Sauro • October 27, 2015

While there are dozens of methods and techniques we describe and use at MeasuringU, many of the methods are just variations and combinations of broader methods that cross the behavioral sciences. The most common of these broader methods are surveys, experiments, observations, interviews, and focus groups. Understanding the strengths and weaknesses of these methods helps you make better decisions on the right one for your research.[Read More]

4 Types of Observational Research

Jeff Sauro • October 20, 2015

Observation is a key data collection technique for qualitative research. While it may seem like observation is as simple and uniform as watching and taking notes, there are some subtle differences that can affect the type of data you collect. The role the observer plays forms a continuum from completely removed to completely engaged with the participant.[Read More]

5 Types of Qualitative Methods

Jeff Sauro • October 13, 2015

When we speak about a qualitative research study, it's easy to think there is one kind. But just as with quantitative methods, there are actually many varieties of qualitative methods. While the methods generally use similar data collection techniques (observation, interviews, and reviewing text), the purpose of the study is what differentiates them, much as it does with different types of usability tests. And as when classifying usability studies, the differences between the methods can be a bit blurry.[Read More]

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.

Jeff's Books

Customer Analytics for Dummies

A guidebook for measuring the customer experience

Buy on Amazon

Quantifying the User Experience: Practical Statistics for User Research

The most comprehensive statistical resource for UX Professionals

Buy on Amazon

Excel & R Companion to Quantifying the User Experience

Detailed Steps to Solve over 100 Examples and Exercises in the Excel Calculator and R

Buy on Amazon | Download

A Practical Guide to the System Usability Scale

Background, Benchmarks & Best Practices for the most popular usability questionnaire

Buy on Amazon | Download

A Practical Guide to Measuring Usability

72 Answers to the Most Common Questions about Quantifying the Usability of Websites and Software

Buy on Amazon | Download
