Usability, Customer Experience & Statistics

Five User Research Mistakes to Avoid

Jeff Sauro • May 1, 2013

There are a lot of mistakes that can be made when conducting any type of research, and almost all research contains some mistakes in methodology, measurement or interpretation.

Rarely, though, do those mistakes render the research useless.

To help make your next user research endeavor more useful, here are five common mistakes to avoid.

1. Usability tests that are actually feature reviews: If you ask users to spend some time exploring an application or website and give their opinion, they will be happy to oblige. While this type of feedback is better than nothing, it's not a usability test.

In the precious time you schedule with users, don't treat it like a design review. Instead, have them use the application like they would if no one was around.  While a few early adopters will relish exploring features and screens, most users have just a few things they want to accomplish in the software they download, the app they install or the website they visit. Be sure you have users attempt to accomplish a task, not just poke around on some screens.

2. Failing to have task success criteria and collect metrics: Even if you test with only five users on an early-stage prototype, you should do at least three things.
  1. Have tasks with clearly defined success criteria: Don't just have users explore a feature. Have users attempt realistic tasks and define what counts as an acceptable outcome.
  2. Count the frequency of usability problems: Document the problems you observe in the interface and record which users encountered which problem, instead of simply reporting that a problem was observed. When reviewing session videos between usability tests, we've consistently been deceived by our memories: a problem that seemed to happen to all users, once we count up the occurrences, turns out to have affected only 3 out of 8 users.
  3. Compute a completion rate: If 1 out of 5 users completes a task, report the 20% completion rate along with a 90% confidence interval of 3% to 59%. The confidence interval helps change the conversation from "your sample size is too small" to knowing that it's very likely a majority of all users will not be able to complete the task, and that it's worth fixing now.
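Assuming the interval above is an adjusted-Wald (Agresti-Coull) interval, a common choice for small-sample completion rates, the 1-out-of-5 figure can be reproduced in a few lines of Python; the upper bound lands at about 58%, differing from the 59% above only by rounding:

```python
from statistics import NormalDist

def adjusted_wald_ci(successes, n, confidence=0.90):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a completion rate."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.645 for 90%
    # Adjust the estimate by adding z^2/2 successes and z^2 trials
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald_ci(1, 5)
print(f"{low:.0%} to {high:.0%}")  # approximately 3% to 58%
```

The adjustment simply shifts the observed proportion toward 50% before computing a standard Wald interval, which keeps the interval honest at the tiny sample sizes typical of usability tests.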

3. The denominator problem: It's easy to obsess over conversion rates when running A/B tests or optimizing a website for increased sales. Conversion rate is a great metric, but it's not the only one. If one design element generates a higher conversion rate but lowers the total number of sales or reduces the average sale price, have you really optimized the site?

Additionally, promotions, media attention or seasonality that drive large traffic spikes will often reduce the conversion rate (larger denominator) but increase total sales. This is a good thing. Consider combining conversion rate and revenue into one metric, or look at both when determining which treatment is really best.
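One common way to combine the two is revenue per visitor (conversion rate times average order value). A minimal sketch, with made-up traffic numbers, shows how a variant with the higher conversion rate can still be the worse treatment:

```python
def revenue_per_visitor(visitors, orders, avg_order_value):
    """Combine conversion rate and average order value into one metric."""
    conversion = orders / visitors
    return conversion * avg_order_value

# Hypothetical A/B results: B converts better but sells cheaper items
a = revenue_per_visitor(visitors=10_000, orders=200, avg_order_value=80.0)  # 2.0% conv.
b = revenue_per_visitor(visitors=10_000, orders=260, avg_order_value=55.0)  # 2.6% conv.
print(f"A: ${a:.2f}/visitor, B: ${b:.2f}/visitor")
```

Here A earns $1.60 per visitor against B's $1.43, so judging the test on conversion rate alone would pick the design that makes less money.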

4. Not having a comparison: One of the best ways to give meaning to metrics is answering the question "Compared to what?" If 46% of users can find a sewing machine on a department store website, that sounds like horrible findability. But if only 10% could find the same sewing machine prior to the redesign, it's a findability improvement. If it takes users two minutes to find the nearest rental location on one car-rental website, is that too long? Perhaps, but not when compared to a competing site, where the same task took 200 seconds (67% longer). Usually the most meaningful comparison is a product against itself over time, or perhaps against similar products within the same company. So don't stress too much over not having access to competitive data.
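When you do have a before/after comparison like the sewing-machine example, the difference can be tested formally. Below is a sketch using the N-1 two-proportion test, a method often recommended for small-sample comparisons of completion or findability rates; the per-round sample sizes of 50 are hypothetical, chosen only to match the 10% and 46% rates:

```python
from statistics import NormalDist

def n_minus_1_two_prop(x1, n1, x2, n2):
    """N-1 two-proportion test: a two-proportion z-test with a
    (N-1)/N adjustment that improves accuracy at small sample sizes."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    n = n1 + n2
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se * ((n - 1) / n) ** 0.5  # N-1 adjustment
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 5/50 found the product before the redesign, 23/50 after
z, p = n_minus_1_two_prop(5, 50, 23, 50)
print(f"z = {z:.2f}, p = {p:.5f}")  # |z| is about 4, p well below 0.001
```

At these (assumed) sample sizes the jump from 10% to 46% is far too large to be sampling noise, which is exactly the kind of statement a comparison makes possible.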

5. Obsessing over demographics: When it comes to usability testing, we've consistently found that the biggest differentiator in usability metrics is not demographic differences, but whether users have prior experience with, or are more knowledgeable about, a domain or industry. This is especially the case for a specialized domain or a product requiring special skills, such as accounting or ERP software. Gender, age, geography and income often get center stage when discussing recruiting and when presenting results. That's understandable: people want to know that the research is based on the right people.

But when it comes to the type of behavior we see in usability testing, actions tend to cut across classes of people. If you are designing snowshoes, you don't want to test surfers, but if you are a researcher in Hawaii, you'll still be able to tell whether the shoes won't fit an average person. Be concerned with demographics, but not obsessed.

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.


Related Topics

Usability, Methods, User Research

Posted Comments

There are 2 Comments

June 28, 2013 | Derek Keevil wrote:

I would argue that you're confusing User Research with Usability Testing and Optimization. From my point of view they have very different goals and come at very different phases of most significant projects. Most of your "mistakes" might be problems in a usability test, but are actually helpful in user research.

Starting with usability testing on a current product might help in a project with a limited scope of optimization or "improving usability," but if you're doing product design or redesign, starting with usability testing means you'll likely miss most of the biggest opportunities for improvement, severely limit creative solutions, and miss your greatest opportunities for success.

Admittedly, you're looking at this from a very ecommerce-centered point of view, which is not my area, but here's my look at your individual "mistakes":

1. In user research, using a rough prototype as a conversation piece can help you learn a lot about your users. Far more than "feature reviews," you can quickly get insight into whether your basic concept is valid, whether you're working in the proper mental models, and even whether the problem you're attempting to solve is the correct one. By users telling you how wrong you are, you can learn a lot.

2. Success criteria are very constricting in user research. It's great to have goals and tasks to guide the research, but actually defining success criteria can get you focused on the wrong thing at this phase of a project, and prevent you from seeing the forest for the trees. In user research you're trying to learn about the user, not the interface; therefore there are no success or failure metrics.

3. Your use of the term "denominator" is somewhat confusing, but I think that you're just saying "getting more people to your site doesn't help if they don't buy something." I'm confused how lowering the price of an item or getting people to your site are usability issues.

4. Again, you're talking usability testing here, not user research.

5. I agree that demographics are overrated, especially in usability testing. In user research, however, you definitely need to make sure you're researching the right people, but gender and age have very little to do with that.

May 2, 2013 | Katie wrote:

"When it comes to usability testing, we've consistently found that the biggest differentiator in usability metrics is not demographics differences, but whether users have prior experience or are more knowledgeable about a domain or industry."

Completely agree Jeff! Any best practices on how to recruit for prior experiences/domain knowledge? Would you recommend recruiting both new users and more knowledgeable users?

Love to hear your thoughts on this, thanks! 


