Five User Research Mistakes To Avoid

Jeff Sauro, PhD

There are many mistakes you can make when conducting any type of research.

In fact, almost all research contains some mistake in methodology, measurement, or interpretation.

Rarely, though, do those mistakes render the research useless.

To help make your next user research endeavor more useful, here are five common mistakes to avoid.

1. Usability tests that are actually feature reviews: If you ask users to spend some time exploring an application or website and give their opinion, they will be happy to oblige. While this type of feedback is better than nothing, it’s not a usability test.

The time you schedule with users is precious, so don't treat it like a design review. Instead, have participants use the application as they would if no one were around. While a few early adopters will relish exploring features and screens, most users have just a few things they want to accomplish in the software they download, the app they install, or the website they visit. Be sure you have users attempt to accomplish a task, not just poke around on some screens.

2. Failing to define task success criteria and collect metrics: Even if you test with only five users on an early-stage prototype, you should do at least three things.

  1. Have tasks with clearly defined success criteria: Don't just have users explore a feature. Have users attempt realistic tasks and define what counts as an acceptable outcome.
  2. Count the frequency of usability problems: Document the problems you observe in the interface, and record which users encountered which problem instead of simply reporting that the problem was observed. We've consistently been deceived by our memories when reviewing session videos between usability tests: some problems seemed to happen to all users, yet when we counted up the occurrences, only 3 out of 8 users actually had the problem.
  3. Compute a completion rate: If 1 out of 5 users completes a task, report the 20% completion rate along with a 90% confidence interval of 3% to 59%. The confidence interval helps change the conversation from "your sample size is too small" to recognizing that it's very likely a majority of all users will not be able to complete the task, so it's worth fixing now. (A sketch of both calculations appears after this list.)
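
Here is a minimal sketch of both calculations in Python, using hypothetical observation data. The adjusted-Wald (Agresti-Coull) interval shown is one common choice for small-sample completion rates and roughly reproduces the 3% to 59% figure above (small differences come down to rounding and method):

```python
from math import sqrt

# Hypothetical observation log: which users hit which problem (8 users total).
observations = {
    "unclear form labels": ["u1", "u3", "u7"],
    "hidden navigation":   ["u2", "u3", "u4", "u6"],
}
n_users = 8
for problem, users in observations.items():
    print(f"{problem}: {len(users)} of {n_users} users")

def adjusted_wald_ci(successes, trials, z=1.645):
    """Adjusted-Wald (Agresti-Coull) interval for a completion rate.
    z = 1.645 yields a 90% confidence interval."""
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

low, high = adjusted_wald_ci(1, 5)  # 1 of 5 users completed the task
print(f"90% CI: {low:.0%} to {high:.0%}")  # 90% CI: 3% to 58%
```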

3. The denominator problem: It's easy to become obsessed with conversion rates when running A/B tests or optimizing a website for increased sales. Conversion rate is a great metric, but it's not the only one worth tracking. If one design element generates a higher conversion rate but lowers the total number of sales or reduces the average sale price, have you really optimized the site?

Additionally, promotions, media attention, or seasonality that drive large traffic spikes will often reduce the conversion rate (larger denominator) but increase sales. This is a good thing. Consider combining conversion rate and revenue into one metric, or look at both when determining which treatment is really the best, as in the sketch below.
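
Revenue per visitor (conversion rate × average order value) is one such blended metric. Here is a minimal sketch, using purely illustrative numbers rather than anything from the article:

```python
# Hypothetical A/B results: variant B "wins" on conversion but loses on revenue.
variants = {
    "A": {"visitors": 10_000, "orders": 300, "revenue": 45_000},
    "B": {"visitors": 10_000, "orders": 360, "revenue": 39_600},
}

for name, v in variants.items():
    conversion = v["orders"] / v["visitors"]
    avg_order = v["revenue"] / v["orders"]          # average order value
    rev_per_visitor = v["revenue"] / v["visitors"]  # blended metric
    print(f"{name}: conversion {conversion:.1%}, "
          f"avg order ${avg_order:.2f}, revenue/visitor ${rev_per_visitor:.2f}")
```

Variant B converts better (3.6% vs. 3.0%) but earns less per visitor ($3.96 vs. $4.50) because its average order value is lower, so judging on conversion rate alone would pick the wrong design.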

4. Not having a comparison: One of the best ways to give metrics meaning is to answer the question "Compared to what?" If 46% of users can find a sewing machine on a department store website, that sounds like terrible findability. But if only 10% could find the same sewing machine before the redesign, it's a findability improvement. If it takes users two minutes to find the nearest rental location on Budget.com, is that too long? Perhaps, but not when compared to Enterprise.com, where the same task took 200 seconds (67% longer). Usually the most meaningful comparison is a product against itself over time, or perhaps against similar products within the same company, so don't stress too much over not having access to competitive data.

5. Obsessing over demographics: When it comes to usability testing, we've consistently found that the biggest differentiator in usability metrics is not demographic differences but whether users have prior experience with, or are more knowledgeable about, a domain or industry. This is especially the case for a specialized domain or a product requiring special skills, such as accounting or ERP software. Gender, age, geography, and income often take center stage when discussing recruiting and when presenting results. That's understandable: people want to know the research is based on the right people.

But when it comes to the type of behavior we see in usability testing, actions tend to cut across classes of people. If you are designing snowshoes, you don't want to test surfers, but if you are a researcher in Hawaii, you'll still be able to tell whether the shoes won't fit an average person. Be concerned with demographics, but not obsessed.
