Comparison of Usability Testing Methods

Jeff Sauro, PhD

There was a time when "usability testing" meant expensive labs and one-way mirrors.

Not anymore.

There are three core ways of running usability tests.

Each has its advantages and disadvantages.

  • Lab-Based: This is the classic approach to usability testing: users physically come to a lab, often with a one-way mirror, and are observed by a team of researchers.
  • Remote Moderated: Users log into screen-sharing software like GoTo Meeting and attempt tasks.
  • Remote Unmoderated: Software like MUIQ, Loop11, or Webnographer walks participants through tasks and records click paths.

Here are some additional considerations for each testing method:

Geographic Diversity
  • Lab-Based: Poor. Limited to one (or a few) locations.
  • Remote Moderated: Good. Users from across the US and the globe can participate; time-zone differences are the main drawback for international studies.
  • Remote Unmoderated: Good. Users from across the US and the globe can participate at times that are convenient to them.

Recruiting
  • Lab-Based: More difficult because the geographic pool is limited to the testing location.
  • Remote Moderated: Easier because there is no geographic limitation, but sessions are still relatively long.
  • Remote Unmoderated: Easiest because there is no geographic limitation and sessions are shorter.

Sample Quality
  • Lab-Based: Good-Excellent. Limited to people willing to take time out of their day; tight control over user activity.
  • Remote Moderated: Good-Excellent. Able to recruit specialized users at minor inconvenience and can view most interactions.
  • Remote Unmoderated: Fair-Good. Often attracts people who are in it for the honorarium or who try to game the system.

Qualitative Insights
  • Lab-Based: Excellent. Direct observation of both the interface and user reactions; the facilitator can easily probe issues.
  • Remote Moderated: Good. Direct observation of the interface and limited user reactions; the facilitator can ask follow-up questions and engage in a dialogue.
  • Remote Unmoderated: Fair-Good. If the session is recorded, direct observation of the interface; without a recording, insights are gleaned from answers to specific questions.

Sample Size
  • Lab-Based: More restricted due to geographic limitation and time.
  • Remote Moderated: Less restricted. Still limited by the time it takes to run sessions, but scheduling hours are more flexible.
  • Remote Unmoderated: Least restricted. Easy to run large sample sizes (100+).

Costs
  • Lab-Based: Most expensive. Higher compensation costs for users and facilitator time.
  • Remote Moderated: Less expensive. User compensation is lower, less facilitation time is required, and there are no facility costs.
  • Remote Unmoderated: Least expensive. Compensation is lowest and there are no facilitation or facility costs.

Metric Quality
  • Lab-Based: Excellent. You can collect almost any measure (including eye-tracking) as well as task time.
  • Remote Moderated: Good-Excellent. Some metrics are limited (e.g., eye-tracking), but task-time data can still be collected.
  • Remote Unmoderated: Good. Because you can't see exactly what users are doing, metric quality is harder to verify.

Reported Usage by User Researchers*
  • Lab-Based: 52%
  • Remote Moderated: 50%
  • Remote Unmoderated: 23%

Growth in Method*
  • Lab-Based: Flat
  • Remote Moderated: 19% increase
  • Remote Unmoderated: 28% increase

* Data come from the 2011 UPA Survey

No one method is always best. A combination of methods provides a more comprehensive picture of the user experience. For example, when I conduct an unmoderated study, I often include a few lab-based or remote moderated participants. This provides the best of both worlds: rich interaction and discussion alongside larger sample sizes and a more diverse, representative group.

Combining is not always an option. In my experience, the two biggest drivers of the method chosen are budget and sample size. If you want to test a lot of users (or several user groups) but have a limited budget, remote unmoderated testing is usually the way to go. Conversely, mobile testing is still largely a lab-based evaluation because you need to capture swipes and screens.

To help guide which method to use, consider what factors are most important in your research:

  • Could the product or website being tested see significant benefits by drawing responses from an international audience? (remote moderated)
  • Does the interface being tested require a more in-depth look at direct, in-person responses? (lab-based)
  • Is a single function being evaluated, where simple answers will satisfy simple questions? (remote unmoderated)
  • Are the tasks closed-ended and easy for participants to understand and attempt? (remote unmoderated)
