When a Survey is the Better Research Method

Jeff Sauro, PhD

Have you taken a terrible survey?

Or, perhaps you were on the receiving end of the results?

Too long.
Leading questions.
Poor response options.
Overgeneralized findings and misinterpreted data.

There’s no doubt that surveys can be overused and abused. Maybe you’ve even thought about abandoning them altogether.

But is that abuse greater in surveys than in other research methods? Or is it just that surveys are so common that we’re more likely to encounter a bad survey than a bad example of another method?

There are times when a survey isn’t the right tool for the job. Good qualitative inquiry is often the better approach to inform design changes. It does an amazing job of telling you “why.” For example, when you want to understand how people interact with an interface, it’s usually best to watch them rather than survey them about what they think.

But the same problems that come with surveys also apply to qualitative research methods such as customer interviews, usability tests, and focus groups. Poorly executed research can span any method.

Surveys are no more a replacement for qualitative research than a hammer is for a drill. I’ve seen a lot of people misuse drills. But that doesn’t mean we throw the drill out of the toolbox! When used properly, the survey is a valuable tool to inform design decisions.

When it comes to UX methods, think AND instead of OR. A mixed-methods approach that balances quantitative results with qualitative insights is the most effective applied research methodology: you can get the “why” and the “how many.”

In fact, one of the steps in the process for creating items in a survey (usually part of an instrument or scale) is to do a cognitive walkthrough with your target group—a good mix of qual and quant.

Here are five examples of when a survey (done properly) is an ideal method to answer questions that inform design:

  1. Identifying your users. Personas and segmentation analysis depend on getting a good, representative sample of your customers to understand them. It’s far more efficient and effective to use website intercepts or emailed surveys than to individually interview a few dozen customers. Interviewing and observation provide rich data and should be included in the process of understanding your users (mixing methods). But some core data points, like age, gender, purchase frequency, and geography, are easily discovered with a survey.
  2. Identifying the most important features or content. Running a survey with a top-task analysis question is one of the most effective ways to differentiate the important from the trivial. No product, website, or app can do everything well. A top-task analysis provides a prioritized list of customer goals (a tally sketch appears after this list).
  3. Benchmarking attitudes. Attitudes affect behaviors. Knowing what people think about a brand, product, or website matters. How those attitudes (like usability, loyalty, utility) compare to other products or experiences goes a long way to understanding why people do or don’t purchase or use a product. While anyone can write a survey and ask poor questions, there are scientific ways to establish the validity and reliability of both your items and scales.

    While customer satisfaction metrics won’t always correlate highly with stock-market returns or revenue growth, that’s often a case of range restriction rather than a lack of association. We’ve correlated customer satisfaction with cellphone return rates and enterprise software renewal rates, and SUS scores with task completion. When in doubt, use a validated scale like the SUPR-Q or SUS (a scoring sketch appears after this list).

  4. Identifying key drivers of a product or experience. Want to know why customers aren’t purchasing, or which of 50 feature requests to include in the next version? Multiple regression analysis or conjoint analysis lets you tease out subtle patterns in survey responses (see the regression sketch after this list). This doesn’t replace direct observation or negate the need for interviews, but neither of those approaches can detect the often subtle, yet important, differences in attitudes.
  5. Finding the likelihood to repurchase or recommend. People’s ability to predict their future behavior is notoriously bad. That doesn’t mean you don’t ask—it means you don’t bet the bank on it. How likely are you to renew your cellphone service, repurchase the same car brand, or stay at the same hotel again? While the context and time frame affect how accurate customers’ answers to such questions are, measuring these attitudes about future behavior is a quick way to get a pulse on what’s likely to happen.

    It’s even better when you’re able to validate the attitudes with actual behavior (like seeing what percentage of people eventually did recommend or purchase). To make any measure more meaningful, answer “Compared to what?” While the actual repurchase rate will differ from the stated intent, a 30% year-over-year drop in intended repurchase is a signal worth acting on (a comparison sketch appears after this list).
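
To make item 2 concrete, here is a minimal sketch of tallying a top-task question, assuming each respondent picked a handful of tasks from a longer, randomized list. The task names and responses are hypothetical.

```python
# Minimal top-task tally: count votes per task and rank by share of all votes.
# Task names and responses are hypothetical illustration data.
from collections import Counter

responses = [
    ["check order status", "compare plans", "contact support"],
    ["check order status", "download invoice", "compare plans"],
    ["contact support", "check order status", "update billing info"],
]

votes = Counter(task for picks in responses for task in picks)
total_votes = sum(votes.values())

# The top handful of tasks typically captures a large share of the vote.
for task, count in votes.most_common():
    print(f"{task:22s} {count / total_votes:6.1%}")
```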
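
For item 3, the SUS has a standard scoring rule: odd-numbered items score as the response minus 1, even-numbered items as 5 minus the response, and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch with a hypothetical respondent:

```python
# Score the 10-item System Usability Scale (SUS); each item is answered on a 1-5 scale.
def sus_score(responses):
    """Convert ten 1-5 item responses into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    adjusted = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) * 2.5

# Hypothetical respondent:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```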
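
For item 4, a key-driver analysis can be as simple as regressing an overall rating on the individual driver ratings. This is a bare ordinary-least-squares sketch on hypothetical 1–7 ratings; in practice you’d want a much larger sample and would typically inspect standardized coefficients or relative weights.

```python
# Key-driver sketch: regress overall satisfaction on driver ratings (hypothetical data).
import numpy as np

# Columns: ease_of_use, reliability, price_fairness (hypothetical drivers, rated 1-7)
X = np.array([
    [6, 5, 4],
    [7, 6, 5],
    [4, 5, 3],
    [5, 4, 4],
    [6, 6, 6],
    [3, 4, 2],
])
y = np.array([6, 7, 4, 5, 6, 3])  # overall satisfaction, rated 1-7

# Add an intercept column and fit ordinary least squares.
X1 = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, b in zip(["intercept", "ease_of_use", "reliability", "price_fairness"], coefs):
    print(f"{name:15s} {b:+.2f}")
```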
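
And for item 5, a minimal sketch of putting intent in context: compare the top-box share of a 0–10 likelihood-to-recommend question against last year’s share and against an observed behavior rate. All numbers here are hypothetical.

```python
# Compare stated intent (top-box share) year over year and against observed behavior.
this_year = [9, 10, 7, 8, 6, 10, 9, 5, 8, 9]   # hypothetical 0-10 responses
last_year = [10, 9, 9, 8, 10, 9, 7, 9, 10, 8]
actually_recommended = 0.34                    # hypothetical observed rate from follow-up data

def top_box_rate(scores, threshold=9):
    """Share of respondents at or above the threshold (e.g., 9-10)."""
    return sum(s >= threshold for s in scores) / len(scores)

intent_now, intent_prior = top_box_rate(this_year), top_box_rate(last_year)
print(f"Intent this year: {intent_now:.0%}, last year: {intent_prior:.0%}")
print(f"Year-over-year change in intent: {(intent_now - intent_prior) / intent_prior:+.0%}")
print(f"Observed recommend rate: {actually_recommended:.0%}")
```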

Summary

Surveys are an essential research tool, just as qualitative observational research is. Knowing how to use the methods correctly and in the right combination takes practice. Most people aren’t plumbers, yet plenty of homeowners attempt to fix a leak anyway. When that doesn’t go well, don’t blame the wrench!

I know from personal experience that when I try to fix leaks, I usually end up calling the plumber anyway. While you don’t need to be an expert to use a survey, it helps to have one around. Next time you need to conduct a survey, give us a call; we won’t even have to shut off your water!
