Survey Items Should Include A Neutral Response: Agree, Disagree, Undecided?

Jeff Sauro, PhD

Few things tend to generate more heated debate than the format of response options used in surveys.

Right in the middle of that debate is whether the number of options should be odd or even. Odd numbered response scales include a neutral response whereas even ones do not.

Research generally shows that including a neutral response will affect the distribution of responses and sometimes lead to different conclusions.

However, it is less important when assessing usability, as you're usually more concerned with comparisons over time or against a benchmark than with the percentage of users who agree with statements.

The Consequences of Neutral

The impetus for omitting a neutral response option is the concern that a neutral point attracts respondents who actually lean slightly toward a favorable or unfavorable response. A neutral option would then mask those sentiments.

With an even number of options, respondents are forced to decide whether they feel favorably or unfavorably toward an item.

Some early research by Presser and Schuman (1980) found that typically between 10% and 20% of respondents chose the neutral option when it was provided, compared to versions of the same survey where it wasn't. Their research was conducted using more politically sensitive questions:

  • Tendencies on political issues: Liberal to Conservative
  • Federal Funding for Schools
  • Penalties for Marijuana Use

In fact, Presser and Schuman did find that the biggest shifts came for the political tendency scale (liberal to conservative). Then as now, there is a certain social acceptability to being “middle-of-the-road” politically.

Such neutral options provide an easy out for respondents who are less inclined to express their opinion, but potentially mean that a large proportion of respondents who favor or oppose a topic aren't counted.

Despite the major shifts seen when including or excluding neutral options, Presser and Schuman found that the distribution of responses for the items didn’t change significantly.

In other words, if 20% chose somewhat-liberal without the neutral option, approximately 20% would still choose somewhat-liberal with the neutral option. The total counts would be smaller, but when reporting the proportions, researchers would essentially draw the same conclusion—even on these sensitive topics (or at least the ones they studied with the subjects in their study).

However, later research by Bishop (1987) did show that a researcher, or pollster, would draw different conclusions about the proportion of respondents who favor or oppose an issue based on the inclusion of a neutral response. His research also included politically sensitive topics:

  • Social Security Benefits
  • Defense Spending
  • Nuclear Power

Bishop concluded that the type of question (and the type of opinion it elicits) does matter, so one should carefully consider the context and the consequences of neutral options, a view also expressed by Parasuraman (1986).

If you find yourself summarizing the proportion of respondents who favor or oppose an item, having a neutral response may matter (as will the number of response options and labels).

So what about attitudes toward usability?

There are a number of standardized questionnaires that measure attitudes toward the satisfaction and usability of applications (e.g. software, hardware and websites). The most popular questionnaires and their number of response options are:

  • System Usability Scale (SUS): 5 points
  • Post-Study System Usability Questionnaire (PSSUQ [pdf]): 7 Points
  • Software Usability Measurement Inventory (SUMI): 3 Points
  • Questionnaire for User Interaction Satisfaction (QUIS): 9 Points

All four standardized usability questionnaires have an odd number of response options, implying that the authors of these questionnaires consider a neutral response legitimate. An earlier version of the QUIS did have an even number of response options (10) but now has an odd number.

It seems reasonable that users will genuinely have a neutral attitude toward some items that ask about usability or general satisfaction with a system. Such attitudes typically aren't controversial like building more nuclear power plants, so there's less incentive to hide beliefs in the neutral zone.

How common are Neutral Responses?

While I'm not aware of any research on how neutral responses affect the distribution of items in usability questionnaires, I do have plenty of data on the number of users who chose the neutral response using the System Usability Scale (SUS), the Single Ease Question (SEQ), and the popular Net Promoter question.

Below is a graph of the percentage of users choosing the neutral response "3" from 2,052 users across several dozen websites who responded to the 10-item SUS.


Figure 1: Percent of 2,052 respondents who chose the neutral "3" option for each of the 10 SUS items (shown below). Error bars are 95% confidence intervals, and the numbers along the horizontal axis correspond to the 10 SUS items.

  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

We can see that the percentage of neutral responses ranges from 5% to 22%. Item 5, "I find the various functions in the website well integrated," garners the most neutral responses. Perhaps this is because the item doesn't apply as well to websites.

On the other hand, respondents are less equivocal on item 4 “I think that I would need the support of a technical person to use the website.” Again, most users probably don’t think they’ll need to call tech-support when using a website (fortunately!).
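The confidence intervals around percentages like these can be estimated directly from the counts. Below is a minimal Python sketch (the ratings are hypothetical, and `neutral_share` is an illustrative name) that computes the share of respondents choosing the neutral point along with a 95% Wilson score interval, a common choice for proportions:

```python
import math

def neutral_share(responses, neutral=3):
    """Proportion of respondents choosing the neutral point,
    with a 95% Wilson score confidence interval."""
    n = len(responses)
    k = sum(1 for r in responses if r == neutral)
    p = k / n
    z = 1.96  # two-sided 95% critical value
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, center - margin, center + margin

# Hypothetical 5-point responses to one SUS item
data = [3, 4, 2, 3, 5, 1, 3, 4, 4, 2, 3, 5, 4, 2, 3]
p, lo, hi = neutral_share(data)  # p = 5/15, with its interval
```

With real sample sizes in the thousands (as in Figure 1), these intervals become quite narrow.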

Single Items

For single item analysis, I have data from 484 users who attempted tasks across various websites and consumer and business software using the Single Ease Question (SEQ)–a 7-point rating scale with a neutral option.

On average only 9% of users selected the neutral point (shown in the graph below). One reason is that the putative neutral point is not really neutral in the minds of users, something also seen in the meta-analysis by Nielsen & Levy (1994), who found the mean response on 5-point scales was 3.6, significantly higher than the midpoint of 3.


Figure 2: Percentage of respondents selecting the neutral option on the Net Promoter question (NPS) and the Single Ease Question (SEQ). Error bars are 95% confidence intervals.
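Whether a scale's mean sits above its midpoint, as Nielsen & Levy observed, can be checked with a one-sample t-test. Here is a minimal sketch using hypothetical 5-point ratings (the function name and data are illustrative):

```python
import math
import statistics

def t_vs_midpoint(ratings, midpoint=3.0):
    """One-sample t statistic testing whether the mean rating
    differs from the scale midpoint; compare the result against
    the t critical value with n - 1 degrees of freedom."""
    n = len(ratings)
    mean = statistics.fmean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(n)
    return (mean - midpoint) / se

# Hypothetical 5-point ratings skewed above the midpoint
t = t_vs_midpoint([4, 4, 3, 5, 4, 3, 4, 5, 3, 4])
# t exceeds the 95% critical value (2.262 for 9 df), so the mean
# of 3.9 is significantly above the midpoint of 3
```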

The question used to compute the Net Promoter Score ("How likely is it that you'll recommend this product to a friend?"), also shown in the graph above, generated similar results to the SEQ. It has 11 response options with a neutral response. Of the 2,925 responses, 8% chose the neutral option of 5.
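As a concrete illustration of how the neutral point is handled, here is a sketch of the standard NPS calculation (the ratings are hypothetical):

```python
def net_promoter_score(ratings):
    """NPS on the 0-10 likelihood-to-recommend scale.
    Promoters are 9-10, Passives 7-8, Detractors 0-6, so the
    neutral midpoint of 5 is counted as a Detractor, not dropped."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical ratings: the 5s land in the Detractor bucket
nps = net_promoter_score([10, 9, 7, 5, 5, 8, 6, 10, 3, 9])
# 4 Promoters and 4 Detractors out of 10 -> NPS of 0
```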

Why neutral doesn’t matter much

There are at least three reasons why you shouldn’t concern yourself too much about the inclusion or exclusion of neutral options in usability questions and probably questions about customer loyalty:

  1. Items are summed, averaged or combined: Questionnaire analyses sum or average responses across items, and having four response options instead of five won't matter when you take the average. Even on single-item questions like the SEQ, the mean is reported. For the Net Promoter question, 4s, 5s and 6s are all considered Detractors, not neutral, so having an even or odd number of options probably won't change scores much.
  2. Relative comparisons are more meaningful: Responses to questionnaires aren't terribly valuable by themselves; you need to compare the scores to something meaningful. For the SUS, for example, a score of 68 represents an average score. Comparing a score to a prior version, a benchmark or a competitor gives the numbers meaning, and whether the questionnaire has a neutral response option doesn't matter for the total score as long as you use the same questions.
  3. The effects of changing response options are usually modest: In general, the effects of usable or unusable applications tend to outweigh the much smaller effects of scale points, labels, scale direction, neutral responses and poorly written questions.
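The first point can be illustrated with the standard SUS scoring rule, which sums contributions across all ten items before scaling, diluting the effect of any one item's neutral option (the response set below is hypothetical):

```python
def sus_score(item_ratings):
    """Convert ten 1-5 SUS item ratings into the 0-100 SUS score.
    Odd-numbered items (positively worded) contribute rating - 1;
    even-numbered items (negatively worded) contribute 5 - rating;
    the summed contributions are multiplied by 2.5."""
    assert len(item_ratings) == 10
    total = 0
    for i, rating in enumerate(item_ratings, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical response set; 68 is often cited as an average score
score = sus_score([4, 2, 4, 1, 3, 2, 5, 2, 4, 2])  # -> 77.5
```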

So the next time you find yourself debating whether to include a neutral response option in a survey item, consider that the time is probably better spent finding a meaningful comparison, like an industry benchmark.

In fact, it's probably as important to understand WHY users are selecting the options they do (Dumas 1998). If you do need to summarize the "percent who agree," tread lightly when interpreting those figures, as many factors will affect the percentage (and can be exploited by a disingenuous pollster or researcher).

References

Bishop, G. (1987). Experiments with the middle response alternatives in survey questions. Public Opinion Quarterly, 51(2), 220-232.

Dumas, J. (1998). Usability testing methods: Subjective measures, Part II—Measuring attitudes and opinions. Common Ground (newsletter of the Usability Professionals' Association), October, 4-8.

Presser, S., & Schuman, H. (1980). The measurement of a middle position in attitude surveys. Public Opinion Quarterly, 44(1), 70-85.

Parasuraman, A. (1986). Marketing Research. Reading, MA: Addison-Wesley.

Nielsen, J., & Levy, J. (1994). Measuring usability: Preference vs. performance. Communications of the ACM, 37(4), 66-75.

 
