Usability, Customer Experience & Statistics

Survey Respondents Prefer the Left Side of a Rating Scale

Jeff Sauro • September 14, 2010

Subtle changes to response items in surveys and questionnaires can affect responses.

Many of the techniques for item and scale construction in user research come from marketing and psychology. Some topics can be controversial, sensitive, or confusing, so having the right question with the right response options is important.

Attitudes about usability aren't typically controversial, so you're likely to get more honest answers. Consequently, slight changes to item wording and the number of scale steps are less likely to lead to major differences in scores. Nevertheless, it's important to understand some of those effects when creating and analyzing scales in questionnaires and surveys.

While there are many caveats and exceptions when creating response items, one consistent effect is that respondents tend to favor the left side of a response scale. Take the following two response options:

My College has an excellent reputation
Strongly Disagree   Disagree   Neutral   Agree   Strongly Agree

My College has an excellent reputation
Strongly Agree   Agree   Neutral   Disagree   Strongly Disagree

More students agreed with the second response option than the first [pdf]. The only difference is the order in which the response options are presented (agree or disagree first). If you code the values from 1 to 5 on the first scale and 5 to 1 on the second (so higher numbers always mean stronger agreement), the second response option will have the higher average score.
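The coding step above can be sketched in a few lines. This is a hypothetical illustration with invented response data, not the study's actual data; the helper name is my own.

```python
def reverse_code(responses, n_points=5):
    """Map responses coded on an agree-first scale (5..1) to the
    disagree-first coding (1..5), so higher always means more agreement
    and the two scale directions are directly comparable."""
    return [n_points + 1 - r for r in responses]

# Invented responses on an agree-first scale: 1 = Strongly Agree ... 5 = Strongly Disagree
agree_first = [1, 2, 1, 3, 2, 1, 2]

# Recode so that higher numbers mean stronger agreement
recoded = reverse_code(agree_first)
print(recoded)                      # [5, 4, 5, 3, 4, 5, 4]
print(sum(recoded) / len(recoded))  # mean agreement score
```

Once both versions are on the same coding, any remaining difference in means reflects the presentation order rather than the numbering.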

This phenomenon also held up when a general population rated the qualities of beer using pairs of opposite adjectives, in ratings of personal distress[pdf], and when rating preferences for products A vs. B or B vs. A. Once again, respondents showed a slight bias toward the items presented first (on the left side of the scale).

Examples of both scale directions can be found in usability questionnaires.  Jim Lewis's PSSUQ[pdf] goes from Agree to Disagree and the System Usability Scale goes from Disagree to Agree.

How Large Is the Left-Side Bias?

It's important to keep in mind that this and many of the other effects you get from changing wording, question direction, labeling, and the number of scale steps are small. For example, a typical difference is around 0.2 to 0.3 of a point on a 5-point scale, or about one-third of a standard deviation.

You won't start seeing these differences until your sample size exceeds 100 or so.  As with most effects on response scales, the bias is not universally present in all scales[pdf] and appears to occur more when the item being rated is phrased positively.
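A quick back-of-the-envelope calculation shows why samples this large are needed to detect an effect of about one-third of a standard deviation. The sketch below uses Lehr's rule of thumb (n per group ≈ 16 / d², for a two-sample t-test at alpha = .05 and 80% power); this approximation is my own addition, not a method from the article.

```python
def n_per_group(d):
    """Approximate sample size per group needed to detect a
    standardized effect size d (Lehr's rule: 16 / d**2)."""
    return 16 / d ** 2

# One-third of a standard deviation, the typical left-side bias
for d in (0.2, 1 / 3, 0.5):
    print(f"d = {d:.2f}: about {n_per_group(d):.0f} respondents per group")
```

At d = 1/3 the rule gives roughly 144 respondents per group, consistent with the observation that the bias only becomes detectable once samples exceed 100 or so.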

When measuring attitudes toward usability (which is usually not a sensitive or politically charged subject) it is usually the case that the effects of unusable interfaces outweigh nuances in questionnaire design. For example, using extremely worded items or questions will have a much larger impact on the responses.

Why the Bias?

Research suggests that it is something about both the participants and the items that causes the left-side bias. It is hypothesized to involve participant motivation, reading habits, and education level, in conjunction with a primacy effect, the clarity of the items, and the specificity of the situations being rated.

Key Take-Aways:

  • A dishonest researcher who wants responses to be slightly higher in agreement can place the favorable response options on the left.
  • If you report top-box or top-two-box scores for a stand-alone survey (no comparisons), then putting agree on the left side will inflate the response a bit.
  • If you are comparing the responses to past or future responses, don't worry: whatever bias exists will occur in both surveys. Comparisons are always more meaningful than stand-alone results.
  • You will likely only notice a difference if your sample size exceeds 100 responses in each group.
  • Neither direction is necessarily right or wrong; if you have an existing scale, stick with it.
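The top-two-box scoring mentioned above can be sketched as follows. The response data are invented for illustration, and the function name is my own.

```python
def top_two_box(responses, n_points=5):
    """Proportion of responses in the top two scale points
    (e.g. Agree + Strongly Agree on a 5-point scale)."""
    hits = sum(1 for r in responses if r >= n_points - 1)
    return hits / len(responses)

# Invented responses, coded so 5 = Strongly Agree
responses = [5, 4, 3, 4, 2, 5, 4, 3, 5, 4]
print(f"Top-two-box: {top_two_box(responses):.0%}")  # 70%
```

If the same attitudes had been collected on an agree-first scale, the small left-side bias would nudge this percentage upward, which is why the direction matters for stand-alone reporting but washes out in comparisons.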


  • Chen, J. (1991), "Response-Order Effects in Likert-Type Scales," Educational and Psychological Measurement, 51, 531-540.
  • Holmes, C. (1974), "A Statistical Evaluation of Rating Scales," Journal of the Market Research Society, 16 (April), 87-107.
  • Friedman, H. & Amoo, T. (1999), Journal of Marketing Management, 9(3), Winter 1999, 114-123.
  • Friedman, H. H., Herskovitz, P. J., & Pollack, S. (1994), "Biasing Effects of Scale-Checking Styles on Responses to a Likert Scale," Proceedings of the American Statistical Association Annual Conference: Survey Research Methods, 792-795.
  • Weng, L. & Cheng, C. (2000), "Effects of Response Order on Likert-Type Scales," Educational and Psychological Measurement, 60, 908.

About Jeff Sauro

Jeff Sauro is the founding principal of MeasuringU, a company providing statistics and usability consulting to Fortune 1000 companies.
He is the author of over 20 journal articles and 5 books on statistics and the user experience.
More about Jeff...


Posted Comments



November 23, 2010 | Jeff Sauro wrote:

There's not anything necessarily wrong with using 1-4 in a rating scale. This research makes it clear that the left-side will generally get a higher response despite its labels or numbers.

Including or not-including a middle or neutral response option is the subject of much debate and research and I'll have more to say about it in a subsequent blog post.  

November 22, 2010 | Beverly Taylor wrote:

what's wrong with using 1 - 4 and not giving people a 'middle/ok' choice? 

September 19, 2010 | tedd wrote:

Nice article. I wonder if the bias is due to most web sites having left navigation, or something tied to dominant right/left-hand orientation, or an instinctual built-in preference for approving left items over right ones, or if this is tied to the left-to-right writing custom. Lots of things to consider.

Not so much a comment about the article, but rather a comment about the page. The page fails w3c validation big time. Additionally, the first post demonstrates that the user's submitted data was stored in the database with html entities escaped (good) but shown to the public in raw form (bad). These are simply examples of bad coding. If you want a further explanation, please contact me.

September 18, 2010 | Jeff Sauro wrote:

That's a good question and probably very relevant. The research I cite here is for both English and non-English speakers in the US, Europe and Asia, however I believe all languages represented read left to right.

I suspect for a right-to-left language we'd see an opposite effect (which is what I believe you're wondering). For example, the study Belson, W.A. (1966), "The Effects of Reversing the Presentation Order of Verbal Rating Scales," Journal of Advertising Research, 6 (December), 30-37 found a top-sided bias when the scales were presented vertically.

A frequent hypothesis for why this bias exists has to do with the distance from the initial focus of reading the question to responding. For left-to-right readers, one would expect the bias to occur on whichever item is closest to the last eye position of the question.

September 18, 2010 | Rob Crowther wrote:

Is there any research which suggests the effect is reversed for populations who read right to left? 
