Seven Ways to Test the Effectiveness of Icons

Jeff Sauro, PhD

For as long as user interfaces have had icons, there have been strong opinions about what makes an effective icon.

From the business analyst to the CEO, we all like to tell the designer what’s “intuitive” and what’s “terrible.”

Instead of making decisions based on the pay grade of the people in a meeting, consider using data-driven approaches to make better decisions.

While internal acceptance, branding and style are all important considerations, the true arbiter of success is how well an icon conveys its meaning.

Sure, labels do an amazing job of complementing an icon's image. But interfaces used in many countries must accommodate multiple languages and cramped layouts, making labels a less viable option.

In our experience, there isn’t a single one-shot test for determining whether an icon should stay or go. Instead, a multi-test approach identifies an icon’s strengths and weaknesses and meets most needs.

Icons can be easily tested in an unmoderated or moderated usability test with both small and large sample sizes. Here are seven ways to help determine if your icons are making the interface more efficient or simply adding clutter.

  1. Association: Provide an icon and present participants with the intended function along with three incorrect options displayed in randomized order. Count the number of correct selections and use confidence intervals to determine the percentage of all users that would likely make the intended association.
  2. Reverse Association: Provide an icon definition and present participants with four possible images, displayed in randomized order, to associate with it. Use confidence intervals again to estimate the effectiveness for the entire user population. For example, if 30 out of 39 users selected the right association, you can be 90% confident that between 65% and 86% of all users will also make the association.
  3. Recall: Display an icon for a few moments and then ask participants to write the words or functions they remember about the icon. Summarize the results in word clouds and count up the number of times key words were listed.
  4. Free Response: Show participants an icon and ask them to list words or phrases they associate with it. For example, in one test we wanted to know whether users would associate an icon with a parent/child relationship (not what we intended) or with primary/secondary assignments (what we intended). In counting up the responses from 61 participants, the word “child” or “parent” was used 29 times. The 90% confidence interval provides solid evidence that at least 37% of all users would make the incorrect association. This icon was rejected in favor of an alternative.
  5. Recounting: Have participants describe the icon as if they were explaining it to a friend. For years I’ve explained to my grandmother that I talk with customers to measure the usability of websites and software in a lab or over the phone. She tells her friends that I’m in telemarketing. Hearing how users would distill your icon’s meaning into more universal terms helps you understand the message being delivered (or not delivered).
  6. In Context vs. Out of Context: Icon meanings are often understood only in the context of other icons or of the application being tested. In an HR application, an icon with a dollar sign can mean something different than the same dollar sign in an accounting application. In general, we’ve found that just having a small screenshot with the icon in the context of how it will be seen provides a slightly more reliable result than an icon in isolation. Of course, testing an icon out of context also has value when you want to know how well the correct association will be made, such as on marketing materials or in a suite of applications where the context changes or is unknown.
  7. Time to Locate: Ask participants to find a function or complete an action using the icons as presented in the context of an application. This is more like a classic findability study or click test. Time how long it takes users to successfully click on the correct icon as a measure of success.
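The confidence intervals in tests #2 and #4 can be computed from the count of correct responses. The article doesn’t name a specific method, but for small usability samples a common choice is the adjusted-Wald interval for a binomial proportion; here is a minimal sketch under that assumption (the function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist


def adjusted_wald_ci(successes, n, confidence=0.90):
    """Adjusted-Wald confidence interval for a proportion.

    Adds z^2/2 successes and z^2 trials before applying the
    standard Wald formula, which behaves better at small n.
    """
    # z-score for a two-sided interval at the given confidence level
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)


# 30 of 39 participants made the intended association (test #2 above)
low, high = adjusted_wald_ci(30, 39, confidence=0.90)
print(f"90% CI: {low:.0%} to {high:.0%}")
```

Running this on the 30-of-39 example reproduces (to within rounding) the 65% to 86% interval reported above, and the lower bound for 29 of 61 lands near the 37% cited in the free-response example.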

With enough testing and refinement, perhaps your icons will go on to become one of those famously outdated visualizations, like the disk drive or Rolodex, that we can’t shake from our interfaces.
