The Experiment Requires That You Continue: On The Ethical Treatment of Users

Jeff Sauro, PhD

In 1963, Yale psychologist Stanley Milgram paid volunteers $4 to “teach” another volunteer, called the “learner,” new vocabulary words.

If the learner got the words wrong, he or she received an electric shock!

Or so the teacher/volunteer was led to believe. In fact, no shock was ever given; instead, a person working with Milgram pretended, with great gusto, to be shocked.

So while no one in the study ever received electrical current, the findings were shocking. Some 65% of participants administered what they thought was a 450-volt shock, a lethal dose of electricity!

The intent of the study was to understand how such atrocious crimes could be carried out by generally ordinary people during the Nazi period. After the results were published, attention shifted from the disconcerting findings to the ethical treatment of the study participants. Many participants claimed to have suffered significant psychological damage both during and after the study.

Informed Consent

While Milgram’s methods and findings continue to be debated in the behavioral science community, one clear outcome emerged from the experiments: a reform in the ethical treatment of test subjects was needed. Researchers conducting university-sponsored research now have to go through an Institutional Review Board (IRB), which vets the methods and approves or rejects the research.

What’s more, in the years since the Milgram studies it has become standard in academia to provide participants with what’s called an informed consent form. This document lets participants know the general topic of the study and that they can stop at any time, regardless of what the experiment demands.

And while user researchers aren’t Yale-trained psychologists performing controlled laboratory experiments, they do collect data from volunteers and subject them to questioning, analysis, and observation. Although not as explicitly required in industrial settings, many companies also present study participants with a type of informed consent document. This is what we did when I worked at Oracle and Intuit. Whether participants read or understood those documents is another question.

While we don’t always provide a formal document for participants to read and sign, in our in-person studies we explain the same key points from the informed consent in plain language and ask whether participants have any questions.

But user research happens well beyond the confines of a usability lab. And even when users willingly and explicitly volunteer to do something, it isn’t always exactly clear what they’re volunteering for. New technologies and methods mean a blurring of ethical lines in the name of better products and commercialized business models.

There is a strong demand to better meet customer needs through understanding customer behaviors.  It seems easy in hindsight to identify the Milgrams or the Zimbardos, but at what point does measuring user behavior become more sinister than sanguine?

Facebook

How does seeing friends post pictures of beaches, parties, and smiling faces all over social media affect us? Does it lead to resentment or even depression? It’s an interesting and important psychological question that’s been debated for years. And Facebook helped academia find out. Earlier this year, Facebook made headlines for conducting a large-scale experiment on a small fraction of its billion users (700,000!).

These users unknowingly had their news feeds manipulated for one week to present more positive or more negative postings in their timelines. The results showed that exposure to more positive posts led users to produce more positive posts themselves, and the same was true of negative posts. In other words, emotional sentiment was contagious. Good news didn’t lead people to feel glum; it actually led them to feel better.

But like the Milgram experiments, the results were quickly overshadowed by another example of Facebook using our information to, in some sense, manipulate us.  Was this ethical?  Did Facebook go too far in the collection of information to improve its product?

Facebook discloses the latitude it can take in its privacy policy and terms of use, which themselves have been the subject of controversy. But analysis reveals that few people read, much less understand, the implications of terms and conditions and privacy policies. So what is Facebook’s ethical obligation?

OKCupid

Have you ever wondered whether online dating websites are more effective at finding matches than, say, random pairings or the serendipity of real-life meetings?

In response to the outrage over Facebook, the dating website OKCupid admitted that, among other things, it had paired up people who were poor matches according to its algorithm. But the results of the experiment suggested that telling people they were a good match was as important as their actually being a good match. Users were notified that they had been involved in a study and were shown the correct compatibility percentages after it concluded.

This sort of manipulation was likely covered under the terms and conditions users agree to when using the website. The experiments were done to improve the product and its matching for everyone. But was it unethical for OKCupid to manipulate data and people in this way?

Amazon and Orbitz

Amazon has been a pioneer of many things on the web. In 2000, it was revealed that Amazon was adjusting the prices of some of its products based on shoppers’ past browsing behavior; depending on who you were, the price you paid would differ. More recently, Orbitz came under fire for revealing that it prioritizes hotel listings based on data showing Mac users were 40% more likely to book a 4- or 5-star hotel, so Mac users would be shown more expensive options when searching. Is it ethical to charge different prices or change your inventory lineup based on who a customer is, what they own, or what they’ve done?

Mint.com

Recently, the financial planning website Mint.com invited some users to try a new beta feature that separated business and personal accounts. After a year of collecting data from users who meticulously entered financial information, Mint turned off the feature without notice, and previously entered data and reports were no longer accessible. These actions were also permissible under the terms of use. While software users have become accustomed to ubiquitous beta periods, is it ethical for Mint to remove access to information produced from customers’ labor and private financial data?

Should the Experiment Continue?

At this point it should be clear to every internet and software user that their actions and data are being monitored and used for commercial purposes. As when we walk through public places, our expectations of privacy are reduced. Of course, the time spent on and data collected over the Internet dwarf the typical concerns about time spent in public spaces. While this is a broad topic that won’t be resolved here, below are some thoughts to consider when measuring the user experience.

  • Privacy/Anonymity: Where possible, ensure both privacy and anonymity. When we collect survey data or conduct a usability study, we avoid collecting personally identifiable information unless it’s absolutely necessary. When it is, be sure the participant is informed and take effective measures to keep that information secure (a minimal sketch of one such measure follows this list). The NSA has made headlines for holding data on millions of Americans that is associated with names, Social Security numbers, and addresses. In the commercial world, real names and identities are generally less important.
  • Disclosure: At the very least, companies should disclose what they are doing, or might do, with user information. Unfortunately, most terms-of-use language is so vague or filled with legalese that almost anything can be done with your data.
  • Retention & Access to Customer Data: If you collect data from customers, especially data that requires labor to produce (like providing content), you should make that data available for users to download or use in other forms. The more sensitive the data (family photos, financial information, taxes), the more important this is. If that’s not possible, make it clear up front.
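
For the privacy/anonymity point above, here is a minimal sketch, in Python, of one way to avoid storing personally identifiable information with survey responses: replacing a participant’s email address with a salted hash before the record is saved. The salt, field names, and example record are illustrative assumptions, not something prescribed in this article.

    # Pseudonymize a participant identifier before storing a survey response.
    # The salt and field names below are illustrative assumptions.
    import hashlib

    SALT = "a-secret-salt-stored-outside-the-dataset"

    def pseudonymize(email: str) -> str:
        """Return a stable, non-reversible ID derived from an email address."""
        normalized = email.strip().lower()
        return hashlib.sha256((SALT + normalized).encode("utf-8")).hexdigest()

    def store_response(email: str, answers: dict) -> dict:
        """Keep responses linkable across studies without retaining the email."""
        return {"participant_id": pseudonymize(email), "answers": answers}

    record = store_response("participant@example.com", {"ease_of_use": 6, "likely_to_recommend": 9})
    print(record["participant_id"][:12], record["answers"])

Because the salt is kept out of the dataset, the stored identifier can’t be reversed back to an email address, yet the same participant maps to the same ID across studies.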

 

Comprehension

Privacy, disclosure, and anonymity are necessary but not sufficient for the ethical treatment of users. As user advocates, we can help companies go a step further by improving what users comprehend.

Not only does writing in plain language help, but specifically testing how well users actually understand the meaning of the terms can reveal problems. Chauncey Wilson offered some good tips in his Interactions article on testing consent forms and creating simplified NDAs.

Providing disclosure and ensuring some level of comprehension through plain language doesn’t mean users will suddenly start acting differently. But it does ensure that users who are sensitive to how their data is used, and how their experience might be manipulated, can opt out.

As a statistician and researcher myself, I’m sympathetic to the need to make better decisions with data and would be hesitant to encourage heavy-handed legislation on collecting data from consumer products. But I’m also a user, and I’m sensitive to manipulation or the callous use of my information.

The popular practice of A/B testing is one of the most effective ways for website owners to understand which design elements lead to higher purchase, donation, or registration rates. But at what point do A/B testing and other techniques that manipulate customers and their data become unethical? Professional organizations like the UXPA and the Direct Marketing Association have codes of conduct that offer a good guide for researchers.
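
To make the mechanics concrete, here is a minimal sketch, in Python, of how an A/B comparison is often evaluated: a two-proportion z-test on the conversion rates of two page designs. The visitor and conversion counts are made up for illustration and aren’t drawn from any study mentioned here.

    # Compare conversion rates from an A/B test with a two-proportion z-test.
    # The counts below are invented for illustration.
    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for the difference in rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # normal approximation
        return z, p_value

    # Example: design B converts 230 of 2,000 visitors vs. 200 of 2,000 for design A.
    z, p = two_proportion_z_test(200, 2000, 230, 2000)
    print(f"z = {z:.2f}, p = {p:.3f}")

The statistics are the easy part; whether running such a test on unaware users is appropriate in the first place is the ethical question this section raises.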

Despite the recent negative publicity around these practices, there will continue to be problems with the way companies handle our data and our decisions. There is a certain irony worth noting: the very technology exploited by companies like Facebook (the Internet and social media) is the same technology that quickly exposes the problems with those companies’ policies.

Vague and manipulative policies have ramifications; we’ve seen this in measures of customer loyalty and trust. At some point, bad business practices turn into unethical ones, and when that happens is not always clear ahead of time. With some transparency and improved comprehension coming from the industry itself, I suspect many of the problems introduced by the technology will self-correct. In the interim, testing how well users comprehend such policies is a useful step.
