How To Estimate A Survey Response Rate

Jeff Sauro, PhD

How many people will respond to your survey?  It would be nice if you knew ahead of time.

Here’s a simple technique I use to get an idea about the total number of responses I can expect from a survey invite.

    1. Perform a Soft-Launch (aka Pre-Test): It’s always a good idea to pre-test your survey on actual recipients to work out the kinks in your questions, so use this opportunity to estimate the response rate as well. Instead of sending out invitations to your entire list, send the survey to a portion of the list. How large a portion will vary depending on how large your list is, but I find somewhere between 100 and 500 invitations is usually sufficient. You’ll obviously need to adjust that down if your total list is smaller than 100.
    2. Calculate the response rate from the soft-launch: After a reasonable amount of time has passed, divide the number of people who responded to your survey by the total number you invited. This is your sample response rate. For example, if 20 out of 100 responded, you have a 20% sample response rate.
    3. Compute a confidence interval around the response rate: The estimate of the response rate you obtained from the last step will fluctuate due to random chance, especially when the sample is small. To account for that fluctuation, we compute a binomial confidence interval (a code sketch follows these steps). For example, the 80% confidence interval around the 20-out-of-100 response rate is 15.5% to 25.6%. That means we can be 80% confident the response rate for the entire survey sample will be between 15.5% and 25.6%. Or, put another way, we can be 90% confident the response rate is above 15.5%. These computations are included in the Survey Sample Size package.

      Technical Note: I often use an 80% level of confidence (for a 2-sided confidence interval) because it is the same as 90% confidence for a 1-sided confidence interval. I’m most interested in the lower bound of the response-rate estimate, which is a 1-sided research question. You can use a different level of confidence depending on how sure you need to be of your results. A 2-sided 95% confidence interval for the same 20 out of 100 would mean we could be 97.5% confident the response rate is above 13.3%.

 

    4. Multiply the lower-bound estimate of the response rate by the total number of invites: If we have 950 total people we’re interested in sending the survey to, we can expect at least 147 responses at the end of the survey period (.155*950). Round down any remainder to keep the estimate slightly more conservative. It is also unlikely that I’ll see more than 243 responses (from the upper bound of the confidence interval).

    If you need to know how many invites to send out to maintain a certain margin of error (e.g., 5%) in your estimates, see the Survey Sample Size package.
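To make steps 2 through 4 concrete, here’s a minimal Python sketch. It assumes the adjusted-Wald (Agresti-Coull) formula for the binomial confidence interval, which closely reproduces the figures in this article, but the Survey Sample Size package may compute the interval slightly differently, so expect small rounding differences. The function names are mine, not part of any package.

    import math
    from statistics import NormalDist

    def response_rate_interval(responded, invited, confidence=0.80):
        # Two-sided adjusted-Wald (Agresti-Coull) interval for a proportion.
        # An 80% two-sided interval doubles as a 90% one-sided lower bound.
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.28 for 80%
        n_adj = invited + z ** 2                  # add z^2 pseudo-invites
        p_adj = (responded + z ** 2 / 2) / n_adj  # and z^2/2 pseudo-responses
        half_width = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - half_width), min(1.0, p_adj + half_width)

    def projected_responses(responded, invited, remaining, confidence=0.80):
        # Step 4: multiply the interval bounds by the remaining invites,
        # rounding down to keep the estimate slightly conservative.
        low, high = response_rate_interval(responded, invited, confidence)
        return math.floor(low * remaining), math.floor(high * remaining)

    # The 20-out-of-100 example from step 3:
    low, high = response_rate_interval(20, 100)
    print(f"80% CI: {low:.1%} to {high:.1%}")  # 15.4% to 25.6% here
    # Step 4 applied to the 950-person example; prints (145, 243) here.
    # The article's 147 reflects its rounded 15.5% lower bound.
    print(projected_responses(20, 100, 950))

The lower bound here lands a tenth of a point or so below the article’s 15.5%; different binomial-interval methods (Wilson, exact, adjusted-Wald) disagree slightly at this sample size.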

 

Example 1: Low Response Rate, Large List

In a recent survey I performed a soft-launch to 100 users out of the 2768 in the total list (3.6% of the total). After 2 days I received 7 responses, for an estimated response rate of 7%. I then computed the 80% confidence interval as 4.3% to 11.1%.

This means I can be 90% confident the response rate for the entire list will be at least 4.3% after 2 days. I have a total list of 2768, minus the 100 I used, for a remaining list of 2668. I can expect to receive at least 114 responses after 2 days (.043*2668), plus the 7 I already received, for 121 in total. It’s unlikely I’ll receive more than 295 responses in that time period.

I sent the email out to the 2668 people. After 2 days I received 157 responses, well predicted by the confidence interval.
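Plugging this example’s numbers into the sketch above (using those hypothetical helper functions) reproduces these projections:

    low, high = response_rate_interval(7, 100)             # 4.3% to 11.1%
    at_least, at_most = projected_responses(7, 100, 2668)  # (114, 295)
    print(at_least + 7)  # at least 121 responses in total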

Example 2: High Response Rate, Small List

In another example I had 665 total contacts to invite to a survey. I sent a pre-test out to 186 contacts (28% of my list) and received 47 responses after 1 week (a 25.3% sample response rate). The 80% confidence interval around the response rate was 21.4% to 29.6%. I can be 90% confident of receiving at least 102 more responses after sending the email to the remaining 479 people (.214*479). In total I should have at least 149 responses (102+47) at the end of the survey.

I sent the email out to the remaining 479 people, and after 1 week I received 130 responses, again well predicted by the one-sided 90% lower bound. I left the survey open for an additional week (2 weeks total) and got an additional 20 responses, for a total of 150. Adding the pre-test responses of 47, I had 197 total responses to work with.
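The same quick check, again with the sketch’s hypothetical helpers:

    low, high = response_rate_interval(47, 186)      # 21.4% to 29.6%
    at_least, _ = projected_responses(47, 186, 479)  # lower bound: 102
    print(at_least + 47)  # at least 149 responses in total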

You can also use the same method to estimate your drop-out rate, which can give you a good early idea of whether you’ve got too many questions in your survey. Just compute the confidence interval around the completion rate: the proportion of people who start the survey and successfully complete it.
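For instance, with made-up numbers (say 60 completes out of 90 starts), the same hypothetical helper bounds the completion rate, and its complement bounds the drop-out rate:

    # Hypothetical: 90 people started the survey, 60 finished it.
    low, high = response_rate_interval(60, 90)
    print(f"completion: {low:.1%} to {high:.1%}")        # about 60% to 73%
    print(f"drop-out: {1 - high:.1%} to {1 - low:.1%}")  # about 27% to 40%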

Risks

While the math works well, you have to be sure your pre-test mimics the conditions of the larger survey. Different days of the week or month can substantially affect the response rate. You’ll also want to be sure your pre-test was a random selection from the larger list and not systematically biased in some way.

Finally, if your pre-test collects data for, say, 5 days but you keep your larger survey open for 15 days, then you may be underestimating the response rate. In the second example I kept my survey open twice as long as the pre-test, sent out a reminder email, and got more responses. The total was barely within the predicted confidence interval, but that’s OK; my goal was to get as many responses as possible, not necessarily to keep my predictions accurate (although I really like doing that too!).
