The Key to Running Effective Surveys: Proper Design

January 18, 2010

I recently gave a teleconference on “The Key to Effective Surveys.” There was some great discussion at the end, so I thought I would share those questions and my answers in the hope that they’ll clear up confusion the next time you’re designing your own survey.

Q1. Are customers less likely to respond to an online survey if they know we’ll know who they are by their email address?

A1: This can be an issue, but in general it is not. Of course, the more sensitive the topic, the bigger an issue it becomes, but 99% of the surveys I do raise very few confidentiality concerns.


Q2. How do you decide when to keep a survey blind or non-blind? Of course, a customer satisfaction survey would be by definition a non-blind survey. But what about some of the more attitudinal/behavioral, competitive, and brand-equity-related surveys?

A2: A non-blind survey is one that names the company conducting or sponsoring it; in a blind survey, respondents don’t know who the survey is for. The only time to keep a survey blind is when you’re going to lose something by revealing who is behind it.

For example, if you are Nike and you want to know how you compare to another company, you want unbiased, honest feedback about something you offer versus something a competitor offers, so you keep the survey blind. Otherwise, go non-blind: you are trying to leverage the relationship the respondents have with you, and you want them to feel that connection.


Q3. A follow-up question to the blind vs. non-blind discussion: how does this affect response rate? Does it help the response rate if the survey is not blind? By blind, I mean not disclosing who is conducting/sponsoring the survey.

A3: Keeping a survey blind does not affect the response rate by itself; going non-blind does. A non-blind survey helps the response rate because of the connection the target audience feels with the company.


Q4. Is it better to have few open-ended questions?

A4: Survey design is driven by the objectives of the survey. The challenge with open-ended questions is that you end up with many varied responses.

They are useful if you are looking for general information and want a broad range of ideas about a topic. Surveys are good for getting general statements from, and a representative picture of, a target audience.

In general, 60% of respondents don’t answer open-ended questions, and of the 40% who do, 90% give different answers. You get too much data and not enough information.
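To make that concrete, here is the arithmetic on a hypothetical sample of 1,000 respondents (the sample size is an assumption; the percentages are the ones above):

```python
# Rough arithmetic behind "too much data, not enough information."
# The 1,000-person sample is hypothetical; the percentages come from above.
invited = 1000
answered = invited * 0.40      # ~60% skip the open-ended question
distinct = answered * 0.90     # ~90% of the answers received differ

print(f"Open-ended answers received: {answered:.0f}")   # 400
print(f"Distinct answers among them: {distinct:.0f}")   # 360
```

Roughly 360 distinct answers out of 1,000 respondents leaves very little you can actually tabulate.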


Q5. What is a good response rate (%) for web surveys sent via email?

A5: The response rate depends on the connection the target audience has with the company conducting the survey. For example, if the target audience has a close relationship with you, such as your customers, your employees, or the kids who go to your school, a response rate of 50% is generally considered good. Anything above 50% is very good.

Q6. Can you give examples of effective wording to help the audience be more definitive in their responses instead of answering middle-of-the-road to “play it safe”?

A6: There are different schools of thought on this. There is no wording that can force a respondent to be more definitive. They either are or they are not, and the survey’s purpose, in part, is to measure that. When you remove a neutral response option you are muddying the data; people can be, and sometimes are, genuinely neutral.


Q7. Any feelings on the time of day and day of the week to send an online survey?

A7: This depends on whether you are sending a B2B or B2C survey. Email research shows that for B2B, Tuesday through Thursday is most effective. A Friday email can get overlooked, and if it is missed it becomes part of Monday’s pile. Time of day is not critical.

B2C emails are most effective on nights and weekends, because most people work and will not use work time to answer a non-work-related survey.


Q8. When we do focus groups, we like to end with a written survey to help gather the same information in a quantitative way. Do you think this is a good thing or a bad thing, given that they have already been asked all of the questions in the group?

A8: It is a good thing; in a focus group it especially helps to have one. Groupthink takes place and the loudest voices are often the only ones heard, as they tend to lead the discussion, so you don’t really get what everyone thinks. We have also provided a written survey ahead of the focus group, so the participants had notes, had seen the questions, and had a chance to think about them.

Q9. What is the best survey tool on the market?

A9: Alchemer, because of the service and the features they provide.


Q10. Our survey follows an event; we’ve moved to online surveys. What is the best time to send a survey after an event so people have time to digest the event and can thoughtfully respond?

A10: The sooner the better. Ideally, an email is waiting in their inbox the next day, because people forget the details over time. However, there are exceptions: attendees of a three-day seminar may need more time to digest it than attendees of a couple-hour event.


Q11. What is the best survey method/design to capture price sensitivity among consumers? That is, how do we get accurate information about how much a consumer is willing to pay? (We see examples of consumers saying they will spend, but in reality they don’t.)

A11: We have done conjoint analysis (an experimental design for surveys) to determine how participants value different features and prices. Respondents are presented with a series of carefully constructed scenarios, and their responses are analyzed to estimate the value they place on different prices and features.

For example: a product or service is shown or described, and, faced with what is in front of them, respondents give their opinion. Then certain attributes are changed and they are asked for their preference again. You force them to make trade-offs in the survey, as the sketch below illustrates.
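To show what the analysis side of that trade-off logic can look like, here is a minimal sketch of estimating part-worth utilities from profile ratings. The attributes, levels, and ratings are all hypothetical, and a real conjoint study would use many more profiles and respondents plus dedicated tooling; this is only an illustration.

```python
# A minimal conjoint-style sketch: estimate the "part-worth" utility of each
# attribute level from average ratings of product profiles.
# All attribute names and numbers are hypothetical.
import numpy as np

# Dummy-coded design matrix, one row per profile.
# Columns: intercept, price $20 (vs. baseline $10), premium feature (vs. basic)
profiles = np.array([
    [1, 0, 0],  # $10, basic
    [1, 0, 1],  # $10, premium
    [1, 1, 0],  # $20, basic
    [1, 1, 1],  # $20, premium
], dtype=float)

# Hypothetical average preference ratings (1-10) for each profile.
ratings = np.array([6.0, 8.5, 4.0, 6.5])

# Ordinary least squares recovers how much each attribute level shifts the rating.
partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
intercept, price_jump, premium = partworths

print(f"Utility change, $10 -> $20:       {price_jump:+.2f}")  # about -2.00
print(f"Utility change, basic -> premium: {premium:+.2f}")     # about +2.50
```

Because the hypothetical premium feature’s utility gain (+2.5) outweighs the price penalty (-2.0), these respondents would, in effect, pay the extra $10 for it; that revealed trade-off is what a direct “how much would you pay?” question fails to capture.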
