Sharing stuff I've learned, and things I've thought about...


If your boss asks you to gin up a survey, or if you or your organization is thinking of doing one, this post just might help – anyway, I hope it does…


I can’t remember the last time I was presented with a well-designed survey. I’m pretty sure that most surveys are designed by someone with no survey experience, who has probably had the task delegated to them by someone who provided only vague direction.

Designing a good survey isn’t difficult from a technical standpoint, but it can be a bit labor-intensive. There is a lot of thinking that needs to be done before you log in to Survey Monkey and start entering the survey questions. The principles of good survey design are pretty basic, and reflect ideas that are usually conveyed in any formal management or supervisory education, and doubly so for people who have any formal training in behavioral science research.

The advent of “free” online tools such as Survey Monkey has greatly increased the incidence of online surveys, and greatly reduced their quality. Many businesses and organizations do surveys with little or no preparation or thought because it is so easy to do so, which can give managers the illusion that they are being proactive and innovative when in fact they are collecting random data points disguised as information.

The following is a good outline for the process of designing an effective survey.

Define the survey goals

What do you hope to learn from the survey? I’m talking about the “Big Questions” here, not the specific questions that will appear on the survey. What do you hope to know about the surveyed population after the survey is completed and analyzed that you don’t know now?

This can often be initially defined in one or more brainstorming sessions, but it’s important to make it clear that this is a “fishing” expedition, and that you are not yet at the point of designing the survey. The reason for this important caveat is that once you have collected ideas about what the people in the room want to know, you must take a big step back, look at each item of interest, and determine which of them might result in actionable information. It’s really important to separate the wheat from the chaff here – to distinguish information about the surveyed population that simply satisfies curiosity from information that can influence your organization’s future actions.

This is not to say that non-actionable findings have no value – if we’re surveying our customers or clients, it’s often important to arrive at a deeper understanding of who they are and how they think. However, a shorter, more focused survey is always better than a longer, more meandering one, and it may be better to conduct multiple surveys rather than try to cram everything into one; longer surveys have a much higher abandonment rate, which reduces the size of the respondent pool and skews the results.

Another dimension to examine is the time frame for acting on what you learn. Some findings may be actionable in the short term (attitudes about customer service practices, for example) while others may have a much more distant horizon (new product development). This dimension, too, may suggest splitting the survey so that each one serves a specific and coherent set of goals.

Formulate the questions

Many of the worst surveys I’ve seen have obviously been designed around one of the five basic types of response format:

Yes/No or True/False
Unordered “multiple choice”
Ordered multiple choice or “ranking”
Continuous scale (sometimes mistakenly referred to as a “Likert” scale)
Open-ended question

This is driven in part by a tendency to look ahead to the way the results of the survey will be presented. While presentation is an important part of analyzing the survey results, it should not dictate the design of the survey.

Instead, the goals of the survey (now that they are clear and coherent) should drive formulation of a series of questions that will elicit responses that will hopefully achieve the goal of the survey. This too can be done in a brainstorming session, as you can never have too many candidate questions. The session should then compare, contrast, evaluate and rank the questions that meet a high standard of clarity and specificity. Particular attention should be paid to questions that may be duplicates – are they asking the same question but simply worded differently, or are there really multiple questions here if each is re-worded to be more specific?

Determine appropriate response formats

Only when the list of questions has been winnowed down to those believed to be most effective are you ready to make decisions about response formats.

Each question should be evaluated as to which response format will accurately capture the responses. For example, care should be taken to ensure that a Yes/No format isn’t paired with a question that can elicit a response of “maybe” or “sometimes” or “it depends”. By far, the yes/no format (or its close cousins agree/disagree and true/false) is the one I see most over-used. Yes, it is very easy to analyze the results, but despite the way some people like to think, we do not live in a black-and-white world. Shades of gray predominate, so it’s critically important to verify that if you pose a yes/no question, everyone responding will be able to quickly answer it one way or the other. I know that I will quickly abandon a survey when I encounter the second or third yes/no question that forces me to give a response that is not accurate.

If the questions as stated can all be answered in a single format, then you can proceed with setting up the survey. More likely, though, the questions will be phrased in ways that suggest different formats for different questions. While it is preferable to minimize the number of different response formats, there is no reason the entire survey has to use just one. Each question should be examined to determine if it can be re-worded to fit a different response format without sacrificing the accuracy of the response. If the majority of the questions fit one response format as originally stated, then only the remainder need be examined to see if they can be modified to use one of the more frequent formats.

While a survey where all the questions use the same response format is easier to summarize into a nice graph of some kind, forcing questions into an inappropriate format will compromise the quality of your results. You can still do a good job of analysis with any kind of response format, even multiple response formats within a single survey. The analysis is not that much more difficult, and while presenting it may be more complex, it can still be just as effective.

And now, the rest of the story…

If you follow the procedure I’ve outlined above, you’re going to conduct a survey that is far ahead of 90% of the surveys currently being run. This is not the whole story, however, because you need to think carefully about the other details of running an effective survey. How are you going to collect responses? Web-based surveys can be easy to create, distribute, and collect, and can be very cost-effective. In some cases a web survey is the only practical collection method, but is it appropriate for the audience and the subject matter? If not, it could be so inappropriate as to negate the value of your carefully crafted survey questions. How are your respondents going to be selected? If the respondents volunteer, what kind of self-selection bias are you introducing? Can you ask for additional (e.g. demographic) respondent information to correct for a self-selection bias, or otherwise ensure that the respondents are representative of your target population? If your survey is being done in person, will any bias be introduced by the person administering the survey?
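To make the demographic-correction idea concrete, here is a minimal sketch of one common approach, post-stratification weighting. All the numbers here are made up for illustration: the idea is simply that each response counts more or less depending on how under- or over-represented its demographic group is among your respondents relative to the population you actually care about.

```python
from collections import Counter

# Hypothetical satisfaction scores (1-5), each tagged with the
# respondent's (made-up) age group.
responses = [
    ("under_40", 4), ("under_40", 5), ("under_40", 4), ("under_40", 3),
    ("over_40", 2), ("over_40", 3), ("over_40", 2),
    ("over_40", 3), ("over_40", 2), ("over_40", 3),
]

# Known (or estimated) share of each group in the target population:
# suppose 70% of your customers are under 40...
population_share = {"under_40": 0.7, "over_40": 0.3}

# ...but only 40% of the people who answered the survey are.
counts = Counter(group for group, _ in responses)
sample_share = {g: n / len(responses) for g, n in counts.items()}

# Weight each group by how under- or over-represented it is.
weights = {g: population_share[g] / sample_share[g] for g in counts}

raw_mean = sum(score for _, score in responses) / len(responses)
weighted_mean = (
    sum(weights[g] * score for g, score in responses)
    / sum(weights[g] for g, _ in responses)
)

print(f"raw mean: {raw_mean:.2f}, weighted mean: {weighted_mean:.2f}")
# → raw mean: 3.10, weighted mean: 3.55
```

Here the happier under-40 group is under-represented among the volunteers, so the raw average understates overall satisfaction; the weighted average corrects for that. Note that this only works if you actually collect the demographic information and have a trustworthy estimate of the population shares – which is exactly why the question above about asking for additional respondent information matters.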

These are just some of the things that you need to think about before you conduct a survey, but doing so gets you closer to providing information that you can actually use to make decisions and plans for your organization.