Although quantitative in nature, online surveys also let researchers add a qualitative dimension to the overall findings through unstructured, open-ended questions. The collected feedback can provide enriching, supportive insight akin to a focus-group sound bite. For surveys with smaller sample sizes, the verbatims may even help “break a tie” on content-related analytical conundrums, thus aiding business decision making. However, since no interviewer is present, open-ended questions must be written thoughtfully and used selectively. On the analysis end, researchers must be assiduous and unbiased in their coding and interpretation.
In online surveys, verbatims are collected by programming open-ended or free-response question types. Verbatims are sometimes also gathered from an “Other” answer option: upon selecting it, respondents are asked to specify an unlisted answer in their own words.

Open-ended questions can provide a great deal of insight when used properly. Because they solicit less biased information than structured questions, they are excellent for introducing a topic and learning respondents’ initial thoughts. In a concept test, for example, you would present the product or service (usually text plus an image or video) and first ask a purchase-interest question using a Likert scale – a structured question. Then, before presenting any specific attributes (biasing material) to the respondent, you ask for unbiased feedback via unstructured questions, usually in the form of likes and dislikes. One analytical approach is to compare how often the attributes you, the researcher, consider key – and the respondents’ actual attribute ratings – match what respondents typed earlier in the survey. For example, if “This product looks safe for my child” is one of your key attributes, how many respondents mentioned something related to safety in their open-ended responses before seeing the list of attributes? Was it mentioned positively or negatively? Does this align with their ratings?

Open-ended questions are also very useful in exploratory research, when you are trying to learn about a topic generally and need specific feedback from your sample population. It is important to recognize a few disadvantages of unstructured questions so you can use them sparingly and collect richer insights in the process. The principal disadvantage is that they are costly and time-consuming to code. Coding verbatims requires a researcher to summarize responses in a format that is useful for data analysis.
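To make the attribute-matching idea concrete, here is a minimal Python sketch. All respondent IDs, verbatims, ratings, and keywords below are hypothetical, and the simple keyword search is only a proxy for real human coding; it counts unprompted safety mentions and looks up the corresponding attribute ratings.

```python
import re

# Hypothetical open-ended "likes/dislikes" responses collected
# before the respondent saw the attribute list
verbatims = {
    101: "I like that it looks safe for my child and is easy to clean.",
    102: "Seems sturdy but a bit expensive.",
    103: "The design is nice, though I worry about small parts.",
}

# Hypothetical 5-point ratings for the "safe for my child" attribute
safety_ratings = {101: 5, 102: 4, 103: 2}

# Keywords treated as a rough proxy for an unprompted safety mention
SAFETY_TERMS = re.compile(r"\b(safe|safety|sturdy|hazard|small parts)\b", re.I)

# Which respondents mentioned safety before seeing the attribute list?
mentioned = {rid for rid, text in verbatims.items() if SAFETY_TERMS.search(text)}
print(f"{len(mentioned)} of {len(verbatims)} respondents mentioned safety unprompted")
for rid in sorted(mentioned):
    print(f"Respondent {rid}: safety rating = {safety_ratings[rid]}")
```

In practice a researcher would still read each flagged verbatim to judge whether the mention was positive or negative; the keyword pass only narrows down which responses to compare against the structured ratings.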
So, while it may seem like asking 10 open-ended questions in your survey is critical for your business decision making, having to sift through and organize each respondent’s thoughts may not produce the enlightenment you expected. You must also consider the respondent’s burden in answering 10 unstructured questions. It is much easier and faster to simply select from a list of provided answer choices, whereas open-ended questions require the respondent to consider and type out a coherent response. Implicitly, more weight or value is given to respondents who type out more thoughtful and lengthy responses. Furthermore, you’ll notice less articulate responses in online surveys, since typing a response on a keyboard takes more effort than speaking to an interviewer would. There may also be more spelling errors and misinterpretation resulting from respondents’ use of autocorrect, voice-to-text, and similar tools.
All survey questions and answers need to be assigned a code, usually a number. In the example below, the top row consists of a category for each demographic-trait question and survey question, plus the respondent ID and the date/time the survey was taken. In the gender question (column C), two codes were used, ‘1’ and ‘2’, where ‘1’ represents ‘Female’ and ‘2’ represents ‘Male’. In the screener question PQ1 (column K), there were seven different answer choices, and each number code indicates which answer choice the respondent in the corresponding row selected. In a yes/no question, like Q1 (column L), the codes ‘1’ and ‘0’ represent ‘yes’ and ‘no’. These codes were created and applied automatically by the survey platform.

The coding of unstructured questions, on the other hand, is much more complicated. Codes must be manually developed for every answer provided. Sometimes, based on past projects or other experience, researchers can precode; this is one way to mitigate the key disadvantage of verbatim analysis. In precoding, anticipated responses are recorded in a multiple-choice format, and responses that match an answer category are grouped there and coded accordingly. This is useful when the number of possible responses is limited, so they are more easily predicted. Typically, though, coding must wait until fieldwork has been completed, so keep that in mind when planning project timing. During coding, researchers group a number of similar responses and assign them a category code. Then, all other similar responses are applied to that category code and grouped together. Consider these helpful tips when developing category codes for open-ended responses:
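As a sketch of the difference between automatic and manual coding, the snippet below mirrors the numeric codes described above and shows a simple precoding function. The codebooks, category labels, and the ‘99’ Other code are invented for illustration.

```python
# Codebooks mirroring the structured-question example:
# platform-assigned numeric codes for closed-ended answers
GENDER_CODES = {"Female": 1, "Male": 2}
YESNO_CODES = {"yes": 1, "no": 0}

# Precoding: anticipated open-ended answers mapped to category codes,
# with anything unanticipated routed to an "Other" bucket (code 99)
PRECODES = {"price": 1, "quality": 2, "convenience": 3}
OTHER_CODE = 99

def precode(response: str) -> int:
    """Return the category code for an anticipated response, else Other."""
    return PRECODES.get(response.strip().lower(), OTHER_CODE)

print(precode("Quality"))      # anticipated response
print(precode("brand image"))  # unanticipated, falls into Other
```

Anything landing in the Other bucket would still need manual review after fieldwork, which is why precoding only works well when the range of likely answers is narrow.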
- Category codes should be mutually exclusive – i.e., each response can fit into only one category with no overlap.
- Category codes should be collectively exhaustive – i.e., every response can fit into one of the categories and isn’t left out.
- You may need to include an “Other” or “N/A” category to keep the categories collectively exhaustive, but keep in mind that no more than 10% of the total sample should fall into it.
- Category codes should be assigned to critical issues, even if no respondent mentions them, which can be telling on its own.
- Use codes that retain as much detail as possible (be as specific as you can).
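The tips above can be sketched as a simple first-match coding pass. In this hypothetical Python example (category labels, codes, and keywords are all invented), ordered keyword categories keep assignments mutually exclusive, a catch-all “Other” code makes the scheme collectively exhaustive, and the share of “Other” responses is reported so it can be checked against the 10% guideline.

```python
# Categories are checked in order, so each response receives exactly one
# code (mutually exclusive); the final "Other" bucket catches everything
# else (collectively exhaustive).
CATEGORIES = [
    (1, "Safety", ("safe", "hazard")),
    (2, "Price", ("price", "expensive", "cheap")),
    (3, "Design", ("design", "look", "color")),
]
OTHER_CODE = 98

def assign_code(response: str) -> int:
    text = response.lower()
    for code, _label, keywords in CATEGORIES:
        if any(k in text for k in keywords):
            return code  # first match wins: one category per response
    return OTHER_CODE

responses = [
    "Looks safe for my child",
    "Too expensive for what it is",
    "Love the design",
    "Arrived quickly",
]
codes = [assign_code(r) for r in responses]
other_share = codes.count(OTHER_CODE) / len(codes)
print(codes, f"Other share: {other_share:.0%}")
```

In this toy run the Other share exceeds 10%, which would signal that the category scheme needs another pass: either a new category for the uncaptured theme or more specific keywords in the existing ones.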
The data collected from open-ended questions can provide enriching insights that add personality to your reporting. Like qualitative insights gathered in focus groups, verbatims can support key findings and make the data feel more human and real. Proceed with caution, however, as coding and analysis are time-consuming and costly. Use free-response questions only when the data will help support business decision making – the information should be “need to have”, not “nice to have”. Keep in mind that online surveys often yield shorter and less detailed responses for the reasons discussed above. Post-fielding, researchers must code the verbatims into distinct and appropriate categories, grouping similar responses accordingly. Category codes should be mutually exclusive, collectively exhaustive, assigned to critical issues regardless of mention, and as specific as possible.