Market research aims to extract actionable insights that leaders can use to inform their decision-making processes. But these insights need to be digestible, accurate, and reliable, and the raw data from your surveys and online questionnaires don’t always fit into these categories. That’s why it’s so important to process and statistically adjust your findings to eliminate bias and better represent your target market—because failure to do so could result in inaccurate or misleading reports. So today, let’s go over three simple techniques that you can use to improve the quality of your data and ensure that your reports are top-notch.
Weighting data
Let’s say you want to survey your existing consumers about new product features they’d like to see. The data from respondents who have used your product for years and are highly familiar with it will be far more relevant and valuable than data from brand-new users.
Weighting data is a way of adjusting your dataset to reflect this difference in value across your population. Our example here gives a weight of 3.0 to heavy users, 2.0 to occasional users, and 1.0 to light or non-users. Each respondent’s answers are multiplied by their weight, so the responses of heavy users carry more influence in the resulting data.
The key idea behind weighting data is to incorporate the relative importance of each respondent based on their specific characteristics. This helps ensure your data better reflects the target population you are trying to reach. In other words, if your main goal is to add a new feature that benefits your current users, their opinions must come through with a sharper focus in your reports.
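To make the mechanics concrete, here’s a minimal Python sketch. The weights match the example above; the respondents and their 1-7 ratings are invented purely for illustration:

```python
# Usage-based weights from the example above.
weights = {"heavy": 3.0, "occasional": 2.0, "light": 1.0}

# Hypothetical respondents: (usage segment, feature-interest rating on a 1-7 scale)
responses = [("heavy", 6), ("heavy", 7), ("occasional", 4),
             ("light", 2), ("light", 3)]

# Plain average treats every respondent equally.
unweighted = sum(r for _, r in responses) / len(responses)

# Weighted average: each rating is multiplied by its segment's weight,
# and the total is divided by the sum of the weights applied.
weighted = (sum(weights[seg] * r for seg, r in responses)
            / sum(weights[seg] for seg, _ in responses))

print(unweighted)  # 4.4
print(weighted)    # 5.2
```

Notice how the weighted average shifts toward the heavy users’ higher ratings, which is exactly the “sharper focus” described above.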
How to weight the values
Another area where weighting is useful is meeting a target quota. With a smaller sample size, your data won’t always match the distributions confirmed by larger research organizations. For example, you might know that 18-24-year-olds make up 12% of the total pet owner population, but your sample data might only show 8%. With weighting, you can increase the statistical importance of 18-24-year-old pet owners so they effectively represent 12% of your sample. This makes your data more accurate to the real-world population.
How do you calculate the weighting factor? The process is simple: divide the population distribution by your sample distribution. In the example above, 12 / 8 gives a weighting factor of 1.5 for 18-24-year-old pet owners.
But what if your sampled distribution happens to be higher than the real-world distribution? In that case, the same division simply yields a fractional weighting factor of less than one, which you apply to your data in exactly the same way. Not too complicated at all!
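The whole calculation fits in one line of Python. The percentages below are the ones from the example, plus a hypothetical over-represented case to show a fractional factor:

```python
def weighting_factor(population_share: float, sample_share: float) -> float:
    """Population distribution divided by sample distribution."""
    return population_share / sample_share

# Group is under-represented in the sample (12% real-world vs. 8% sampled).
print(weighting_factor(12, 8))   # 1.5

# Hypothetical over-represented group (12% real-world vs. 16% sampled).
print(weighting_factor(12, 16))  # 0.75
```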
A note of caution about weighting data
But do be aware: Good sampling is intended to be self-weighting. The random composition of respondents has been carefully selected to reflect the target market as accurately as possible, and weighting can throw off this self-weighting property. Additionally, weighting can inject bias into your data if misused, decreasing its reliability. So always be careful when weighting data, and note any weighting procedure used in your reports.
Variable respecification
Another helpful method for statistically adjusting your data is variable respecification. This technique aims to make your data more digestible and actionable. For example, if you ask respondents about their intent to purchase on a 7-point scale, you have seven different response categories, and some percentage of respondents falls into each one.
The problem is that it can be difficult to articulate seven categories in a digestible and actionable way. So, we can collapse these seven categories into just three: most likely to buy, neutral, and least likely to buy. This allows you to put all the data from respondents who scored above five into one category. You could also create new variables that combine several existing ones. The result of this variable respecification is that the insights from your data are clearer and far more digestible.
Dummy variables
Another kind of variable respecification that’s worth mentioning here is dummy variables. These are also known as binary, dichotomous, instrumental, or qualitative variables and are generally used when category coding is not meaningful for statistical analysis. Using dummy variables can simplify your data sheet and make different calculations easier. Instead of putting the categories into your datasheet, you could use a dummy variable of either 1 (yes) or 0 (no) for that category.
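Here’s a small illustration of that recoding in Python, using a made-up “pet type” column. Each category becomes its own 1/0 column (often called one-hot encoding):

```python
# Hypothetical categorical column from a datasheet.
pets = ["dog", "cat", "dog", "bird"]
categories = sorted(set(pets))  # ['bird', 'cat', 'dog']

# One dummy (1 = yes, 0 = no) per category for each respondent.
dummies = [{c: int(p == c) for c in categories} for p in pets]

print(dummies[0])  # {'bird': 0, 'cat': 0, 'dog': 1}
```

With the categories turned into 1s and 0s, counts, proportions, and regression inputs all become straightforward arithmetic.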
Scale transformation
Likert scales are the most common type of rating scale. They measure a person’s agreement or disagreement with a statement, with response options arranged along a continuum from strongly agree to strongly disagree, and they’re commonly used for measuring respondents’ purchase interest. Stapel scales are less common than Likert scales. They measure a person’s attitude toward something using a single adjective, such as “good” or “bad”: the respondent rates the object or concept on a scale of -5 to +5, with -5 being the most negative and +5 the most positive. The Stapel scale is often used for gauging brand image.

There are many more scales and measurements that you might use in your survey. The problem, however, is that you cannot readily compare data across scales because each uses different values and standards. This is where scale transformation comes in.
With scale transformation, you can correct for the differences between your scales. The specific process is called standardization. The details vary with the scales involved, but by applying this process, you can easily compare variables measured on different scales. Another place where scale transformation is often necessary is in surveys of international markets, where other units of measurement may be used.
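One common form of standardization is converting each scale’s raw values to z-scores (mean 0, standard deviation 1), which puts a 5-point Likert scale and a -5 to +5 Stapel scale on the same footing. A sketch using Python’s standard library, with made-up ratings:

```python
from statistics import mean, stdev

def standardize(values):
    """Convert raw scale values to z-scores (mean 0, standard deviation 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

likert = [1, 2, 3, 4, 5]    # hypothetical 5-point agreement ratings
stapel = [-5, -2, 0, 2, 5]  # hypothetical -5..+5 Stapel ratings

print([round(z, 2) for z in standardize(likert)])
print([round(z, 2) for z in standardize(stapel)])
```

After standardization, a z-score of +1 means “one standard deviation above that scale’s average” regardless of which scale produced it, so the two sets of ratings become directly comparable.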
The takeaways
Sometimes, adjusting your data simply enables you to deliver more digestible insights to stakeholders. Other times, some kind of adjustment is necessary to get reliable and real-world accurate results. Here are some key takeaways:
- Weighting your data can make it more reflective of your target population.
- Variable respecification can help you make the variables in your raw data more consistent with your research objectives.
- Employ dummy variables if the coding used is not conducive to statistical analysis.
- Utilize scale transformation to compare data across all your scales.
But overall, remember that storytelling is the most compelling way to communicate your results to stakeholders. It will definitely take work finding the story in your raw data, but statistically adjusting your data can better reveal the story and help achieve the main goals of your market research.
At aytm, we aim to help companies better listen and understand their consumers to innovate and grow their organizations. But we also understand that research can be intimidating if it’s not part of your day-to-day. That’s exactly why we created this step-by-step guide to help you tackle your next survey project with confidence.
Editor's note: This article was originally published in 2018 but updated in 2023 for relevancy.