Comparative Scaling Techniques Simplified

Posted Jan 25, 2018

You want to use comparative scales when you have two or more objects (stimuli) that you want respondents to compare at the same time. The major advantage of comparative scales for you as a researcher is that small differences between your test stimuli can be detected. From a respondent perspective, comparative scales are easily understood. Respondents are shown the stimuli simultaneously, allowing them to compare the objects from the same reference points, which helps reduce carryover effects (such as order bias) from one judgment/response to another. As respondents compare the objects, they are forced to make a choice between them. The resulting data has only ordinal (rank order) properties. It may be helpful to first brush up on the Fundamentals of Market Research Scaling Techniques to ensure you have a fresh understanding of the types of scales used in market research.

There are a few types of comparative scales, so how do you know which one to choose? Let’s begin by discussing the primary techniques briefly, along with the advantages and disadvantages of each.

Paired Comparison

A technique in which the respondent is presented with only two stimuli at a time, paired comparison requires respondents to select one of the stimuli according to some measure, like purchase intent. Paired comparison is the most widely used comparative scaling technique.

Several analytical methods can be used with paired comparison data, such as calculating the percentage of respondents who preferred one stimulus to another. Using the assumption of transitivity, you can also derive a rank order: if brand X is preferred to brand Y, and brand Y is preferred to brand Z, then brand X is assumed to be preferred to brand Z, and the brands rank in X, Y, Z order.
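Here is a minimal sketch of that analysis in Python, assuming a small set of hypothetical paired-comparison responses (the brand names and answers are illustrative, not from an actual study):

```python
from itertools import combinations

# Hypothetical paired-comparison responses: each respondent picked one
# brand from each pair (brand names and answers are illustrative only).
responses = [
    {("X", "Y"): "X", ("X", "Z"): "X", ("Y", "Z"): "Y"},
    {("X", "Y"): "X", ("X", "Z"): "Z", ("Y", "Z"): "Y"},
    {("X", "Y"): "Y", ("X", "Z"): "X", ("Y", "Z"): "Y"},
]
brands = ["X", "Y", "Z"]

# Percentage of respondents who preferred the first brand in each pair.
for pair in combinations(brands, 2):
    wins = sum(1 for r in responses if r[pair] == pair[0])
    print(f"{pair[0]} preferred to {pair[1]}: {wins / len(responses):.0%}")

# Derive an overall rank order by counting total wins per brand,
# which leans on the transitivity assumption described above.
win_counts = {b: 0 for b in brands}
for r in responses:
    for winner in r.values():
        win_counts[winner] += 1
print(sorted(brands, key=win_counts.get, reverse=True))
```

Counting total wins per brand is one simple way to apply the transitivity assumption; ties or intransitive response patterns would need a closer look in practice.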

Rank Order

Rank order requires respondents to rank three or more stimulus objects according to some measure, like overall brand preference among several brands or specific brand attributes. Rank order data is also frequently obtained in conjoint analysis because it forces respondents to discriminate among stimuli. Ranking is a simple question for respondents to understand, but you do not want to require them to rank too many objects or you risk fatigue. Compared to paired comparison, rank order scaling more closely mimics the real shopping environment.
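A common way to summarize rank order data is to compute the mean rank per stimulus. Here is a short sketch, again using hypothetical logos and rankings rather than real data:

```python
# Hypothetical rank-order data: each respondent ranks four logos,
# where 1 = most preferred (names and ranks are illustrative only).
rankings = [
    {"A": 2, "B": 1, "C": 3, "D": 4},
    {"A": 1, "B": 2, "C": 3, "D": 4},
    {"A": 3, "B": 1, "C": 2, "D": 4},
]
logos = ["A", "B", "C", "D"]

# Mean rank per logo: a lower mean rank means the logo was preferred more often.
mean_ranks = {
    logo: sum(r[logo] for r in rankings) / len(rankings) for logo in logos
}

# List logos from most to least preferred by mean rank.
for logo in sorted(logos, key=mean_ranks.get):
    print(f"Logo {logo}: mean rank {mean_ranks[logo]:.2f}")
```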

Constant Sum

In constant sum scaling, respondents allocate a fixed number of units, such as dollars or points, among a set of stimuli according to some measure. For example, you may want to ask respondents to allocate 100 points among four different package designs in a way that reflects their likelihood to purchase. The more points allocated to a package design, the more likely they are to purchase a product with that design. If a respondent would never purchase a product with a particular design, he or she would assign it zero points.

Unlike paired comparison and rank order scales, constant sum scales have an absolute zero: a stimulus object can receive zero points, 50 points is twice as many as 25 points, and the difference between five and 10 points is the same as the difference between 30 and 35 points. Therefore, constant sum scaling is sometimes treated as ratio data (which allows you to compute advanced statistical analyses). However, it is important to remember that you are only asking respondents about a limited number of objects, so the resulting data cannot be generalized to stimuli not included in the study.

The key benefit of the constant sum scale is that it allows for fine discrimination among objects without taking much of respondents' time. You will still want to limit the number of stimuli to help prevent respondent fatigue. From an analysis perspective, you'll not only know which stimulus was preferred, but also how much more it was preferred than the other stimuli.
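To illustrate, here is a sketch of how you might summarize constant sum data, using the 100-point package design example from above with made-up allocations:

```python
# Hypothetical constant sum data: each respondent allocates 100 points
# across four package designs (names and numbers are illustrative only).
allocations = [
    {"Design 1": 50, "Design 2": 30, "Design 3": 20, "Design 4": 0},
    {"Design 1": 40, "Design 2": 40, "Design 3": 10, "Design 4": 10},
    {"Design 1": 60, "Design 2": 20, "Design 3": 20, "Design 4": 0},
]
designs = list(allocations[0])

# Sanity check: every respondent's allocation should sum to 100 points.
assert all(sum(a.values()) == 100 for a in allocations)

# Average points per design tells you not just which design won,
# but by how much it was preferred over the others.
for design in designs:
    avg = sum(a[design] for a in allocations) / len(allocations)
    print(f"{design}: {avg:.1f} points on average")
```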

The Takeaway

Comparative scaling techniques are simple for respondents to understand and complete with minimal effort. When used appropriately, they also add variety to your survey – useful alternatives to the popular radio and checkbox questions.

Use paired comparison scaling (side-by-side comparison question type) when you want respondents to compare two objects simultaneously, and you want to compute the percentage of respondents who selected one object over the other.

Use rank order scaling (reorder-ranking question type) when you want respondents to rank three to seven stimuli according to some measure – like overall preference – and want to know the order in which the stimuli were ranked, but don't necessarily need to know how much more logo B was preferred to logos C and D. For example, it is enough for you to know that respondents ranked them in order of highest preference: logos B, C, D.

Use constant sum scaling (distribution question type) when you want to understand the extent of differences between stimuli. So, you not only want to know that logo B was preferred to logos C and D, but you also want to know how much more it was preferred, if at all. For example, out of 100 total points, respondents, on average, allocated 40 points to logo B, 40 to logo C, and 20 to logo D. It can be said that, among the stimuli tested, respondents preferred logos B and C equally, and twice as much as logo D. This provides you with additional data you would not have collected using rank order scaling. If rank order scaling had been used, respondents would have been forced to place either logo B or logo C at the top, when in reality both were preferred equally. You'll need to decide how much flexibility (in terms of discriminating among answer choices) you want to provide respondents, which will depend on what will be most useful to your analysis and decision making.
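As a quick sketch of that logo example (using the illustrative 40/40/20 averages above, not real results), here is how the two approaches would differ:

```python
# Hypothetical average allocations out of 100 points per respondent.
avg_points = {"B": 40, "C": 40, "D": 20}

# Ratio comparisons are meaningful because constant sum data has a true zero:
# logos B and C were preferred equally, and twice as much as logo D.
baseline = avg_points["D"]
for logo, points in avg_points.items():
    print(f"Logo {logo}: {points} points ({points / baseline:.1f}x logo D)")

# Forcing these averages into a rank order would hide the B/C tie.
forced_ranking = sorted(avg_points, key=avg_points.get, reverse=True)
print("Forced rank order:", forced_ranking)
```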