Rank-order Data: Which Question Type Should You Use?

Posted Dec 07, 2020

As with many things in life, there are multiple ways you can get your target customers to prioritize items.  On the aytm platform, there are three methods you can use to see items ranked: the Reorder question type, the Side-by-Side question type, and the MaxDiff question type.  This raises the question: if there are three ways of accomplishing the same thing, which one should you use?

In a recent survey, we asked respondents to rank order seven attributes of a product, but we split the sample into three groups: one used the Reorder exercise, one the Side-by-Side, and the third the MaxDiff.  In this article, we will discuss the differences among the methods and, most importantly, seek to understand whether there are any differences in the results each produces.  But to start, let’s discuss the methods and some of the trade-offs of each.

Reorder Question

The Reorder question is the most standard ranking exercise.  Respondents are presented with a list of items, features, names, or whatever they’re supposed to rank, and then they drag and drop each one into its position in the ranking by importance, desirability, and so on.

Side-by-Side Question

The Side-by-Side question produces the same result as the Reorder question: a rank order of the items by each respondent.  However, the Side-by-Side presents the options as pairs, and respondents click whichever of the two is better.  The question dynamically presents pairs of options until enough data has been collected to infer the respondent’s full rank ordering.
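
To make the mechanics concrete, here is a minimal sketch of how pairwise answers can recover a full ranking, treating the respondent as the comparator inside a standard comparison sort.  The item names and preference values are hypothetical, and aytm’s actual adaptive pairing logic may differ; this is illustrative only.

```python
from functools import cmp_to_key

# Hypothetical respondent: true preferences, higher = more preferred.
# These names and values are illustrative, not the study's actual items.
true_utility = {"price": 7, "battery life": 6, "operating system": 5,
                "storage": 4, "screen size": 3, "camera": 2, "privacy": 1}

comparisons = 0

def ask_respondent(a, b):
    """Simulate one Side-by-Side screen: the respondent clicks the better item."""
    global comparisons
    comparisons += 1
    return -1 if true_utility[a] > true_utility[b] else 1

items = list(true_utility)
ranking = sorted(items, key=cmp_to_key(ask_respondent))

print(ranking)      # full best-to-worst order recovered from pairwise answers
print(comparisons)  # fewer than the 21 possible pairs of 7 items
```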

MaxDiff Question

The MaxDiff question type is actually far more than just a ranking question. The MaxDiff exercise, like the Side-by-Side, also splits up the items into sets of four or five options at a time and asks respondents to merely indicate which is the best and worst of the set.  Respondents complete multiple tasks of this sort although they do not need to see every possible combination of options; the analysis will fill in the gaps by applying learnings from the whole sample of respondents to infer each individual respondent’s preferences for all items.  

What makes MaxDiff more than just a ranking question is that it also provides robust information about the magnitude of preference.  Whereas a Reorder exercise can tell you which item is 1st, 2nd, and so on, it cannot provide information about how close those positions are.  The 1st and 2nd choices might be virtually tied in a respondent’s mind, or the one in first place could be a clear winner with preference for the 2nd much lower.  With MaxDiff, these magnitudes can be determined.
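
For intuition, a simple best-minus-worst counting sketch shows how those picks can be turned into preference scores.  The tasks and items below are hypothetical, and this counting approach is a simplified stand-in for the hierarchical Bayesian estimation the platform actually uses (described under Analysis and Results below).

```python
from collections import defaultdict

# Hypothetical MaxDiff responses: each task shows a subset of items and
# records which the respondent picked as best and which as worst.
tasks = [
    {"shown": ["price", "battery", "camera", "privacy"], "best": "price",   "worst": "privacy"},
    {"shown": ["battery", "os", "storage", "screen"],    "best": "battery", "worst": "screen"},
    {"shown": ["price", "os", "camera", "screen"],       "best": "price",   "worst": "camera"},
]

counts = defaultdict(lambda: {"best": 0, "worst": 0, "shown": 0})
for t in tasks:
    for item in t["shown"]:
        counts[item]["shown"] += 1
    counts[t["best"]]["best"] += 1
    counts[t["worst"]]["worst"] += 1

# Best-worst score: proportion chosen best minus proportion chosen worst.
scores = {i: (c["best"] - c["worst"]) / c["shown"] for i, c in counts.items()}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:10s} {s:+.2f}")
```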

Design Differences in Methods

Survey Length

On the aytm platform, the Reorder task uses only one screen and helps keep the length of interview shorter.  For example, even with the seven attributes we tested, we count this as adding one question to the survey.  By contrast, we count the Side-by-Side exercise (with seven items) as effectively adding two questions and the MaxDiff as adding six questions, although the number of screens respondents see is higher than this for both methods.  In our survey specifically, respondents completed the Reorder exercise in about 38 seconds on average, less than half the time it took to complete the Side-by-Side (1:28) and less than one-third the time it took to complete the MaxDiff exercise (2:31).

The trade-off, of course, was that the MaxDiff exercise, in this case, took a full minute longer for respondents to complete than the Side-by-Side.
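
As a quick sanity check on those comparisons, the quoted averages can be put side by side once converted to seconds:

```python
# Average completion times reported above, converted to seconds.
reorder, side_by_side, maxdiff = 38, 1 * 60 + 28, 2 * 60 + 31

print(reorder / side_by_side)   # ~0.43 -> less than half the Side-by-Side time
print(reorder / maxdiff)        # ~0.25 -> less than one-third the MaxDiff time
print(maxdiff - side_by_side)   # 63 -> MaxDiff took about a minute longer
```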

Number of items

As mentioned, the Side-by-Side and MaxDiff questions take more time when there are seven items, but they do allow for considerably longer lists.  In fact, our typical recommendation for Reorder questions is that seven items approaches the limit of what respondents can reasonably evaluate at one time; the platform’s limit is ten.  Side-by-Side and MaxDiff are less cognitively straining for respondents, and because the advanced analysis draws inferences from a sample of data, longer lists of items can be used with these tools.

Other Considerations

At aytm, we’ve recognized that on a typical survey, about half of the respondents entering the survey are using a mobile device.  As a result, we have always ensured that the surveys respondents see are designed with a “mobile first” philosophy and do not create biased responses for those on a mobile device.  Given the different ways respondents complete these tasks (e.g., drag-and-drop vs. clicking), we also included device type (mobile phone versus computer) in the analysis as both a control and a verification that responses are not biased by device type.

Methods

Because mobile phones and computers accounted for 90% of device usage in this survey, the analysis focuses only on those two device types, removing tablets and other devices.  This limited extraneous variation within device type, such as tablets having larger screens than phones.  The effective sample size is thus 901 respondents.

The three methods were embedded in a larger study on different types of technology and respondents’ buying preferences.  This portion asked respondents to indicate their preferences for different features when shopping for cell phones.  Respondents were randomly assigned to indicate their preferences with either MaxDiff, Side-by-Side, or Reorder tools.

Analysis and Results

For the purposes of illustration and better interpretation of the results, we used the Score, the reverse of the rank order, where “7” represents the best feature and “1” represents the worst, for the Reorder and Side-by-Side responses. For the MaxDiff, the aytm platform used a hierarchical Bayesian approach to output utility scores for each individual respondent. We recoded these values to the reversed rank order to be consistent with the other two exercises.
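
A small sketch of that recoding, with hypothetical feature names and utility values: ranks are reversed into Scores so that 7 is best, and MaxDiff utilities are mapped onto the same scale by ranking them.

```python
import pandas as pd

# Hypothetical example of the recoding described above. Ranks run 1 (best)
# to 7 (worst); the Score reverses this so 7 = best and 1 = worst.
ranks = pd.Series({"price": 1, "battery": 2, "camera": 3, "os": 4,
                   "storage": 5, "screen": 6, "privacy": 7})
score = 8 - ranks  # for 7 items: rank 1 -> score 7, rank 7 -> score 1

# For MaxDiff, individual-level utilities (illustrative values) are recoded
# to the same reversed-rank scale so all three tools share one metric.
utilities = pd.Series({"price": 2.1, "battery": 1.4, "camera": 0.3, "os": 0.2,
                       "storage": -0.5, "screen": -1.2, "privacy": -2.3})
maxdiff_score = utilities.rank(ascending=True)  # highest utility -> 7

print(score.sort_values(ascending=False))
print(maxdiff_score.sort_values(ascending=False))
```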

When comparing the results from each, we noticed immediately that there appears to be a difference among the tools.  Notice in the figure that MaxDiff and Side-by-Side were largely consistent with one another, although the bottom two options were flipped.  The Reorder exercise, on the other hand, was consistent with the other exercises for the top three of the seven features, but there was much more switching among the bottom four.  Further, the differences in average rank were greater between the top and bottom in the Reorder exercise.  If we assume that the results from the MaxDiff and Side-by-Side are better, then that would imply that Reorder is only good for identifying the top two or three options.

Results of the Statistical Analysis

To further explore the differences statistically, a mixed-model ANOVA was conducted to test a 7x2x3 design of feature by device by tool on the preference score described above, with a sphericity correction applied to the within-subjects degrees of freedom.  The feature variable was set as a within-subjects variable with the following categories: price, battery life, operating system, amount of data storage, size of the screen, quality of the camera, and privacy features.  Both device (mobile vs. PC) and tool (MaxDiff vs. Side-by-Side vs. Reorder) were set as between-subjects variables.
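
As a rough illustration of how such an analysis could be run, the sketch below uses Python’s pingouin library.  Note the simplifications: pg.mixed_anova handles one within- and one between-subjects factor, so this covers only the feature x tool portion of the 7x2x3 design, and the data file and column names are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per respondent x feature, with the
# 1-7 preference score plus the assigned tool and device type.
df = pd.read_csv("preference_scores.csv")  # columns: id, feature, tool, device, score

# Mixed ANOVA: feature is within-subjects, tool is between-subjects.
# correction=True reports sphericity-corrected (Greenhouse-Geisser) results,
# consistent with the non-integer degrees of freedom reported in the article.
aov = pg.mixed_anova(data=df, dv="score", within="feature",
                     subject="id", between="tool", correction=True)
print(aov.round(3))

# Post-hoc pairwise comparisons of tools within each feature, with a
# Bonferroni adjustment for multiple testing.
posthoc = pg.pairwise_tests(data=df, dv="score", within="feature",
                            between="tool", subject="id", padjust="bonf")
print(posthoc.round(3))
```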

Given that the objective of these tasks is to differentiate between features and force them to have different scores, there was, as expected, a significant main effect of feature, F(5.7, 5069.3) = 118.59, p < .001.  The primary interest, however, is whether the effect of feature on preference changes according to device and/or tool.

The tool x feature interaction was statistically significant, F(11.3, 5069.3) = 7.85, p < .001, indicating that feature preference changes according to which tool is being used.  Post-hoc pairwise comparisons indicated that across all features, there was no significant difference at the ⍺ = .05 level in preference between the MaxDiff and Side-by-Side tools. For all but one feature (“screen size”), preferences measured by Reorder were significantly different from at least one of the other tools, and were significantly different from both for three features.

The device x feature interaction was not significant, F(5.7, 5069.3) = 1.76, p = .11, validating the assumption that results are consistent across mobile phone and PC devices.  Furthermore, the device x feature x tool interaction was also not significant, F(11.3, 5069.3) = 0.89, p = .556, indicating that the interaction between tool and feature holds across both mobile phone and desktop devices.

Conclusion

With three different tools available to measure and prioritize preferences, it is important to recognize their individual pros and cons.  There are well-known advantages and disadvantages among the MaxDiff, Side-by-Side, and Reorder tools based on design considerations, like survey length and the number of items to rank.  Beyond this, it is important to establish whether these tools are interchangeable for providing meaningful results.  This study looked at how consistent the tools were across device types and with each other.  Results showed that device type has no impact: all three tools produce similar results regardless of the device used.  On the other hand, the MaxDiff and Side-by-Side tools produced consistent results, whereas the Reorder tool often differed from one or both of them.

With this difference in mind, it is recommended that a MaxDiff or Side-by-Side tool be preferred when prioritization of items is a key objective of a study, as these give the most reliable results across the full list of items.  The Reorder tool, however, still offers its design convenience when used as a pulse measure of top-rated items.  As with all best practices, identifying the main purpose of the ranking exercise will help determine which trade-offs should be made to obtain the best quality data.

This article was originally published by the Insights Association.
