MaxDiff is an approach for obtaining preference/importance scores for multiple attributes (brand preferences, brand images, product features, advertising claims, etc.). Relative to a standard ranking question, MaxDiff provides a better understanding of both the overall preference order of a set of items and the distance between them.
When you have between 7 and 200 attributes to test, aytm’s Advanced MaxDiff can help you determine relative preferences with ease. Next, we’ll take a look at how to use MaxDiff in your survey!
When to use Advanced MaxDiff
The Advanced MaxDiff test is a great way to compare many alternatives without overwhelming respondents by asking them to read and consider all items at once. It takes a list of your items to be compared and shows them in a balanced order to each respondent 3, 4, or 5 at a time.
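To make the "balanced order" idea concrete, here's a minimal Python sketch that assembles choice tasks so each item appears a similar number of times for a respondent. The item names and task count are hypothetical, and this greedy approach is only an illustration; aytm's actual adaptive algorithm (described later) is more sophisticated.

```python
import random

def build_choice_tasks(items, per_screen=4, n_tasks=5, seed=42):
    """Illustrative sketch: build subsets so every item appears a
    similar number of times across one respondent's tasks."""
    rng = random.Random(seed)
    appearances = {item: 0 for item in items}
    tasks = []
    for _ in range(n_tasks):
        # Favor the items shown least often so far, breaking ties randomly.
        pool = sorted(items, key=lambda i: (appearances[i], rng.random()))
        task = pool[:per_screen]
        rng.shuffle(task)  # randomize on-screen order
        for item in task:
            appearances[item] += 1
        tasks.append(task)
    return tasks

# Hypothetical list of seven features to test
features = [f"Feature {c}" for c in "ABCDEFG"]
for task in build_choice_tasks(features):
    print(task)
```

Because each task always draws the least-shown items first, no item can appear more than one time more often than any other across the exercise.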
Imagine you’re working on launching a new product. You’ll likely have a list of features, claims, and descriptions. Whether you’re using them for brand packaging or marketing, you want to find out what resonates with your consumers, what’s important to them, and what they expect of this product.
An Advanced MaxDiff experiment allows you to compile all of the possible features, claims, or descriptions your team has imagined and find the hierarchy and distance between each, so you can make smarter, data-driven decisions on how to develop and market your new product.
Adding Advanced MaxDiff to Your Survey
Incorporating an Advanced MaxDiff experiment into your survey is a simple drag and drop process.
Locate the Advanced MaxDiff icon and drag and drop it anywhere in your survey. Once added, you’ll only need to input the list of attributes you want to test and any directions you want to provide in the question text area.
We recommend including a short instruction for it, such as:
- Reorder – “Please rank the following items in the order of your preference: from most preferred on top to least preferred on the bottom.”
- Best/Worst – “Please select your best choice (thumbs up) and your worst choice (thumbs down).”
- Image Grid – “Please select your most preferred, then least preferred, image below.”
Once you have your list of attributes, add them to the experiment.
*Advanced aytm programming tip*
If your list of attributes is compiled somewhere like a document, simply copy the list and paste it into the first attribute space; the system will fill in the list for you. The number of attributes you have to test determines the total questions respondents will be asked for this experiment and how many total completes we recommend for statistical stability. As you add attributes, the system will update the total questions in the experiment under the last attribute in your list.
Advanced MaxDiff: Express vs HB (Hierarchical Bayesian) vs HB + TURF
We offer three options when using our Advanced MaxDiff research test: Express, Hierarchical Bayesian (HB), and HB + TURF.
When you add an Advanced MaxDiff experiment to your survey you will have an option to choose Express mode as shown in the example below. As you fill in your attributes the system will update the line at the bottom of the experiment with how many questions each respondent will see for this exercise.
This method is focused on collecting general aggregate information, without the intention to obtain individual-level estimates. In a typical setting, respondents would see 3-5 screens.
Express uses the aggregated Logit Regression model for analysis. This mode is suitable when interested in a total-level view only.
Hierarchical Bayesian or Advanced MaxDiff HB
When you add an Advanced MaxDiff experiment to your survey, click the dropdown outlined below to switch between modes, including HB and HB + TURF. As you fill in your attributes, the system will update the line at the bottom of the experiment with how many questions each respondent will see for this exercise.
This method is focused on collecting high-resolution individual-level data to be analyzed by the Hierarchical Bayesian model. In a typical setting, respondents would see 10-20 screens.
The core analysis of respondents’ preferences is performed with the help of the Hierarchical Bayesian Multinomial Logit model. The Bayesian model is estimated with a Hybrid Gibbs Sampler with a random Metropolis step MCMC. The number of burn-in iterations is determined automatically when there’s enough evidence for convergence.
The model considers the properties of other items presented in a task when the respondent makes a choice. The best/worst probabilities correspond to the logit transformation of the linear combination of utility scores of the packages in the task.
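As an illustration of that logit transformation, here's a minimal sketch (using made-up utility scores) of how best- and worst-choice probabilities could be computed for the items shown on one screen. This is a common MaxDiff formulation, not aytm's exact implementation:

```python
import math

def best_worst_probs(utilities):
    """Best-choice probability is the softmax (logit transformation) of
    the utilities shown in a task; worst-choice probability uses the
    negated utilities. Illustrative only."""
    exp_u = [math.exp(u) for u in utilities]
    exp_neg = [math.exp(-u) for u in utilities]
    best = [e / sum(exp_u) for e in exp_u]
    worst = [e / sum(exp_neg) for e in exp_neg]
    return best, worst

# Hypothetical utility scores for the 4 items on one screen
best, worst = best_worst_probs([1.2, 0.4, -0.3, -1.3])
print([round(p, 3) for p in best])
print([round(p, 3) for p in worst])
```

Note how the probabilities depend on all items in the task at once: swapping one competitor for a stronger one lowers every other item's chance of being picked as best.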
Respondents are analyzed individually: their preference scores are realizations of a pooled “average” opinion that follows a Normal distribution, while still reflecting their individual preferences. As a result, raw Logit coefficients are available for every respondent. This mode is ideal when you’re interested in examining MaxDiff results among subgroups.
HB + TURF
HB + TURF gives you the best of both worlds: all the benefits of an HB model plus an added TURF analysis. TURF stands for Total Unduplicated Reach and Frequency. The main objective is to provide detailed statistics on how having multiple items “enabled” at the same time affects their total overall appeal.
The appeal in many cases is used as a proxy for the market share, thus enabling various conclusions to be drawn on how certain sets of items/features/products/descriptions would perform in a real market environment.
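To make "total unduplicated reach" concrete, here's a small sketch with hypothetical appeal data: a respondent counts as reached if at least one item in the set appeals to them, and TURF searches for the set that maximizes that share.

```python
from itertools import combinations

def turf_reach(appeal, item_set):
    """Share of respondents reached by at least one item in the set."""
    return sum(any(r[i] for i in item_set) for r in appeal) / len(appeal)

# Hypothetical data: each row marks which items appeal to one respondent.
appeal = [
    {"A": True,  "B": False, "C": False},
    {"A": True,  "B": True,  "C": False},
    {"A": False, "B": False, "C": True},
    {"A": False, "B": False, "C": False},
]

# Find the pair of items with the highest unduplicated reach.
best = max(combinations("ABC", 2), key=lambda combo: turf_reach(appeal, combo))
print(best, turf_reach(appeal, best))
```

Here item B appeals only to a respondent already reached by A, so pairing A with the less individually popular C reaches more people than pairing A with B; that "unduplicated" overlap accounting is the core of TURF.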
Launching an Advanced MaxDiff Experiment
Respondents taking the survey will interact with a subset of the total attributes programmed in a survey. For Rank (Reorder) and Best/Worst modes you can choose to show respondents 3, 4, or 5 attributes per screen. The Image Grid mode will only ever show respondents 4 images at a time.
Based on each respondent’s choices across the various trials, utility or importance scores are derived for each attribute.
This experiment is an improvement on the classic symmetrical tables, allowing respondents to focus on choosing the winner and loser with minimal effort. This helps keep respondents engaged, reduce the dropout rate, and ensure the highest quality data for you.
While you’re building your survey, keep in mind how many questions the Advanced MaxDiff will require. For example, with seven attributes, Express adds three questions to your survey total, while HB adds six. So a survey programmed with 10 other questions plus that MaxDiff would total 13 questions with Express or 16 with HB.
How to Analyze Your MaxDiff Results
Now that you’ve successfully programmed and launched your Advanced MaxDiff experiment, you can view the results. When you navigate to your stats report page and scroll down to your Advanced MaxDiff experiment, you’ll first see a graphic and the menu to choose which analysis option you want to review.
The difference between Express and HB is how the data are analyzed on the back end, which is explained in more detail above. To view the results, you have the same three options whether you use Express, HB, or HB + TURF, as shown in the examples below.
- Preference Likelihood (4/screen) represents the likelihood that an item would be preferred over three other randomly selected items in the set and is appropriate when the MaxDiff shows four items per exposure.
- Average-based PL (50% baseline) represents the Preference Likelihood (PL) that an item would be preferred over one other randomly selected item in the set. A score above 50% indicates that an item is a better-than-average performer.
- Utility Scores are the raw regression coefficients estimated at the aggregate level. They are zero-centered so that 0 represents the average performance. The more positive an item’s utility, the more it is preferred by respondents and the more negative an item’s utility, the less it is preferred.
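To illustrate how the first two views relate to the utility scores, here's a sketch using hypothetical zero-centered utilities: Preference Likelihood (4/screen) averages the softmax probability of beating three randomly drawn competitors, while Average-based PL averages pairwise win probabilities against one competitor at a time. This is one standard way to derive such scores, shown as an assumption rather than aytm's exact computation.

```python
import math
from itertools import combinations

def pl_four_per_screen(utils, i):
    """Mean softmax probability that item i beats three other items,
    averaged over all possible trios of competitors."""
    others = [j for j in range(len(utils)) if j != i]
    probs = []
    for trio in combinations(others, 3):
        denom = math.exp(utils[i]) + sum(math.exp(utils[j]) for j in trio)
        probs.append(math.exp(utils[i]) / denom)
    return sum(probs) / len(probs)

def avg_based_pl(utils, i):
    """Mean probability that item i beats one other randomly selected
    item; 50% marks an average performer."""
    others = [j for j in range(len(utils)) if j != i]
    probs = [math.exp(utils[i]) / (math.exp(utils[i]) + math.exp(utils[j]))
             for j in others]
    return sum(probs) / len(probs)

# Hypothetical zero-centered aggregate utilities for five items
utils = [1.0, 0.5, 0.0, -0.5, -1.0]
for i in range(len(utils)):
    print(i, round(pl_four_per_screen(utils, i), 3),
          round(avg_based_pl(utils, i), 3))
```

With these symmetric utilities, the middle item (utility 0) lands at exactly 50% on the Average-based PL scale, matching the "better-than-average" interpretation of scores above 50%.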
MaxDiff HB exports individual utility scores, and HB + TURF provides a TURF simulator in addition to the utility score export.
HB (Hierarchical Bayesian)
When applying filters to a survey, Advanced MaxDiff HB questions will show aggregate statistics for the current subset of respondents.
HB + TURF
Learn more about the TURF simulator and how to export the results here.
The aytm Difference
Aytm’s Advanced MaxDiff test is more accurate than the classical MaxDiff test because it uses an adaptive, real-time randomization algorithm rather than relying on a predetermined map of the entire test. The algorithm works while the survey is in the field to provide the greatest possible efficiency of item distribution per quad and per respondent. We do all of the heavy lifting seamlessly in the background.
Want to incorporate Advanced MaxDiff in your next survey? Reach out to firstname.lastname@example.org for help.