Advanced MaxDiff – Maximum Difference Research Test

The Advanced MaxDiff test is a great way to compare many alternatives without overwhelming respondents by asking them to read and consider all items at once. It takes your list of items to be compared and shows them to each respondent in a balanced order, four at a time. Depending on the data resolution requirements, the model will ask respondents to complete a number of ranking tasks. When sufficient data are collected, we'll run advanced statistical analysis on the back end and show you the hierarchy of your alternatives, as well as the distance between each item. If Hierarchical Bayesian (HB) analysis was specified in the settings, the model will also provide individual-level estimates, making it possible to draw inferences on sub-sections of the population without a significant decrease in predictive power. Using this technology, you can find out which features or qualities of your product are most important to consumers, or rank a long list of slogans in the order of preference by your target audience. The Advanced MaxDiff test is quite sensitive to the number of completes, so we recommend ordering 400+ responses.

The Advanced MaxDiff test is more accurate than the classical MaxDiff test because it uses an adaptive real-time randomization algorithm that works while the survey is in field, providing the greatest possible efficiency of item distribution per quad and per respondent, rather than relying on a predetermined map of the entire test.

It is also much more user-friendly than the classical symmetrical tables used in the past.
Instead of asking respondents to find an item on a grid and mark a checkbox in the correct column, we present them with four items at a time and ask them to rank the items using our drag-and-drop interface. This conserves respondents' energy and focuses their attention on identifying the winner and loser of each quad with minimal effort.
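As a rough illustration of how ranked quads feed the analysis (the actual model uses adaptive statistical estimation, not simple counting), each completed quad can be reduced to a best pick and a worst pick and tallied:

```python
def bw_scores(quad_rankings):
    """Aggregate rank-ordered quads into simple best-worst counts.

    quad_rankings: one list per completed task, each a list of 4 items
    ordered from most preferred (index 0) to least preferred (index 3).
    Returns {item: best_count - worst_count} as a rough preference proxy.
    """
    scores = {}
    for quad in quad_rankings:
        for item in quad:
            scores.setdefault(item, 0)  # every shown item gets an entry
        scores[quad[0]] += 1    # top of the ranking counts as "best"
        scores[quad[-1]] -= 1   # bottom of the ranking counts as "worst"
    return scores

rankings = [
    ["A", "B", "C", "D"],   # respondent ranked A best, D worst
    ["A", "C", "E", "B"],
    ["B", "D", "E", "C"],
]
bw_scores(rankings)  # {'A': 2, 'B': 0, 'C': -1, 'D': -1, 'E': 0}
```

The hierarchy of alternatives shown on the stats page is derived from far richer modeling than this, but the best-worst intuition is the same.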

The Advanced MaxDiff comes in two varieties: Express and HB. If the Express option is active, the method focuses on collecting general aggregate information, without attempting to obtain individual-level estimates. In typical settings respondents would see 3-5 screens. While it is still possible to see results for a subset of respondents, please keep in mind that the model then covers only that subset and does not take the rest of the responses into account.

In HB mode, the method focuses on collecting high-resolution individual-level data to be analyzed by a Hierarchical Bayesian model. In typical settings respondents would see 10-20 screens. Besides the additional option to extract individual logistic coefficients, it also becomes possible to analyze a subset of respondents knowing that the results are more robust, since the model allows for individual-level estimation.

For more details on the methods, please refer to our knowledge base articles: Advanced MaxDiff Aggregate and Advanced MaxDiff HB.

While MaxDiff appears as a single question in the survey editor, it will take several questions from the respondent's point of view. Please pay attention to the note at the bottom of the question ("Experiment is using XQs") to keep an eye on the length of the survey. You may have to remove some other questions in order to run a larger MaxDiff experiment.

The order in which compared items are presented in each quad for each respondent cannot be anchored, since it's governed by our special adaptive efficiency algorithm.

Question text will be repeated for each quad. We recommend including a short instruction, such as "Please rank the following items in the order of your preference: from the most preferred on top, to the least preferred on the bottom."

We break the list of items into quads (groups of 4 presented at a time). This is the only way you can run MaxDiff on the platform in DIY mode. If it's important to present the alternatives in groups of 3 or 5 at a time, please reach out and we'll set it up for you on the back end.

The main question and each compared item can have an image associated with it. These images can be expanded to the full width of the survey widget, or appear as a thumbnail and pop up on mouse rollover as a reference.

Each field has a limited number of characters: 120 for questions, 90 for tested items. AYTM Prime members can use up to 240 characters for the question field.

A MaxDiff question can have a single skip logic destination. Use the rabbit icon to set it up.




See an example of results on a stats page
This question type doesn't work with the AYTM Personality Radar

Price Optimization Model (POM) – VanKonan

The Price Optimization Model is also known as VanKonan. It can handle a wide range of business models, from consumer packaged goods to SaaS businesses and service companies. Like other AYTM advanced research tests, VanKonan is designed to be so easy to set up and interpret that it doesn’t require any research training whatsoever. All you need is an overall understanding of your business’s objectives and market realities to set it in motion. Once you enter your total addressable market, estimated retail price, and cost of goods, VanKonan fills in all the remaining pieces of the puzzle to build a comprehensive and flexible model. Your survey will gather enough information to connect all the dots, analyze the dependencies, and output a sophisticated prediction of acceptable price ranges and optimal price points for maximizing revenue, profit, and frequency of sales. Results will be presented with auto-generated executive summary findings and interactive charts, allowing clients to run unlimited ‘what if’ scenarios after fielding. VanKonan also measures the probability and frequency of purchase at each of the key price points.

The VanKonan model consists of four cascading modules you may select:

  • Run Price Sensitivity only (Van Westendorp)
  • Add the Maximum Revenue Price Optimization module
  • Add the Maximum Sales Frequency Price Optimization module
  • Add the Maximum Profit Price Optimization module


The fourth option gives you an all-inclusive VanKonan package. Each module adds a bit to the cost of the study, allowing you to do as much or as little as you need and as fits your budget.


Package description:

  • Van Westendorp
  • Purchase probability
  • Purchase frequency

VanKonan is very sensitive to the number of survey completes, so we recommend ordering 650-1,000 responses; we can't launch it with fewer than 400 responses. We recommend using it as a second-stage price study, after you have figured out your target audience and total addressable market. You may want to follow up your VanKonan findings with an even more precise and expensive price methodology, such as qualitative research or a monadic price test.

If you ordered the smallest package which only includes price sensitivity, the Van Westendorp part will look like this when your survey is fulfilled.
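For readers curious about the mechanics, the optimal price point in a Van Westendorp analysis lies where the cumulative 'too cheap' and 'too expensive' curves cross. A minimal sketch, assuming each respondent supplies a 'too cheap' and a 'too expensive' price threshold (function name and grid resolution are illustrative, not our implementation):

```python
import numpy as np

def vw_optimal_price(too_cheap, too_expensive):
    """Approximate the Van Westendorp optimal price point: the price
    where the share calling it 'too cheap' equals the share calling
    it 'too expensive'."""
    too_cheap = np.asarray(too_cheap, dtype=float)
    too_expensive = np.asarray(too_expensive, dtype=float)
    grid = np.linspace(too_cheap.min(), too_expensive.max(), 501)
    # a grid price p is 'too cheap' for a respondent if p <= their threshold
    pct_cheap = np.array([(too_cheap >= p).mean() for p in grid])
    # ...and 'too expensive' if p >= their threshold
    pct_exp = np.array([(too_expensive <= p).mean() for p in grid])
    return grid[np.argmin(np.abs(pct_cheap - pct_exp))]

# hypothetical thresholds from four respondents
opp = vw_optimal_price([4, 5, 6, 7], [5, 6, 7, 8])  # crosses at 6.0
```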


If all VanKonan parts are ordered, you'll see a summary chart with two curves, one for revenue and one for profit. You will also notice a few price points: one predicts the break-even point, one predicts the price for attaining the highest revenue, and one predicts the price for attaining the highest profit. Two additional points without markers will be color coded and referenced below in the table as the "VW optimal price point" from the Van Westendorp model and the price you gave us as an estimate. Below the chart you'll find the key takeaway points, written by the AI we've developed.

The table will give you detailed information about each price point: the estimate of the buyers and corresponding percentage of your total addressable market, the average expected frequency of purchase in the selected timeframe, total sales volume, revenue, cost, as well as profit and loss. The profit cells for the best-yielding price points will be highlighted in green, while the cells resulting in loss will be highlighted in orange.
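The arithmetic behind each row of that table can be sketched as follows. The numbers here are hypothetical, and the real model also folds in the measured purchase probability at each price:

```python
def price_point_summary(price, buyers, frequency, unit_cost):
    """Illustrative arithmetic behind each price-point row of the table:
    sales volume, revenue, cost, and profit (a loss if negative)."""
    volume = buyers * frequency          # units sold in the timeframe
    revenue = volume * price
    cost = volume * unit_cost
    return {"volume": volume, "revenue": revenue,
            "cost": cost, "profit": revenue - cost}

# hypothetical point: 10,000 buyers purchasing twice at $12, with $7 COGS
row = price_point_summary(12.0, 10_000, 2, 7.0)
# {'volume': 20000, 'revenue': 240000.0, 'cost': 140000.0, 'profit': 100000.0}
```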




The Open Infographic button will produce an interactive and detailed infographic, describing the entire VanKonan model process step by step and telling the full story.

The order in which items appear for each respondent cannot be randomized and must follow a specific logical order.

The VanKonan model can have a single skip logic destination; use the rabbit icon to set it up.

See an example of results on a stats page
This question type doesn't work with the AYTM Personality Radar

Competitive Topography

Competitive Topography is another turnkey solution created to explore a number of entities, such as brands, that can be rated on a list of attributes. For example, to get an idea of the fast food industry in the US, you may want to rate McDonald's, Wendy's, Burger King, and others in their competitive set on attributes like food quality, price, healthiness, speed of service, etc. This methodology will give you an important understanding of consumers' perceptions of each brand, perceived similarities between them, each brand's most differentiating attributes, and a visual comparison of ratings among the restaurants in aggregate as well as individually by each attribute.

To accomplish this deep analysis, researchers used to run complicated, expensive, and time-consuming data modeling projects. At times these projects needed to be re-run if some of the parameters changed during the experiment. One of the biggest advantages of the AYTM Competitive Topography test is that it's incredibly easy to set up. It can be added just like any of our other question types, simply by dragging and dropping the corresponding icon from the sidebar or by adding it at the bottom of the survey. An existing question can also be converted into a Competitive Topography question. All you have to do is fill out the list of brands you want to test and the list of attributes. If you're not sure which items to add, or you have a very long list, it may be wise to first run a MaxDiff survey to narrow down a longer list of attributes to just the 5 to 7 most important ones. Alternatively, you can ask an unaided open-ended question and code the answers, arriving at a list of the most frequently referenced brands and attributes. If you already know what you want to test, you can proceed without this extra step.

The model works best with at least 4 entities and 4 attributes, and you can add up to 10 items in each list. Please note that we recommend ~7 items in each list in order to get the most informative visualizations. In the bottom right corner of the attributes list you will find a combobox with a preset library of common attributes, broken down into three groups: product attributes, service attributes, and general brand attributes.

As with most other question types, you have the ability to illustrate every field and randomize the order in which they will be presented. If you don't want certain options to be randomized, you can easily anchor them in place so they appear in the same order for each respondent.

Perceptual mapping based on Star Rating

You'll have to choose whether you want respondents to rate each brand with stars (which is the default mode on our platform) or sliders. If you choose to use a star rating, you'll be able to use 5, 7 or 9 stars for each attribute. Choosing sliders will give you more flexibility. You'll be able to choose from our library of pre-written Likert scales, edit the answers, write your own custom answers, and even adjust the scores from 1 to 99, which are provided automatically for you when an answer is chosen. By default we assume 10 points for one star OR the lowest answer on your Likert Scale, and we go all the way up to 50, 70, or 90 for a top rating. If you decide to edit the Likert scale OR adjust the scoring, please make sure you know exactly what you're doing, since it may drastically affect your model and data visualization.
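The default scoring described above is a simple linear mapping, which can be sketched as (the helper name is illustrative):

```python
def star_score(stars, scale=5):
    """Default mapping described above: 10 points per star, so the top
    rating on a 5-, 7-, or 9-star scale maps to 50, 70, or 90."""
    if not 1 <= stars <= scale:
        raise ValueError("rating out of range for this scale")
    return stars * 10

star_score(5)           # 50: top rating on the default 5-star scale
star_score(9, scale=9)  # 90: top rating on a 9-star scale
```

Keep this linearity in mind if you adjust the slider scores manually, since an uneven scale changes the distances the model works with.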

Perceptual mapping based on a Likert Scale

The last important decision here is to choose how to group these two lists. By default, we'll group them by entities - or brands, in our example. That means that each brand will be presented as a separate question, with attributes listed as sub-questions below. This grouping may be easier on respondents since it helps them to activate memories of all their experiences with a given brand or entity, enabling respondents to rate each brand or entity by all attributes you're testing.

Survey Preview. Grouped by entity

If you switch to 'group by attribute', each question will ask about one attribute at a time, such as "Food healthiness", and will list all compared brands on the page. This might place a higher cognitive load on respondents, since they'll have to access a lot more memories across brands in order to answer each question. Even so, in some cases this might be more valuable, since it'll help focus attention on comparing all brands across a given attribute. Please note that this experiment will take as many questions as there are items in the list you're presenting.

Survey Preview. Grouped by attribute

It's important to make sure the question is still appropriate for your case. We have four pre-written question texts, designed for star ratings and sliders in both modes - grouped by entities and by attributes. Our platform will suggest default text as you adjust the parameters of the experiment, as long as the field is empty or untouched. If you edited the field already, we won't mess with your text, but you'll have to understand how it works and carefully test it out. You may notice the internal piping here. If you group by entities, the word [entity] will be replaced by a brand as you roll over it, helping you preview how each of the questions will read in the survey. The same thing will happen with your attributes. If you accidentally remove the magic word in square brackets, don't panic. You can type it back in or click on the warning that will appear underneath. You can click on the question icon to read a quick blurb about how it works. Clicking on the text of the warning will insert the code at the end of your question. Make sure you move it to the appropriate part of your sentence. We recommend watching the tutorial video for this question type to see it in action.

You can combine the Competitive Topography test with any other questions in your survey. As with any other survey, please test your entire questionnaire carefully in Preview mode, and launch it as you would any other study on our platform.

To get the most reliable insights, we recommend that you run this research model with ~750 completes.

Perceptual mapping based on Multidimensional Scale

As data starts streaming in, we'll crunch the numbers in a multidimensional space to resolve all distances between entities and attributes, and will plot 3D data visualizations to help you explore and present the findings.

There are three distinct ways you can visualize the results: Multidimensional Scale, Topography View, and Quadrant Analysis. You can toggle among them at the top of this test.

The multidimensional scale is closest to the classic perceptual mapping visualization. Essentially, the rendering treats each attribute you added as a separate dimension, or axis, in a hard-to-imagine multidimensional space. It looks for a balance among all the forces and puts your tested entities into very specific positions in relation to each other within this space. Then our rendering runs up to 200 attempts to find the most accurate two-dimensional version of this multidimensional model it can create, to make it comprehensible. We list the R-square (or 'projection accuracy') as a percentage; it usually ranges from 95 to 100 percent.
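As a rough sketch of the underlying idea (our production algorithm is adaptive and runs many projection attempts, so this is an illustration rather than the exact implementation), classical Torgerson multidimensional scaling projects the brand-by-attribute mean ratings into 2D and measures how well the projection preserves the original distances:

```python
import numpy as np

def mds_2d(ratings):
    """Classical (Torgerson) MDS sketch: project brands, described by
    their mean attribute ratings, into 2D, and report how much of the
    original distance structure the projection preserves (an R^2
    analogue of the 'projection accuracy' shown on the stats page).
    ratings: (n_brands, n_attributes) matrix of mean scores."""
    X = np.asarray(ratings, dtype=float)
    # squared Euclidean distances between brands in attribute space
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = sq.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ sq @ J                      # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # ascending eigenvalues
    order = np.argsort(vals)[::-1][:2]         # two largest components
    coords = vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))
    d_orig = np.sqrt(sq)
    d_proj = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
    mask = ~np.eye(n, dtype=bool)
    r = np.corrcoef(d_orig[mask], d_proj[mask])[0, 1]
    return coords, r ** 2
```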

Let's take a closer look at the visualization. The first thing you'll notice is that the brands are depicted with color dots and have horizontal labels. Attributes are shown as axes, meeting in the center and labeled along each line. Some attributes might be so close to each other in the minds of survey takers that we'll have to blend them into a single line and list both labels one after another. In such cases, the labels will go in the same order as the blue dots on the axis.

Please note that some axes depicting attributes are in bold - these were the most differentiated attributes according to respondents. We can't automatically assume that these are the most important to the industry, or consumers - let's just say these had the most variation from entity to entity (brand to brand), and therefore we're very confident that if an entity/brand performs well or poorly on a boldface trait, it stands out considerably from the others in respondents' minds.

Next you may notice that some axes are shorter than others. Each of them ends with a blue dot, which we call the 'epicenter' of an attribute. If an attribute is short and close to the center, it means this attribute was similar among all your tested brands and wasn't associated strongly with any particular one.

Exploring Taco Bell on Multidimensional Scale

When analyzing this visualization, please pay close attention to the proximity of the brands to the attributes' EPICENTERS, not the axes. For example, in the illustration above, Taco Bell is equally close to both the 'Convenience/location' and 'Prices' axes, but it's much closer to the epicenter of the 'Prices' attribute, which is what matters. We can interpret this to mean that Taco Bell was closely associated with good pricing. We can also conclude that in consumers' perceptions Burger King and Taco Bell were seen as similar to each other, with strong associations of affordability, variety, speed, and convenience of locations. They were not specifically associated with 'food quality/taste' or 'food healthiness,' which are on the opposite side of the map. Subway, on the other hand, was different from all the other brands tested here. We can see that it came closest to the epicenter of food healthiness, which was one of the strongest differentiating attributes in our study. Arby's and Chick-fil-A were most closely affiliated with taste and cleanliness, and McDonald's with convenience of location and prices. In fact, you can read the most associated attributes for each brand from the table below; we'll list them in descending order, with the average score each brand received for the corresponding attribute.

Exploring Subway on Multidimensional Scale

Speaking of scores, the proximity of a brand to an attribute's epicenter can't always tell the full story about what forces resulted in a brand hovering at a given place on the competitive map. To find out how strongly each brand performed, simply click on the corresponding dot or brand label. If you click on Subway, for instance, you'll notice colored beams appear from the center. The length of a beam represents the average score the brand reached for that particular attribute. For a beam to reach the blue dot that marks an attribute's epicenter, ALL respondents would have to have given the maximum possible rating. In reality that almost never happens, but it helps you judge how close a brand came to dominating an attribute. As this visualization shows, Subway was rated very highly on both 'Food healthiness' and 'Convenience of location.' The reason it's not closer to the center of the map is that 'Food healthiness' was a stronger force, and there's enough pull from a few other axes to keep the brand in the top right corner.

The next thing you'll notice is the color. Subway was rated higher than the other brands on most of the attributes, which is why we present it in green. Please refer to the legend for the relative breakdown.

Exploring McDonald's on Multidimensional Scale

If we click on McDonald's, we'll see a very different picture: while it was in the top percentiles for 'Convenience/location' and 'Prices,' it was rated lower compared to other brands in 'Food healthiness' and 'Taste.'

You can export each view mode as an image in PNG, EPS, or PDF.

Topography View mode. Total scores for each entity

This is something that we invented here at AYTM. It's based on the same multidimensional scale model, but presented as an interactive, intuitive 3D landscape. By adding a third dimension we enable you to visualize the scores as the heights of a landscape while keeping the proximities between brands and attributes intact. The entities or brands are presented with color pins and horizontal labels, while the epicenters of attributes are marked with white flags and labeled vertically. You can rotate the model and explore it from any angle, and zoom in or out using the scroll wheel of your mouse or a scroll gesture on a trackpad. The color helps highlight different heights to better represent mean scores; refer to the legend on the bottom for the exact range of values. The higher the mountain and the closer to the red end of the spectrum, the higher the mean score an entity has received; the deeper the valley and the closer to dark blue, the lower the mean score. From this summary view we can easily see that Subway was far ahead of the other brands by total score, since it sits on top of a high mountain while the rest are on the foothills or in the valley. You may also see that the closest attribute to the Subway mountain was again "Food healthiness," while McDonald's was very close to the epicenter of the "Convenience/location" attribute.

You can look at your insights from the perspective of either brands or attributes, and it doesn't matter how you grouped your lists during survey collection. Entities or brands is the default setting. The combobox on the right allows you to control what variable we will visualize using the height of the terrain.

Switching from the total mean scores to any of the individual brands will show a different picture every time. For example, here's the terrain for McDonald's. It has a peak at 'Convenience of location,' indicating that this was perceived as McDonald's strongest trait, and a valley at 'Food healthiness,' indicating its weakest. All the other qualities are arranged at different heights in between. Please note that in the table below you'll see mean scores for each of the brand's attributes.

Exploring McDonald's on Topography View

Now if we switch to Subway, we see a very different picture: most of this chain's attributes were rated very highly, except for prices, which formed a small but deep valley in the center.

Exploring Subway on Topography

Another way to look at the data is by attributes. The default view shows a summary of all mean scores given to all brands across the different attributes. You can learn something additional here: most attributes were rated approximately the same, with a couple of small orange hills around cleanliness, taste, speed, and convenience, while food healthiness (surprise, surprise) was rated drastically lower for these fast food brands.

Topography View mode. Total scores for each attribute

As before, you can select any individual attribute from the list on the right and see which brand dominated that attribute. We already learned that Food healthiness is strongly associated with Subway, followed by Chick-fil-A and Wendy's. When we check out Prices, on the other hand, we can see which brands are perceived as most and least affordable. Even though Subway has a lower rating for prices in comparison to its ratings on other attributes, when compared to the other brands we tested, its prices are still rated very well. Check the exact scores in the table below to get precise readings.

Topography View mode. Prices

Two combo boxes below allow you to declutter the visualization, which is especially useful for the summary view modes when too many elements are competing for your attention. Hiding an entity or attribute label will not alter the underlying model, it'll simply remove the element from the screen so that you can export exactly what you need to illustrate and communicate the finding in your presentation.

You can export any view as an image, or as a vector graphic which can be scaled to any size without compromising the quality. Please note that when you export the survey into PowerPoint, only the current 3D view will be included. If you'd like to show more than one view, you'll need to manually export each image and paste it into your presentation.

Quadrant view

Last but not least: Quadrant View. By the way, if all you need is this view, you can save a lot of money by ordering the Quadrant Analysis question type instead of Competitive Topography. This view is much simpler to understand and has no underlying statistical analysis whatsoever. It allows you to assign any of the attributes to each of the axes, and the entities will be positioned accordingly on the grid. You can also use size and color to visualize two more attributes. For example, "Convenience/location" is set as the x-axis in the illustration above, "Food healthiness" as the y-axis, "food quality/taste" as size, and "Food menu options/variety" as color. This helps quickly identify the leaders on all four attributes.

It wouldn't be fun if we hadn't added an extra twist to this view mode, on top of what you'd normally expect from a quadrant analysis. When you click on a circle, you can see the actual distribution of answer combinations for the current x and y attributes. Why does it matter? Since the position of a brand is the mean of all collected ratings, sometimes it's unclear what distribution of answers produced that mean. For example, respondents could have been very polarized in their ratings, or they could all have consistently given an average rating. In both cases the mean score and location of the brand would be very similar, and you wouldn't know the underlying truth. To discover the exact number of people who gave a certain combination of ratings, hover over the grey dots on the grid. The circle in the bottom left corner, for example, represents one star for "Convenience" and one star for "food healthiness," and 36 people gave that combination of ratings. The largest grey circle seems to be in the center of the top row, and this is because it was the most popular combination: 148 people gave the chain 7 stars for "convenience" and 4 for "healthiness."

Exploring McDonald's on the Quadrant View
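Conceptually, the distribution behind each circle is just a tally of rating pairs. A minimal sketch with hypothetical data:

```python
from collections import Counter

def rating_combinations(x_ratings, y_ratings):
    """Count how many respondents gave each (x, y) rating pair,
    e.g. (convenience stars, healthiness stars) for one brand."""
    return Counter(zip(x_ratings, y_ratings))

# hypothetical per-respondent star ratings for a single brand
convenience = [7, 7, 1, 7, 4]
healthiness = [4, 4, 1, 4, 4]
counts = rating_combinations(convenience, healthiness)
counts[(7, 4)]  # 3 respondents gave this combination
```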

Another useful thing about quadrant view is that you can see mean ratings for up to 4 selected attributes at a time in the grid below. Click on the headers to sort in ascending or descending order.

Probably the most amazing thing about Competitive Topography and the other research tests you can run on the AYTM platform, besides their great interactive visualizations and ease of use, is that they're fully integrated into the stats page. This means that you can apply any combination of filters by demographics and/or traits and have the numbers re-crunched for you in almost real time. You can see, for example, how perception of these fast food restaurants varied between genders or among age groups. You could even select a subset of respondents, such as those who are most loyal to Chick-fil-A, and view their overall perspective on our competitive set of quick service restaurants.

We encourage you to play with the demo survey and watch the tutorial video above. Feel free to ping us with questions or to set up a personal demo.



Conjoint question type

Our conjoint experiment uses one of the most robust and sophisticated research methodologies (choice-based conjoint) to find the most desired combination of features for your future product, service or package out of the tens of thousands of possible permutations. You can also use it to optimize your offering for prominent clusters of your target audience.

Choice-based conjoint takes in all possible attributes and the options within each attribute, creates a dazzling number of unique combinations out of them, and asks respondents to choose one package out of several presented side by side on the screen. After going through several such screens, the model has enough information to learn about each attribute and option in comparison to the others. Once the survey receives enough responses to satisfy the model, you will be able to see the importance of each attribute, the strength of each possible package, and the incremental value each option adds to or subtracts from it, allowing you to build and fine-tune your perfect product, service, or entire product line.

Conjoint Express

Use Conjoint Express when you need to minimize your costs by asking respondents to respond to the minimum number of screens. Conjoint Express provides real-time analysis, and it will learn about the preferences of your entire sample group (or a subset of it if you apply filters), but it won’t be able to tell you anything at the individual level of each respondent.

Conjoint Segmentation

Conjoint Segmentation is a more expensive option, since it asks respondents to go through approximately twice as many screens to better learn their individual preferences. When a survey finishes fielding, it will take 5-15 minutes to crunch the numbers using the gold standard of the market research industry: Hierarchical Bayesian modeling. Conjoint Segmentation will arrive at results similar to Conjoint Express, but with much higher confidence. It will also look for clusters of respondents and automagically identify different personas, if such personas emerge from the data.

Setup is identical for both types of conjoint tests, so you can choose either Conjoint Express or Conjoint Segmentation any time, right up until you’re ready to launch your survey.

In the survey editor

AYTM conjoint tests can be added just like you would add any of our other question types—you can simply drag and drop it from the sidebar and add it anywhere you want in the survey. You can also convert an existing question into a conjoint question type. Once selected, simply fill out the list of attributes you want to test, and the list of options you’d like to test within each attribute. Then decide how many columns to show side by side on each screen of the conjoint test; fewer columns will result in more screens displayed.

You can upload photos to this question type.

The order in which items appear for each respondent can be randomized if global randomization is ON in the survey. Each specific item can be anchored to its position to make an exception from the global randomization rule.

You can also add an N/A option.

With the Pro survey authoring package or a paid AYTM membership, we give you enough space to enter up to seven attributes with seven options each, or fewer attributes with a larger number of options. As you approach the limit, we’ll show a warning on the right side of the page, alerting you to the number of remaining or extra combinations, so that you can appropriately manage your design. Please don’t hesitate to ping us if you need assistance in entering your options.
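To see why these limits matter, note that the number of unique packages a design spans is the product of the option counts across attributes, which grows very quickly:

```python
from math import prod

def design_size(options_per_attribute):
    """Total unique packages a conjoint design spans: the product of
    the option counts across attributes."""
    return prod(options_per_attribute)

design_size([7] * 7)    # 823543 combinations at the 7x7 limit
design_size([3, 4, 5])  # 60 possible packages
```

This is the combination count the editor's warning tracks as you approach the limit.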

If you would like to look at the design sheet our platform produces for you, you are more than welcome to download it and analyze it on your end. If you happen to have a design sheet created on another platform, you can upload it to your research test here.

We recommend running this research model with 750 to 1,500 completes. The smallest number of responses required to launch a Conjoint Express test is 400.

In the survey widget

Conjoint Express and Conjoint Segmentation come with a live simulator visualization built into the stats page. Conjoint Express will populate it once there is enough information to build a model. Please pay attention to the warnings that will alert you if the sample size is too small for drawing conclusions. Conjoint Express will update the findings every time new responses are available on the page.

Since Conjoint Segmentation data analysis takes a few minutes to process, it will be initiated automatically only when the survey is fully completed and out of field. During fielding, once at least 400 responses are collected, you’ll have an option to manually initiate the analysis cycle and see interim results; we recommend waiting until the full data set is available and analyzed.

By default you will see an average package identified by the model. Click the “Best” button to roll all columns up and show the best possible combination of the considered options. If some of the option names are truncated, you can temporarily hide a few neighboring columns to read the full name.

The importance level of every attribute identified by the model is visible in the table, and is expressed through the height of the columns in the visualization. The higher a column, the more important the attribute is in the model and the greater the impact its options will have on the desirability of the package. Sorting the table by importance will instantly update the visualization.
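A common way to derive attribute importance in conjoint analysis (the platform's exact method isn't documented here, so treat this as an illustrative convention) is the range of an attribute's part-worth utilities as a share of the summed ranges. The utilities below are hypothetical:

```python
def attribute_importance(utilities):
    """utilities: dict mapping attribute -> list of part-worth utilities.
    Returns dict mapping attribute -> relative importance in percent,
    using the range-based convention: range / sum of ranges * 100."""
    ranges = {attr: max(u) - min(u) for attr, u in utilities.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

# Hypothetical part-worth utilities per attribute
parts = {
    "price": [1.2, 0.1, -1.3],   # range 2.5 -> most important
    "color": [0.4, -0.4],        # range 0.8 -> least important
    "brand": [0.8, 0.0, -0.8],   # range 1.6
}
print(attribute_importance(parts))
```

Under this convention, an attribute matters more when its best and worst options are far apart, which matches the column-height intuition above.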

You can fine-tune the package at any time by substituting its options manually. Clicking an option will add or subtract perceived value from the package, and the model will tell you exactly how much of a trade-off you’re making. You can simulate the strength of thousands of combinations just by interacting with this visualization. Use your scroll wheel or trackpad to quickly test any option in any column; the visualization will adjust itself and show the new combination you selected.
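The trade-off arithmetic behind swapping options can be sketched simply: if package strength is the sum of the part-worth utilities of the chosen options (a standard conjoint assumption; the utilities and option names below are hypothetical), then swapping one option changes the total by exactly the difference between the two options' utilities.

```python
# Hypothetical part-worth utilities keyed by (attribute, option)
utilities = {
    ("price", "$9"): 1.2, ("price", "$19"): -0.3,
    ("color", "red"): 0.4, ("color", "blue"): -0.4,
}

def package_strength(package):
    """Sum of part-worth utilities of the chosen options."""
    return sum(utilities[(attr, opt)] for attr, opt in package.items())

base = {"price": "$9", "color": "red"}
swap = {"price": "$19", "color": "red"}
# Trade-off of swapping $9 -> $19 equals the utility gap of those options
print(package_strength(base) - package_strength(swap))
```

Because the model is additive, only the swapped attribute's utilities matter for the trade-off, which is why the simulator can recompute thousands of combinations instantly.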

When options have very similar probability impacts, we have to hide some of them to keep the simulator legible. To see them all, you have two options.

  • First, you can roll over the column and we’ll show as many options as will fit in the list. When you roll over an item in the list, a black callout with a number will be rendered next to the triangle, marking the actual location of the option on the scale, even though its label may have been pushed up or down by other items.
  • Second, you can expand the table view, either by clicking Expand on any attribute line or by clicking Expand All in the table header. Clicking an option makes the corresponding column above scroll up or down to auto-select it for you. You will see the incremental probability impact of every option in the table, as well as the overall package strength and composition.

You can export the current view of the emulator as an image in PNG, EPS, or PDF format. You can also get the ten most desirable packages as a slide in your PowerPoint report by selecting it in the export section of the sidebar.

You can apply any combination of filters by demographics and/or traits, and have the numbers re-crunched in almost real time.

Additional Conjoint Segmentation information

In Conjoint Segmentation, we automatically conduct sophisticated cluster analysis of your data, and our algorithm connects emerging subsets of the sample with other respondent information such as traits, as well as answers to other questions in the survey. This is a live customer-persona generation engine, which labels personas with a hypothetical name and photo to make it easier to distinguish and navigate among them. Our engine approximates the most prominent personas in your sample group and shows the package that is perfect for each of them.
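The platform's clustering algorithm isn't described here, but the general idea of grouping respondents by their utility patterns can be illustrated with a minimal k-means sketch. Everything below is hypothetical: the respondent utility vectors, the two-attribute setup, and the choice of k-means itself.

```python
def kmeans(points, centers, iters=10):
    """Minimal k-means: assign each point to its nearest center,
    then move each center to the mean of its group. Repeat."""
    groups = {}
    for _ in range(iters):
        groups = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in groups.items()]
    return centers, groups

# Hypothetical respondents as (price utility, brand utility) vectors:
# two price-sensitive respondents and two brand-driven ones
pts = [(2.0, 0.1), (1.8, 0.3), (0.2, 1.9), (0.1, 2.2)]
centers, groups = kmeans(pts, [(2.0, 0.0), (0.0, 2.0)])
```

Each resulting group would then be profiled against traits and other survey answers to produce the persona descriptions mentioned above.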

Please bear in mind that we’re not operating in terms of clear-cut filters. When we list the gender, age, and other traits under a persona, it doesn't mean that everyone in this cluster falls within this description. It tells us that these traits were more prevalent in this cluster, statistically speaking, and were best suited to describe the group. Another explanation or description of the persona may exist outside of the dataset, unavailable to our algorithm, so you may want to consider bringing everything you have into the survey. We’re happy to assist if you’re surveying your existing customers, for example, and would like to add your existing transactional background information into the experiment.

The icons on top let you toggle among the personas and the cumulative sample. You can hide and expand the persona description section to manage your screen space, and of course you can export the findings for each persona separately.

You can switch between Market Share estimates and raw coefficients (also known as Conjoint Utility scores), which are used to calculate the market share, probability impact, package strength, and so on. Market Share is a relative mode that helps you understand the implications of swapping any option and the projected performance of the package compared to an average package. Utility scores are an absolute mode that shows each option and how much “power” the model assigned to it based on responses.
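A common way to turn total package utilities into share estimates (the platform's actual formula isn't specified here, so this is only an illustrative convention) is a logit share-of-preference rule: exponentiate each package's utility and normalize. The package names and utilities below are hypothetical.

```python
from math import exp

def market_shares(package_utilities):
    """package_utilities: dict mapping package -> total utility.
    Returns dict mapping package -> share estimate, using the
    multinomial-logit share-of-preference rule (sums to 1)."""
    expu = {pkg: exp(u) for pkg, u in package_utilities.items()}
    total = sum(expu.values())
    return {pkg: e / total for pkg, e in expu.items()}

# Hypothetical total utilities, with 0.0 as an average reference package
shares = market_shares({"package A": 1.6, "package B": 0.1, "average": 0.0})
print(shares)
```

This captures the relationship described above: utilities are the absolute inputs, and market share is the relative output computed from them.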

On the stats page: package results

On the stats page: package results with personas


See an example of results on a stats page
This question type doesn't work with the AYTM Personality Radar