Have you ever been asked to measure respondents' preferences for things such as brands, product features, job-related benefits, or product packaging? Have you ever been asked to prioritize a list of performance attributes or gauge the potential impact of different advertising claims? If so, you may wish to consider the class of trade-off techniques available in MaxDiff, a component of the Lighthouse Studio system.
MaxDiff is an approach for obtaining preference/importance scores for multiple items (brand preferences, brand images, product features, advertising claims, etc.). Although MaxDiff has much in common with conjoint analysis, it is easier to use (for the researcher, respondent, and end client) and applicable to a wider variety of research situations. (It is not a substitute for conjoint analysis, however, as conjoint offers unique benefits for studying products or services made up of complex features added together.)
With MaxDiff, respondents are shown a subset of the possible items in the exercise and are asked to indicate, within that subset, the best and worst items (or most and least important, etc.).
Respondents typically complete a dozen or more such sets, each containing a different subset of items. The combinations of items are carefully designed so that each item appears an equal number of times and each pair of items appears together an equal number of times. Each respondent typically sees each item two or more times across the MaxDiff sets. MaxDiff exercises typically estimate preference or importance scores for about 15 to 40 items, though advanced applications can accommodate hundreds of items.
Why use MaxDiff instead of standard rating scales? Research has shown that MaxDiff scores discriminate better both among items and among respondents. The MaxDiff question is simple to understand, so respondents from children to adults with a variety of educational and cultural backgrounds can provide reliable data. Since respondents make choices rather than expressing strength of preference on a numeric scale, there is no opportunity for scale use bias. This is an extremely valuable property for cross-cultural research studies.
MaxDiff makes it easy for researchers with only minimal exposure to statistics to conduct sophisticated research for the scaling of multiple items. The trade-off techniques used in MaxDiff are robust and easy to apply. The resulting item scores are also easy to interpret, as they are placed on a common 0-to-100 scale and sum to 100.
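One common way to put estimated scores on a positive scale that sums to 100 is to exponentiate the raw logit-scale utilities and normalize. This is a hedged sketch of that general idea, not necessarily the exact transform the software applies:

```python
import math

def rescale_to_100(raw_utilities):
    """Exponentiate logit-scale utilities and normalize so the
    resulting scores are all positive and sum to 100. One common
    rescaling approach; shown for illustration only."""
    exps = [math.exp(u) for u in raw_utilities]
    total = sum(exps)
    return [100 * e / total for e in exps]
```

The transform preserves the rank order of the items, and a score twice as large can be read as roughly twice as preferred on this ratio-like scale.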
Projects may be conducted over the internet, on devices not connected to the internet (CAPI interviewing), or via paper-and-pencil questionnaires. MaxDiff may be used for designing, fielding, and analyzing:
- MaxDiff (best-worst scaling) experiments
- First choices from subsets of three items, four items, etc. (no "worst" choice)
- Method of Paired Comparisons (MPC) experiments (choices from pairs)
Item scores are typically estimated for each individual using a hierarchical Bayes (HB) methodology. The HB tool is built right into the interface, and with a few clicks the estimation begins. The default settings are quite robust, so users with very little background in statistics can obtain good results. HB is a powerful approach for stabilizing scores for each individual from sparse choice data. However, it is computationally intensive, taking between 15 minutes and an hour for a typical MaxDiff dataset.
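A much simpler "counting" analysis, often used as a sanity check, illustrates how best/worst choices translate into scores: for each item, subtract the number of times it was chosen worst from the number of times it was chosen best, and divide by the number of times it was shown. This is explicitly not the HB estimation described above, just a rough first look at the same data.

```python
from collections import Counter

def counting_scores(tasks):
    """Counting analysis of best-worst data (a rough alternative to
    HB estimation): (times chosen best - times chosen worst) divided
    by times shown, per item.

    `tasks` is a list of (shown_items, best_item, worst_item) tuples,
    one per MaxDiff set a respondent completed.
    """
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item]
            for item in shown}
```

Scores fall between -1 (always chosen worst when shown) and +1 (always chosen best when shown), with 0 meaning the item was picked best and worst equally often.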
A built-in Latent Class capability is also offered, for discovering segments of respondents with similar needs/preferences.