It has been common practice in CBC questionnaires to use minimal overlap when designing choice tasks. Minimal overlap simply means that we don’t repeat a level within a task unless we have to. For example, consider a CBC study with four brands (A, B, C, and D). With minimal overlap, we might design a choice task showing four product concepts, each featuring a different brand, so that no brand repeats within the task.
In terms of statistical efficiency for main effects (the utility of each level considered independently), such tasks are optimal. For this reason, our CBC software has used minimal overlap designs by default. Also, if each attribute has at most four levels, it has seemed natural to show just four products on the screen: we get full coverage of each attribute’s levels in every task, and it limits the amount of information respondents must evaluate at one time.
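The efficiency claim can be illustrated with a toy simulation (a hypothetical sketch using the standard MNL information matrix under null utilities, not Sawtooth Software’s actual design algorithm). For a single four-level brand attribute, a minimal overlap design (every brand once per task) achieves a lower main-effects D-error than a design that repeats brands at random:

```python
import numpy as np

rng = np.random.default_rng(0)
LEVELS, CONCEPTS, TASKS = 4, 4, 12  # hypothetical study dimensions

def effects_code(level, n_levels=LEVELS):
    # Effects coding: the last level is the negative sum of the others
    x = np.zeros(n_levels - 1)
    if level < n_levels - 1:
        x[level] = 1.0
    else:
        x[:] = -1.0
    return x

def d_error(design):
    # MNL information matrix under null utilities (equal choice shares)
    k = LEVELS - 1
    info = np.zeros((k, k))
    for task in design:
        X = np.array([effects_code(lvl) for lvl in task])
        p = np.full(CONCEPTS, 1.0 / CONCEPTS)
        Xc = X - p @ X                      # center concepts within the task
        info += Xc.T @ (p[:, None] * Xc)
    return np.linalg.det(info) ** (-1.0 / k)  # lower D-error = more efficient

# Minimal overlap: every brand appears exactly once per task
minimal = [rng.permutation(LEVELS) for _ in range(TASKS)]
# Overlapping design: brands drawn at random, repeats allowed
random_design = [rng.integers(0, LEVELS, CONCEPTS) for _ in range(TASKS)]

print(d_error(minimal), d_error(random_design))
```

Within each task the balanced (minimal overlap) allocation maximizes the spread of level codes around the task mean, which is why it wins on this criterion even though, as argued below, it can cost us information at the individual level.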
But minimal overlap’s allure of statistical efficiency, together with the desire not to overwhelm respondents with too many product concepts per task, has negative consequences that we have only recently begun to appreciate.
It turns out that using these economical, minimal overlap designs encourages more simplification behavior and superficial information processing than the original card-sort conjoint approach. To illustrate this point, consider an extreme case: Imagine a respondent who has a “must-have” requirement that the product must be Brand B. Perhaps she works at the Brand B company, and therefore is intensely loyal. In each choice task, there is only one possible product she can choose. What are the outcomes? The respondent has an easy time answering the questionnaire (she simply scans each task for Brand B). The fit statistic from the individual-level HB model is extremely high since her answers are so predictable. And, we obtain a perfect hit rate for holdout tasks. But, we haven’t learned anything about how she values the remaining attributes beyond brand. Yet, in a real product choice, there are multiple Brand B models for her to choose among that differ on performance and price. Our model might perform poorly in predicting her actual product choice.
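The forced-choice logic above can be sketched in a toy simulation (the price levels here are hypothetical and not from this article). Under minimal overlap on brand, the loyal respondent’s choice is fully determined by brand, so her answers are perfectly predictable while revealing nothing about her price sensitivity:

```python
import random

random.seed(1)

BRANDS = ["A", "B", "C", "D"]
PRICES = [100, 150, 200]  # hypothetical price levels

def minimal_overlap_task():
    # Minimal overlap on brand: each brand appears exactly once per task
    brands = random.sample(BRANDS, len(BRANDS))
    return [(brand, random.choice(PRICES)) for brand in brands]

def loyalist_choice(task):
    # A must-have Brand B respondent simply scans the task for Brand B
    return next(concept for concept in task if concept[0] == "B")

tasks = [minimal_overlap_task() for _ in range(20)]
choices = [loyalist_choice(task) for task in tasks]

# Her choices are perfectly predictable from brand alone ...
assert all(brand == "B" for brand, price in choices)
# ... but the prices she "chose" are just whatever prices happened to be
# attached to Brand B, so they reveal nothing about her price sensitivity.
print([price for brand, price in choices])
```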
Certainly, not all our respondents are so extreme. But recent evidence suggests that the behavior of perhaps a majority of respondents within CBC questionnaires can be explained by assuming they react to at most two or three attribute levels. To the degree that respondents establish a few must-have or must-avoid features, minimal overlap questionnaires are not very useful for developing individual-level insights much deeper than those top-most requirements.
Over the past 15 years of experience with Sawtooth Software’s CBC module, we’ve reported average times of around 12 to 15 seconds per task (once respondents are warmed up), and we’ve seen relatively high hit rates for minimal overlap holdout tasks. These CBC questionnaires have featured minimal overlap and generally few (three to five) concepts per choice task. We’re embarrassed that it has taken us this long to connect the dots and see the weaknesses in minimal overlap CBC designs.
That’s not to say that economical, minimal overlap questionnaires haven’t produced valid insights at the segment and market level. Even though they have missed opportunities to probe deeply into each respondent’s preference structure, when simplification strategies are heterogeneous across the sample, the population estimates can be relatively accurate, though not optimally so. And HB estimation has done a great deal to “fill in the blanks” for each respondent (such as for attributes of secondary importance) by borrowing information from the population to infer these parameters of lesser, but still significant, importance.
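The “borrowing” idea can be illustrated with a simple normal–normal shrinkage calculation (a textbook-style sketch, not Sawtooth Software’s actual HB model; all numbers are hypothetical):

```python
# Normal-normal shrinkage: a stylized version of how HB pulls an
# individual's estimate toward the population mean, weighting each
# source by its precision (1 / variance).
def shrunken_estimate(individual_mean, n_obs, obs_var, pop_mean, pop_var):
    prec_individual = n_obs / obs_var  # precision of the respondent's own data
    prec_population = 1.0 / pop_var    # precision of the population prior
    total = prec_individual + prec_population
    return (prec_individual * individual_mean + prec_population * pop_mean) / total

# A respondent whose answers carry little information about price
# (one noisy observation) is pulled strongly toward the population's
# price utility:
print(shrunken_estimate(individual_mean=0.0, n_obs=1, obs_var=4.0,
                        pop_mean=-1.0, pop_var=1.0))  # → -0.8
```

The less a respondent’s own choices tell us about an attribute, the more weight the population estimate receives, which is exactly how secondary attributes get reasonable (if not individually verified) utilities.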
Where next? What to do about it? One simple remedy is to show more product concepts per task. Imagine that, for the example at the beginning of this article, we had used six product concepts instead of just four. In approximately half of the tasks, our Brand B-loyal respondent would have had two Brand B products to consider, and we would learn more about how she trades off performance and price after her certain choice of Brand B. Another remedy is to use Balanced Overlap designs instead of Complete Enumeration or Shortcut strategies. Balanced Overlap allows a modest degree of level overlap in the design while sacrificing only a small amount of traditional design efficiency.
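The “approximately half” figure follows from simple counting: with six concepts and four brands, each brand appears at least once and two of the four brands are doubled, so any given brand is doubled in 2 out of 4 tasks on average. A quick simulation (a simplified allocation sketch, not Sawtooth Software’s actual algorithm) confirms it:

```python
import random

random.seed(42)

BRANDS = ["A", "B", "C", "D"]

def six_concept_task():
    # Six concepts, four brands: each brand appears at least once and two
    # randomly chosen brands appear twice
    brands = BRANDS + random.sample(BRANDS, 2)
    random.shuffle(brands)
    return brands

tasks = [six_concept_task() for _ in range(10_000)]
share_with_two_bs = sum(task.count("B") == 2 for task in tasks) / len(tasks)
print(share_with_two_bs)  # close to 0.5: about half the tasks double up Brand B
```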
Our recent R&D efforts in Adaptive CBC show promise for reducing the problems of minimal overlap designs. Adaptive surveys can quickly recognize that a respondent requires Brand B, after which all future tradeoffs involve only Brand B products. This allows the system to learn more about demanding respondents beyond their first few must-have or must-avoid features. Not surprisingly, the adaptive questionnaires are more challenging and take longer to answer, but we believe the results are probably more accurate and realistic. Currently, about 50 beta testers are putting the software through its paces and helping us gain more experience with this new and interesting approach. We plan to start selling Adaptive CBC software in Q2 of 2009.