Which Conjoint Method Should I Use?

We originally published an article with this title in the Fall 1996 issue of Sawtooth Solutions. With the release of Sawtooth Software's ICE (Individual Choice Estimation) Module, that article is now somewhat obsolete. A well-known barrier has been overcome: CBC users can now get individual-level utilities from choice data. It's paradoxical that this liberating breakthrough now makes it more difficult to choose between conjoint methods. The increased length of this article reflects not only ICE's contribution to the equation, but also the influence of a paper presented at the 1997 Sawtooth Software Conference by Joel Huber, entitled "What We Have Learned from 20 Years of Conjoint Research: When to Use Self-Explicated, Graded Pairs, Full Profiles or Choice Experiments."

Introduction

Conjoint analysis comes in a variety of forms. Sawtooth Software offers a suite of conjoint software packages: Adaptive Conjoint Analysis (ACA), Conjoint Value Analysis (CVA), and Choice-Based Conjoint (CBC) with its Individual Choice Estimation (ICE) and Latent Class Modules. It makes little sense to argue about which of these is the overall best approach. We have designed each package to bring unique advantages to different research situations.

Adaptive Conjoint Analysis (ACA)

The first version of ACA, released in 1985, was Sawtooth Software's first conjoint product. Since then, ACA has become the most popular conjoint method in both Europe and the US.

ACA's main advantage is its ability to measure more attributes than is possible with traditional full-profile conjoint. In ACA, respondents do not evaluate all attributes at the same time, which helps solve the problem of "information overload" that plagues many full-profile studies. We believe respondents cannot effectively process more than about six attributes at a time in a full-profile context. ACA can include up to 30 attributes, although typical ACA projects involve about 8 to 15 attributes. With six or fewer attributes, ACA's results are similar to those of the full-profile approach.

In terms of limitations, the foremost is that ACA must be computer-administered. The interview adapts to respondents' previous answers, which cannot be done with paper-and-pencil. Like most traditional conjoint approaches, ACA is a main-effects model. This means that utilities for attributes are measured in an "all else equal" context, without the inclusion of attribute interactions. ACA has a further limitation with respect to pricing studies: when price is included as just one of many variables, its importance is likely to be underestimated.

ACA is a hybrid approach, combining direct evaluations of attributes and levels with conjoint pairwise comparisons. The first part of the interview uses a self-explicated approach: respondents rank (or rate) attribute levels, and then assign a weight (importance) to each attribute. The self-explicated context puts emphasis on evaluating products in a systematic, feature-by-feature manner, rather than judging products as a whole or in a competitive context.

Using the information from the self-explicated section, ACA then presents trade-off questions. Two products are shown, and respondents indicate which is preferred, using a relative rating scale. The product combinations are tailored to each respondent, to ensure that each is relevant and meaningfully challenging. Each of the products is displayed in partial-profile, meaning that only a subset (usually two or three) of the attributes is shown for any given question.

Huber states that pairwise comparisons reflect the sort of purchase behavior wherein buyers compare products side-by-side. ACA does well for modeling high-involvement purchases, where respondents focus on each of a number of product attributes before making a carefully-considered decision. Low-involvement purchases, product categories described by only a few attributes, and pricing research studies are probably better handled with another method.

Conjoint Value Analysis (CVA)

CVA brings full-profile conjoint to the arsenal of Sawtooth Software's conjoint tools. Full-profile conjoint has been a mainstay of the conjoint community for decades now. We believe the full-profile approach is useful for measuring up to about six attributes. That number varies from project to project depending on the attribute text, the respondents' familiarity with the category, and whether attributes are shown as prototypes or pictures. CVA is designed for paper-and-pencil studies, whereas ACA must be administered via computer. CVA can also be used for computerized interviews when combined with the Ci3 System for Computer Interviewing.

CVA calculates a set of utilities for each individual, using traditional full-profile card-sort (either rated or ranked) or pairwise ratings. Up to 10 attributes with 15 levels each can be measured, as long as the total does not exceed 100 parameters.
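
To make this concrete, here is a minimal sketch (in Python) of how part-worth utilities can be recovered from one respondent's full-profile ratings using ordinary least squares. It illustrates the general idea only; it is not CVA's actual estimation routine, and the attributes, levels, and ratings shown are hypothetical.

    # Minimal sketch: part-worths from full-profile ratings via least squares.
    # Hypothetical design and data; not CVA's actual estimation routine.
    import numpy as np

    # Three two-level attributes, effects-coded (+1 = first level, -1 = second).
    # Columns: Brand (A/B), Speed (Fast/Slow), Price ($100/$150)
    profiles = np.array([
        [ 1,  1,  1],
        [ 1, -1, -1],
        [-1,  1, -1],
        [-1, -1,  1],
        [ 1,  1, -1],
        [-1, -1, -1],
        [-1,  1,  1],
        [ 1, -1,  1],
    ])
    ratings = np.array([9, 4, 5, 7, 7, 2, 8, 6])  # one respondent's 1-10 ratings

    # Add an intercept column and solve for the part-worths.
    X = np.column_stack([np.ones(len(profiles)), profiles])
    coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print("Brand, Speed, Price part-worths:", coefs[1:])

A positive coefficient indicates that the first level of an attribute is preferred; the magnitudes indicate the relative importance of each attribute for this respondent.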

Through the use of compound attributes, CVA can measure interactions between attributes such as brand and price. Compound attributes are created by including all combinations of levels from two or more attributes. For example, two attributes each with two levels can be combined into a single four-level attribute. However, interactions can only be measured in a limited sense with this approach. Interactions between attributes with more than 2 or 3 levels each are probably better measured using one of the aggregate approaches in CBC.
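
A minimal sketch of the compound-attribute idea, using hypothetical brand and price levels (any pair of attributes would do):

    # Minimal sketch: building a compound attribute from two simple attributes
    # so a brand-by-price interaction can be captured. Levels are hypothetical.
    from itertools import product

    brands = ["Brand A", "Brand B"]
    prices = ["$100", "$150"]

    # Two 2-level attributes become one 4-level compound attribute.
    compound_levels = [f"{b} at {p}" for b, p in product(brands, prices)]
    print(compound_levels)
    # ['Brand A at $100', 'Brand A at $150', 'Brand B at $100', 'Brand B at $150']

Because each compound level receives its own utility, the brand-by-price interaction is captured directly, at the cost of estimating more parameters.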

CVA can design pairwise conjoint questionnaires (like the ACA pairs described above) or single-concept (card-sort) designs. Showing one product at a time encourages respondents to evaluate products individually, rather than in direct comparison with a competitive set of products. It focuses more on probing the acceptability of an offering than on the differences between competitive products. If a comparative task is desired, CVA's pairwise approach may be used. Another alternative is to conduct a card-sort exercise: though respondents view one product per card, in the process of evaluating the deck they usually compare cards side-by-side and in sets.

Because respondents see the products in full profile (all attributes at once), they tend to use simplification strategies when faced with so much information to process. Respondents may key on two or three salient attributes and largely ignore the others. Huber points out that buyers in the real world may also simplify when facing complex decisions in certain categories, so simplification isn't always a bad thing.

In addition to traditional full-profile designs, CVA can attach prices to each attribute level, which is useful for determining price sensitivity for individually-priced components of a product bundle. This approach is realistic for modeling categories in which buyers actually see the price of each component of the product, such as restaurant meals, car insurance, or cable packages.

Choice-Based Conjoint (CBC)

One of the most exciting recent innovations in conjoint research is the introduction of Choice-Based Conjoint. CBC interviews closely mimic the purchase process for products in competitive contexts. Instead of rating or ranking product concepts, respondents are shown a set of products on the screen (in full profile) and asked to indicate which one they would purchase. As in the real world, respondents can decline to purchase in a CBC interview by choosing "None." If the aim of conjoint research is to predict product or service choices, it seems natural to use data resulting from choices.

Huber argues that choice tasks are more immediate and concrete than abstract rating or ranking tasks. They seem to ask respondents how they would choose now, given a set of potential offerings. Choice tasks show sets of products, and therefore mimic buying behavior in competitive contexts. Because choice-based questions show sets of products in full profile, they encourage even more respondent simplification than traditional full-profile questions. Important attributes receive even greater emphasis (importance), and less important factors receive less, relative to CVA or ACA.

CBC can measure up to six attributes with nine levels each (soon to be expanded to 8 attributes with 15 levels each with the release of CBC version 2). CBC can be administered by PC or via paper-and-pencil using the CBC Paper-And-Pencil Module. In contrast to either ACA or CVA, CBC results have traditionally been analyzed at the aggregate, or group level. But with the recent release of the ICE Module (Individual Choice Estimation), individual-level analysis is now accessible and practical. There are a number of ways to analyze choice results:

Aggregate Choice Analysis is useful for detecting and modeling subtle interactions, which may not always be revealed with individual-level models. Interactions can become important in many applications, such as pricing research, where it may be desirable to fit a separate price function for each brand. For most commercial applications, respondents cannot provide enough information with even ratings- or sorting-based approaches to measure interactions at the individual level. While these advantages seem to favor aggregate analysis from choice data, academics and practitioners have argued that consumers have unique preferences and idiosyncrasies, and that aggregate-level models which assume homogeneity cannot be as accurate as individual-level models. Aggregate CBC analysis also suffers from its IIA (Independence from Irrelevant Alternatives) assumption, often referred to as the Red Bus/Blue Bus problem. Very similar products in competitive scenarios can receive too much net share. IIA models also fail when there are differential cross-effects between brands.
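
The following small numeric sketch shows the IIA problem at work in a simple aggregate logit share calculation; the utilities are hypothetical:

    # Minimal sketch of the Red Bus/Blue Bus (IIA) problem in aggregate logit.
    # Utilities are hypothetical.
    import numpy as np

    def logit_shares(utils):
        expu = np.exp(np.array(utils, dtype=float))
        return expu / expu.sum()

    # Two distinct products with equal utility split the market 50/50.
    print(logit_shares([1.0, 1.0]))        # [0.5, 0.5]

    # Add a near-duplicate of the second product. The aggregate logit model
    # gives each alternative one third of the market, so the two nearly
    # identical products together capture two thirds rather than about half.
    print(logit_shares([1.0, 1.0, 1.0]))   # [0.333..., 0.333..., 0.333...]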

Latent Class Analysis addresses respondent heterogeneity in choice data. Instead of developing a single set of utilities to represent all respondents, Latent Class simultaneously detects relatively homogeneous respondent segments and calculates segment-level utilities. If the market is truly segmented, Lclass can reveal much about market structure (including group membership for respondents) and improve the predictability of aggregate choice models. Subtle interactions also can be modeled in Lclass, which seems to offer a compromise position, leveraging the benefits of aggregate estimation while recognizing market heterogeneity. In addition, Lclass can be a valuable pre-processing step for ICE estimation. Sawtooth Software offers the CBC Latent Class Module as an add-on to the base CBC system.
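
As a minimal sketch of how segment-level results can be used in simulation, the following weights segment logit shares by segment size; the segment utilities and sizes are hypothetical rather than output from the Lclass Module:

    # Minimal sketch: market shares from latent class results, computed as a
    # size-weighted average of segment-level logit shares. Values are hypothetical.
    import numpy as np

    def logit_shares(utils):
        expu = np.exp(np.array(utils, dtype=float))
        return expu / expu.sum()

    # Two segments with opposite preferences for two competing products.
    segment_utils = {"price-sensitive": [2.0, 0.0],
                     "brand-loyal":     [0.0, 2.0]}
    segment_sizes = {"price-sensitive": 0.6, "brand-loyal": 0.4}

    market_shares = sum(size * logit_shares(segment_utils[seg])
                        for seg, size in segment_sizes.items())
    print(market_shares)  # reflects heterogeneity a single aggregate model would average away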

ICE (Individual Choice Estimation) is a recent advance for calculating individual-level utilities from choice data. Over the past few years, Bayesian estimation techniques have shown promise for deriving utilities for individuals, but they require enormous amounts of computing time and are not accessible to most researchers. Other methods have used standard approaches such as Multinomial Logit, but could support only limited designs.

ICE computes much faster than the Bayesian approaches, and can estimate a reasonably large set of main-effect utilities for individuals. The general idea behind ICE was proposed by Rich Johnson at our 1997 Sawtooth Software Conference, in a paper entitled "Individual Utilities from Choice Data: A New Method." Johnson believes that the more computer-intensive Bayesian methods may very well prove to be the best overall approach once computers become fast enough. He expects ICE to be a good alternative for about the next five years.

While ICE seems to offer enormous benefits, it is limited to main-effects models. The aggregate approaches that accommodate interactions, however, suffer from IIA and do not recognize respondent heterogeneity. If interactions occur principally within individual preference structures (for example, person i's disutility for spending money depends on the brand), then explicitly modeling interaction terms using aggregate logit or Lclass may be necessary for accurate share predictions. It may be difficult to tell which approach is appropriate for your situation. In general, we believe the benefits of individual-level utilities make a compelling argument for ICE. We have seen ICE estimation outperform both Lclass and aggregate logit in predicting shares for holdout choices, even when there was very little heterogeneity in the data. If CBC's randomized designs are used, one can try all three approaches on the same data set and compare simulation results.

So Which Should I Use?

You should choose a method that adequately reflects how buyers make decisions in the actual marketplace. This includes not only the competitive context, but also the way in which products are described (text), displayed (multimedia or physical prototypes), and considered. Is the product a high-involvement category for which respondents deliberate carefully over all of the features, or should the conjoint task encourage simplification?

If you need to study many attributes, ACA is probably the preferred approach. If you need to include attribute interactions in your models, you should probably use CBC. In many cases, survey populations don't have access to PCs, and it may be too expensive to bring PCs to them or to bring them to the PCs. If your study must be administered by paper-and-pencil, consider using CVA or CBC with its paper-and-pencil module.

Many researchers include more than one conjoint method in their surveys. For example, some studies need to measure a dozen or more attributes, and also require brand-specific demand curves. ACA followed by CBC can solve this problem within a single questionnaire. ACA would include all the attributes, while brand, price, and perhaps another key performance variable would be studied using CBC. ACA provides the product design and feature importance model, while CBC provides price sensitivity estimates for each brand and a powerful pricing simulator.

For some projects, it may be difficult to decide on which method to use. With the introduction of ICE, the lines which have defined the distinct capabilities of conjoint methods have become blurred. If this ambiguity still vexes you, it is comforting to recognize that the methods, though different in their approach, tend to give similar results.

(This article is an excerpt from a technical paper of the same title which may be downloaded from the Technical Papers Library on our home page: www.sawtoothsoftware.com).