Which Conjoint Method Should I Use?


We originally published an article by this title in the Fall 1996 issue of Sawtooth Solutions. Interest in that paper, along with a steady flow of new developments in the conjoint analysis field, has led us to update this piece six times.

There are multiple conjoint analysis approaches. Depending on your project, one method might work better than another. Among other things, sample size, complexity of the attribute list, length of the survey, and mode of interviewing (device-based or paper-based) lead researchers to select one flavor of conjoint analysis in favor of another.

An interactive advisor on our website helps guide the choice of conjoint method: Interactive Advisor. This article provides greater depth on the issues involved in choosing a conjoint analysis approach.


Conjoint analysis has become one of the most widely used quantitative tools in marketing research. According to recent Sawtooth Software customer surveys, we estimate that between 10,000 and 13,000 conjoint studies are conducted each year by our customers. When used properly, it provides reliable and useful results. There are multiple conjoint methods. Just as a golfer doesn’t rely on a single club, the conjoint researcher should weigh each research situation and pick the appropriate tool.

We at Sawtooth Software have been producing a variety of conjoint analysis systems since 1985. The older systems involve rating product concepts on sliding scales (such as 1 to 9) or on a 100-point scale. Our newer systems ask respondents to choose products from a choice scenario or menu. Although many still use older ratings-based approaches and there is evidence they can work well when designed and executed correctly, the vast majority of researchers today favor the choice-based approaches.

The Classic Ratings-Based Systems

Paul Green and colleagues introduced the first method of conjoint analysis to the market research community in the early 1970s. It involved asking respondents to rate (or rank) a series of concept cards (where each card displayed a product concept consisting of multiple attributes). Respondents typically rated between a dozen and thirty cards, each described on up to about six attributes.

Watch (the late) Paul Green describe a very early conjoint analysis project for the Bissell company (carpet sweepers & cleaners).


At the time, Paul Green and colleagues felt respondents couldn’t deal with more than about six attributes without resorting to problematic simplification strategies. But perhaps the greater limitation was that increasing the number of attributes meant that even more cards had to be presented to respondents to obtain good results. At some point, respondents would burn out and stop giving good responses to ever-deeper decks of cards. This first conjoint technique was called “card-sort conjoint.” Sawtooth Software’s CVA system does this flavor of conjoint analysis, as well as an extension involving paired-comparison judgments, where respondents compare two cards at a time. The traditional ratings-based conjoint method is still used today, albeit infrequently. Our annual tracking survey of our customers found that CVA-type studies accounted for 3% of the conjoint analysis studies conducted last year.
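To make the estimation behind ratings-based conjoint concrete, here is a minimal Python sketch (not CVA’s actual algorithm) that recovers part-worth utilities from card ratings. Because the invented design below is balanced, each level’s part-worth can be computed simply as its mean rating relative to the grand mean; the attributes, levels, and ratings are all hypothetical:

```python
from statistics import mean

# Hypothetical card-sort data: each card pairs a brand with a price and
# carries a 1-9 rating. Levels and ratings are invented for illustration.
cards = [
    ("BrandA", "$10", 8), ("BrandA", "$20", 6), ("BrandA", "$30", 4),
    ("BrandB", "$10", 7), ("BrandB", "$20", 4), ("BrandB", "$30", 2),
    ("BrandC", "$10", 5), ("BrandC", "$20", 3), ("BrandC", "$30", 1),
]

grand_mean = mean(r for _, _, r in cards)

def partworth(attr_index, level):
    """Part-worth of a level: its mean rating relative to the grand mean.
    (A valid shortcut here because the design is balanced/orthogonal;
    a real study would use regression-based estimation.)"""
    ratings = [c[2] for c in cards if c[attr_index] == level]
    return mean(ratings) - grand_mean

utils = {lvl: partworth(0, lvl) for lvl in ("BrandA", "BrandB", "BrandC")}
utils.update({lvl: partworth(1, lvl) for lvl in ("$10", "$20", "$30")})
```

With this toy data, utilities fall in the intuitive order (BrandA preferred, lower prices preferred), and the part-worths within each attribute sum to zero.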

In 1985, Sawtooth Software released an adaptive ratings-based conjoint analysis software system called ACA (Adaptive Conjoint Analysis). ACA went on to become the most popular conjoint software tool and method throughout the 1990s. ACA’s main advantage was its ability to measure more attributes than was advisable with the earlier card-sort conjoint approach. With ACA, it was possible to study a dozen to two-dozen attributes, while still keeping the respondent engaged and providing good data. ACA accomplished this by having varying sections of the interview that adapted to respondents’ previous answers. Each section presented only one or a few attributes at a time so as not to overwhelm the respondent with too much information at once. The software led the respondent through a systematic investigation over all attributes, resulting in a full set of preference scores for the levels of interest (part-worth utilities) by the end of the interview.

In terms of limitations, the foremost was that ACA needed to be computer-administered. The interview adapts to respondents’ previous answers, which cannot be done via paper-and-pencil. Like most traditional conjoint approaches, ACA is a main-effects model. This means that part-worth utilities for attributes are measured in an “all else equal” context, without the inclusion of attribute interactions. This can be limiting for some pricing studies where it is sometimes important to estimate price sensitivity for each brand in the study. ACA also exhibited another limitation with respect to pricing studies: when price was included as just one of many variables, its importance was likely to be understated, and the degree of understatement increased as the number of attributes studied increased.

Some researchers continue to use ACA today: our recent tracking survey shows that ACA accounted for 2% of all conjoint studies conducted by our customers last year. But, these researchers tend to avoid pricing applications and also take care to implement the latest best practices for ACA research. For example, the self-explicated importance questions near the beginning of the interview have been problematic if not administered well. The ACA documentation and recent white papers from Sawtooth Software discuss methods to improve this potentially troublesome area. Despite the historical importance of ACA, newer techniques have generally proven to work better and be more popular in practice: CBC and ACBC (Adaptive CBC).

Choice-Based Conjoint (CBC)

Choice-Based Conjoint analysis started to become popular in the early 1990s and, since about 2000, has been the most widely used conjoint technique in the world (accounting for 79% of conjoint analysis studies conducted by our customers last year). CBC questions closely mimic the purchase process for products in competitive contexts. Instead of rating or ranking product concepts, respondents are shown a set of products on the screen and asked to indicate which one they would purchase:

If you were shopping for a credit card, and these were your only options, which would you choose?

No annual fee
14% interest rate
$1,000 credit limit

$40 annual fee
10% interest rate
$2,000 credit limit

$20 annual fee
18% interest rate
$5,000 credit limit

NONE: I wouldn't choose any of these.

This example shows just three product concepts and a “None.” As in the real world, respondents can decline to purchase in a CBC interview by choosing “None.”

We have posted an excellent interactive example of CBC for introducing the technique at: http://www.sawtoothsoftware.com/surveys/baseball/login.html . This example includes a 9-question CBC survey and displays counting scores for your choices (or for a group of people if you provide a groupID at the start of the survey).

If the aim of conjoint research is to predict product or service choices, it seems natural to use data resulting from choices. Many CBC projects (especially packaged goods research) will involve showing a dozen or more products on the screen, often graphically displayed as if they were on physical shelves of a store. We generally recommend, whenever it is possible and realistic, that researchers show more rather than fewer product concepts per choice task.

Despite the benefits of choice data, they contain less information than ratings per unit of respondent effort. After evaluating multiple product concepts, the respondent tells us which one is preferred. We do not learn whether it was strongly or just barely preferred to the others; nor do we learn the relative preference among the rejected alternatives.

Our CBC system can include up to 10 attributes with 15 levels each (unless using the Advanced Design Module, where up to 250 attributes with 254 levels per attribute are permitted), though we’d never recommend you challenge these limits. CBC can be administered via CAPI or Internet surveys, or via paper‑and‑pencil. CBC can be analyzed by pooling (aggregating) the choices across respondents via counting or aggregate logit. This is often a valuable place to begin as you start to analyze a CBC survey. For their final models, most CBC researchers estimate individual-level part-worth utility scores using hierarchical Bayes, which is a built-in feature of our CBC software. Some researchers also investigate underlying market segments with relatively homogeneous preferences via the available latent class analysis option.
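Counting analysis, the simple starting point mentioned above, just tallies how often each level is chosen when it appears. A toy sketch with fabricated choice data, where each concept is reduced to a single price level for brevity:

```python
from collections import defaultdict

# Fabricated CBC data: each task lists the concepts shown (represented
# here by just a price level) and the index of the chosen concept.
tasks = [
    (["$10", "$20", "$30"], 0),
    (["$20", "$10", "$30"], 1),
    (["$30", "$10", "$20"], 1),
    (["$10", "$30", "$20"], 0),
    (["$30", "$20", "$10"], 2),
]

shown = defaultdict(int)   # times each level appeared in a task
chosen = defaultdict(int)  # times a concept with that level was chosen

for concepts, pick in tasks:
    for i, level in enumerate(concepts):
        shown[level] += 1
        if i == pick:
            chosen[level] += 1

# Counting score: proportion of times chosen when shown.
counts = {lvl: chosen[lvl] / shown[lvl] for lvl in shown}
```

In this fabricated example the respondent always picks the lowest price, so the counting score for $10 is 1.0 and the others are 0.0; real data are far noisier, which is why counting is a diagnostic rather than a final model.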

Partial-Profile CBC

Many researchers who favor choice-based conjoint rather than ratings-based approaches have looked for ways to increase the number of attributes that can be measured effectively using CBC. One solution that gained some following over the last two decades is partial-profile CBC (an option within our CBC software). With partial-profile CBC, each choice question includes a subset of the total number of attributes being studied. These attributes are randomly rotated into the tasks, so across all tasks in the survey each respondent typically considers all attributes and levels.
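The rotation of attribute subsets into tasks can be sketched as follows. This is a simplified illustration, not the design algorithm in our CBC software, and the attribute names are invented:

```python
import random

# Hypothetical attribute list for a study too large for full-profile tasks.
attributes = ["Brand", "Price", "Warranty", "Color", "Weight",
              "Battery", "Screen", "Storage"]

def partial_profile_tasks(n_tasks, attrs_per_task, rng):
    """Draw a random subset of attributes for each choice task.
    Across enough tasks, each respondent typically sees every
    attribute, though any single task shows only a few."""
    return [rng.sample(attributes, attrs_per_task) for _ in range(n_tasks)]

rng = random.Random(7)  # fixed seed so the design is reproducible
tasks = partial_profile_tasks(n_tasks=12, attrs_per_task=3, rng=rng)
```

Each of the 12 tasks shows 3 of the 8 attributes; concepts within a task would then vary only on those 3 attributes.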

The problem with partial-profile CBC is that the data are spread quite thin, because each task has many attribute omissions, and the response is still the less informative (though more natural) 0/1 choice. As a result, partial-profile CBC often requires larger sample sizes to stabilize results, and individual-level estimation under HB doesn’t always produce stable individual-level part-worths. Despite these shortcomings, some researchers who used to use ACA for studying many attributes shifted to partial-profile choice. The individual-level parameters have less stability than with ACA, but if the main goal is achieving accurate market simulations (and large enough samples are used), some researchers are willing to give up the individual-level stability.
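The market simulations referred to above typically apply a share-of-preference (logit) rule to each respondent’s summed part-worths and then average across respondents. A toy sketch with invented product utilities:

```python
from math import exp

# Invented total utilities for three competing products, for two respondents.
respondents = [
    {"ProductA": 2.0, "ProductB": 1.0, "ProductC": 0.0},
    {"ProductA": 0.5, "ProductB": 1.5, "ProductC": 1.0},
]

def logit_shares(utilities):
    """Share of preference: exponentiated utility over the sum."""
    expu = {p: exp(u) for p, u in utilities.items()}
    total = sum(expu.values())
    return {p: v / total for p, v in expu.items()}

# Average the individual-level shares to get market-level shares.
products = respondents[0].keys()
market = {p: sum(logit_shares(r)[p] for r in respondents) / len(respondents)
          for p in products}
```

Averaging individual-level shares (rather than pooling utilities) is what makes stable individual-level part-worths valuable for simulation work.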

Lately, we’ve come to realize that partial-profile CBC studies may be subject to a similar price bias as ACA (though not as pronounced). Recent split-sample studies presented at the Sawtooth Software conferences have shown that price tends to carry less weight, relative to the other attributes, when estimated under partial-profile CBC rather than full-profile. Furthermore, partial-profile methods assume that respondents can ignore omitted attributes and base their choice solely on the partial information presented in each task. If respondents cannot, then this biases the final part-worth utilities. For this and other reasons, most researchers and academics favor full-profile conjoint techniques that display all attributes being studied within each choice task.

Adaptive CBC (ACBC)

Choice-based rather than ratings-based conjoint methods have become dominant in our industry. Yet standard CBC questionnaires can seem tedious, repetitive, and lacking in relevance to respondents. The same-looking choice tasks repeat and repeat, and the products shown often seem all over the map, not very close to what the respondent actually wants.

Recently, Sawtooth Software developed a new approach called Adaptive CBC (ACBC), leveraging aspects of adaptive conjoint analysis (ACA) and CBC. According to our tracking survey, 13% of conjoint analysis studies conducted by our customers employed ACBC. ACBC first asks respondents to identify the product closest to their ideal using a configurator (Build Your Own—BYO) question. The BYO task also serves as an excellent training exercise, acquainting respondents with the attributes and levels being studied. Next, the software typically builds a couple dozen product concepts for the respondent to consider, all quite similar (near neighbors) to the BYO product. Respondents indicate which of those they would consider. Considered products are taken forward to a choice tournament to identify the overall best concept, where the choice tournament tasks look very much like standard CBC tasks.
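The near-neighbor step can be sketched as follows. This is a simplified stand-in for ACBC’s actual design rules, with invented attributes and levels:

```python
import random

# Hypothetical attribute levels and a respondent's BYO (ideal) product.
levels = {
    "Brand": ["A", "B", "C"],
    "Price": ["$10", "$20", "$30"],
    "Warranty": ["1 yr", "2 yr", "3 yr"],
}
byo = {"Brand": "A", "Price": "$10", "Warranty": "3 yr"}

def near_neighbors(byo, levels, n_concepts, n_changes, rng):
    """Build concepts that differ from the BYO product on only a few
    randomly chosen attributes, so every concept stays a 'near
    neighbor' of what the respondent said they wanted."""
    concepts = []
    for _ in range(n_concepts):
        concept = dict(byo)
        for attr in rng.sample(list(levels), n_changes):
            alternatives = [l for l in levels[attr] if l != byo[attr]]
            concept[attr] = rng.choice(alternatives)
        concepts.append(concept)
    return concepts

rng = random.Random(1)
concepts = near_neighbors(byo, levels, n_concepts=24, n_changes=1, rng=rng)
```

Each generated concept here differs from the BYO product on exactly one attribute; the real system varies the number of changes and applies additional design constraints.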

Recent evidence suggests that respondents find the ACBC interview more engaging and realistic, even though the interview generally takes longer than CBC to complete. But, sample size requirements are smaller than standard CBC, because more information is captured from each individual. More information at the individual level also leads to better segmentation work. Early evidence also suggests validity (accuracy of predicting actual sales) on par or slightly better than CBC. Furthermore, ACBC interviews directly capture what percent of respondents find each attribute level to be “must have” or “unacceptable.”

ACBC is generally useful when your project involves about five or more attributes. Projects involving fewer attributes (especially brand + package + price) probably work better using the standard CBC method.

Menu-Based Choice (MBC)

There are many things that are bought today from menus, where buyers select from one to many options to assemble the final product to purchase. Restaurant menus are a classic example. Computers, cars, insurance policies, and cable/internet/phone service are others. Menu-based studies can investigate complex issues such as mixed bundling, where buyers can purchase pre-configured bundles at a discount or buy individual components a la carte.

If you face a situation where buyers typically face a menu instead of a single choice among pre-defined product configurations, then your conjoint questionnaire should also mimic that buying process. Trying to force the study into the discrete choice format of CBC would probably be counterproductive. The context of menu choice is different from CBC, leading to different utility effects and predictions of buyer behavior.
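One way to see how menu choice differs from discrete choice is in the modeling: rather than one multinomial choice per task, each menu item can be treated as its own selection. A toy sketch using independent binary logits (the items and utilities are invented; real MBC models typically also capture cross-effects among items, such as one item’s price shifting demand for another):

```python
from math import exp

# Invented net utilities (utility of selecting minus not selecting)
# for each a-la-carte menu item at its current price.
item_utils = {"Internet": 1.2, "TV": -0.4, "Phone": 0.3}

def pick_probability(u):
    """Binary logit: probability the item is added to the order."""
    return exp(u) / (1.0 + exp(u))

probs = {item: pick_probability(u) for item, u in item_utils.items()}

# Expected number of items selected from the menu.
expected_items = sum(probs.values())
```

A positive net utility means the item is selected more often than not (Internet here), a negative one the reverse (TV); the expected order size is just the sum of the selection probabilities.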

Sawtooth Software offers an MBC (Menu-Based Choice) analysis package. It is the most flexible and advanced conjoint analysis software we’ve produced. MBC studies accounted for 3% of the conjoint analysis studies conducted by our customers last year. MBC studies can be quite complex to design, program, and analyze properly. They also often require larger sample sizes than typical CBC studies. This is the realm of the expert conjoint analysis researcher who has significant depth in design of experiments and econometric modeling. Budget much more time for the analysis phase than the other conjoint methods.

Relative Conjoint Method Usage
(Sawtooth Software 2014 Customer Survey)
CBC (Choice-Based Conjoint) 79%
ACBC (Adaptive Choice-Based Conjoint) 13%
MBC (Menu-Based Choice) 3%
CVA (Traditional Ratings-Based Conjoint) 3%
ACA (Adaptive Conjoint Analysis) 2%

So Which Should I Use?

You should choose a method that adequately reflects how buyers make decisions in the actual marketplace. This includes not only the competitive context, but the way in which products are described (text), displayed (multi-media or physical prototypes), and purchased (single choice or menu). Although ratings-based methods (CVA and ACA) were popular prior to 2000, the vast majority of research conducted today uses choice-based methods. It is difficult to imagine situations today where we would use a ratings-based conjoint method rather than choice-based methods.

Key decision areas and how they affect choice of conjoint method are as follows:

  • Number of Attributes. If you need to study many attributes (especially eight or more), ACA historically was considered a solid approach. More recently, ACBC seems more effective—especially for projects involving price as an attribute. Three or fewer attributes would favor CBC.
  • Mode of Interviewing. In many cases, survey populations don’t have access to computers. If your study must be administered paper-and-pencil, first consider using CBC, with CVA also being an option under conditions of very small sample size (see below). Many respondents today elect to take surveys on small devices, even smartphones with 4-inch displays. Although it might seem that complex conjoint surveys completed on smartphones would yield poor results, recent results from two independent research organizations show that the quality of conjoint surveys completed on a smartphone is just as good as on large monitors (desktops and laptops); see the 2013 Sawtooth Software Conference Proceedings, Diener et al. (pp 55-68) and White (pp 69-82), available at http://www.sawtoothsoftware.com/downloadPDF.php?file=2013Proceedings.pdf. We should emphasize that these tests looked at results for respondents who self-selected to complete the surveys on smartphones (presumably because they were comfortable using their smartphone as a web browser), not those who were assigned to complete the survey on a smartphone. The researchers also implemented best practices for displaying conjoint tasks on small devices.
  • Sample Size. If you are dealing with relatively small sample sizes (especially fewer than 100 respondents), you should be cautious about using CBC, unless respondents are able to answer more than the usual number of choice tasks. ACBC and the older ratings-based approaches (such as ACA and CVA) are able to stabilize estimates using relatively smaller samples than CBC. If interviewing must be done on paper, and very small sample sizes are the norm (such as 30 or fewer), you should consider CVA.
  • Interview Time. If you only have a few minutes to use in conjoint questions, CBC is a good alternative, though you may need to compensate for the limited information from each individual by sharply increasing the sample size. With about eight or more minutes available, ACBC is feasible.
  • Pricing Research. If studying price, CBC and ACBC are generally preferred.
  • Menus. If the product you are studying is purchased via a multi-select menu, then MBC is the appropriate technique (assuming large sample sizes, larger budget for analysis, and the experienced conjoint researcher).

Contact Us

If you have any questions about conjoint analysis or choosing the appropriate technique, please contact Sawtooth Software by phoning +1 801 477 4700.
