Sawtooth Software: The Survey Software of Choice

Reporting Preferences for Attribute Levels in Conjoint Analysis

There are myriad ways to present the basic results of conjoint/choice analysis. This short article discusses the basics and offers a few examples. We'll assume the reader already understands the essentials of estimating part-worth utilities and the basic rules for interpreting results. Please see the following two introductory articles, available on our website, for additional background information:

  • “Interpreting Conjoint Analysis Data”
  • “Analysis of Traditional Conjoint Using Excel: An Introductory Example”

Part-Worth Utility Scores

Sawtooth Software's market simulation tool (SMRT) includes an example dataset (the TV dataset) that we often refer to in our documentation. The data were collected in 1997 regarding features for then-available midrange televisions. When you open this dataset, SMRT automatically reports average “zero-centered part-worth utilities” in the report window. The default display looks something like this:

Within each attribute, the preference (utility) values sum to 0. This has been a long-standing convention among academics and practitioners. It reflects the fact that the origin of the utility scale for each attribute is unknown, so the utility of a level of one attribute cannot be directly compared to the utility of a level from another attribute. Given the relatively large scaling of the values, the default two decimal places of precision are unnecessary if presenting these data to an audience.

Researchers find this display useful, but zero-centered utilities are often challenging to present to non-researchers, who tend to be disturbed by negative utility values. To avoid this issue, some researchers simply shift the utilities by a constant within each attribute so that the worst level of each attribute is equal to zero. Rescaled in this way, the average part-worth utilities would look like this:
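The two rescalings described above are simple arithmetic, sketched below in Python. The raw utility values are hypothetical, for illustration only — they are not the actual TV-study estimates.

```python
# Hypothetical raw part-worth utilities (not the actual TV-study data).
raw = {
    "Brand": {"JVC": 0.10, "RCA": 0.25, "Sony": 0.55},
    "Price": {"$300": 0.90, "$350": 0.60, "$400": 0.30, "$450": -0.10},
}

def zero_center(levels):
    """Shift utilities within an attribute so they sum to 0."""
    mean = sum(levels.values()) / len(levels)
    return {lvl: u - mean for lvl, u in levels.items()}

def worst_at_zero(levels):
    """Shift utilities within an attribute so the worst level is 0."""
    worst = min(levels.values())
    return {lvl: u - worst for lvl, u in levels.items()}

centered = {attr: zero_center(lvls) for attr, lvls in raw.items()}
shifted = {attr: worst_at_zero(lvls) for attr, lvls in raw.items()}

for attr in raw:
    assert abs(sum(centered[attr].values())) < 1e-9  # sums to zero
    assert min(shifted[attr].values()) == 0          # worst level is zero
```

Note that both transformations only add a constant within each attribute, so differences between levels are preserved exactly.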

And, it's quite common to portray the results graphically, such as the following display:

If performing segmentation analysis, multiple series could be represented on the chart, reflecting the relative utilities for different segments of the population.

While it is quite common to present such a display, the main problem is that non-technical people may start to draw inappropriate conclusions. Since the worst level of each attribute is now anchored at zero, it is tempting to conclude that Sony (59 points) is more than twice as preferred as RCA (27 points). Part-worth utility data are interval scaled (rather than ratio scaled) and therefore do not support such ratio comparisons. With the original zero-centered data, it is obvious that Sony (utility of 30.03) isn't twice as preferred as RCA (utility of -1.47). But, after shifting the data so that each attribute's worst level has a utility of zero, we've invited inappropriate ratio comparisons.
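A quick numeric check illustrates why interval-scaled data resist ratio comparisons. The Sony (30.03) and RCA (-1.47) zero-centered values are from the text; the shift constant is assumed here for illustration.

```python
# Interval-scaled data: adding a constant preserves differences
# but not ratios. The shift constant below is an assumed value
# (the magnitude of the worst brand's utility), for illustration.
sony, rca = 30.03, -1.47
shift = 28.97

# The difference between brands is unchanged by the shift...
assert abs(((sony + shift) - (rca + shift)) - (sony - rca)) < 1e-9
# ...but the ratio is not, so any ratio statement depends on an
# arbitrary choice of origin.
print((sony + shift) / (rca + shift))  # roughly 2, after shifting
print(sony / rca)                      # a meaningless negative ratio
```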

Some researchers point out that stopping managers from interpreting utility scores in a ratio sense is a minor victory: managers will tend to do it anyway, and the consequences are not all that terrible. There is probably some truth to that fatalistic viewpoint, but fortunately there is another approach that actually supports ratio comparisons.

Generic Sensitivity Analysis (no specific assumed competition)

Market simulations are the favored method for communicating strategic findings to managers from conjoint analysis. They are easy to interpret, since the results are scaled from 0 to 100. And, unlike part-worth utilities, simulation results (shares of preference) are assumed to have ratio scale properties (it's legitimate to claim that a 40 is twice as much as a 20, etc.). Market simulations offer a way to report preference scores for each level by way of sensitivity analysis.

The sensitivity analysis approach is based on the notion of how much we can improve (or make worse) a product's overall preference by changing its attribute levels one-at-a-time, while holding all other attributes at constant base case levels. We prefer to conduct sensitivity analysis for a test product versus relevant competition (as shown in the final section of this article), but if you cannot come up with a reasonable definition of competitive products for your study, you may assume no specific competition.

Generic Sensitivity Analysis Steps:

  • Use “Purchase Likelihood” model in the market simulator. (When using CBC, where you haven't asked a purchase likelihood question, the math is equivalent to a Share of Preference simulation that assumes the product concept is being compared to a single competitor of average preference.)
  • Specify an “average” base case product in the market simulator. We suggest using the “middle” level of preference for each attribute. For non-ordered attributes (like brand or color), choose the level closest to the average score for that attribute. For binary attributes (two levels, such as “channel blockout/no channel blockout”) choose level 1.5 (the interpolated value half-way between the two levels). For quantitative attributes (like price), choose the middle price. If there is an even number of levels, choose the interpolated level half-way between the two middle levels.
  • Run the simulation in Sensitivity Mode. Each level of each attribute will be systematically tested, where all other levels are held at the base case levels.
  • Chart the results in Excel, such as shown below.
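The “level 1.5” interpolation mentioned in the steps above can be sketched as simple linear interpolation between the utilities of adjacent levels. The utility values below are hypothetical.

```python
# Sketch of fractional-level interpolation: the utility of
# "level 1.5" is half-way between the utilities of levels 1 and 2.
def interpolate(utils, level):
    """utils: list of level utilities (levels are 1-indexed);
    level: a possibly fractional level number, e.g. 1.5."""
    lo = int(level)
    frac = level - lo
    if frac == 0:
        return utils[lo - 1]
    return (1 - frac) * utils[lo - 1] + frac * utils[lo]

# Hypothetical utilities for a binary attribute such as channel blockout.
blockout = [-0.40, 0.40]
print(interpolate(blockout, 1.5))  # half-way between the two levels: 0.0
```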

The base case product for this sensitivity run was:

  • Brand: RCA (the brand closest to “average” within this attribute)
  • Screen Size: 26” screen (middle level)
  • Sound: Stereo sound
  • Channel Blockout: Level 1.5 (interpolated value half-way between the 2 levels)
  • Picture-in-Picture: Level 1.5 (interpolated value half-way between the 2 levels)
  • Price: $375 (level 2.5, the interpolated value between $350 and $400)

The first simulation within the sensitivity run computes the purchase likelihood of the JVC brand when combined with all other levels in the base case specification. We record that purchase likelihood result, then repeat this process for all levels in the study.
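The loop described above can be sketched in Python. The utilities and level names below are hypothetical, and the purchase likelihood is modeled as the logistic transform of total utility — an assumption about the simulator's math, not SMRT's actual implementation. For simplicity this sketch holds the base case at actual levels rather than interpolated ones.

```python
import math

# Hypothetical part-worth utilities (not the actual TV-study data).
utils = {
    "Brand": {"JVC": -0.30, "RCA": 0.05, "Sony": 0.25},
    "Screen": {'25"': -0.20, '26"': 0.00, '27"': 0.20},
    "Price": {"$350": 0.30, "$375": 0.00, "$400": -0.30},
}
base = {"Brand": "RCA", "Screen": '26"', "Price": "$375"}

def purchase_likelihood(product):
    """Assumed logistic model: 100 * e^u / (1 + e^u) of total utility."""
    total = sum(utils[attr][lvl] for attr, lvl in product.items())
    return 100 * math.exp(total) / (1 + math.exp(total))

# Sensitivity run: swap in each level of each attribute one-at-a-time,
# holding all other attributes at the base case, and record the result.
results = {}
for attr, levels in utils.items():
    for lvl in levels:
        test_product = dict(base, **{attr: lvl})
        results[(attr, lvl)] = purchase_likelihood(test_product)
```

Each entry in `results` is then one bar in the sensitivity chart.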

Sensitivity Analysis (given specific competition)

Rather than considering the relative preference for attribute levels compared to a generic product, we suggest examining their strengths when a specific product concept faces a given set of existing competitive products. How would this differ from the generic case? As an example, if no competitors currently offer “picture-in-picture” capability, the benefit of offering that feature is greater than if multiple competitors already offer it. Also, if the same people who like Sony also tend to desire picture-in-picture, Sony will get an incremental benefit from including this feature. Rather than the Purchase Likelihood model, we can use the default Randomized First Choice simulation method (which has similarities to both the First Choice rule and Share of Preference). The key is to have a base case product (typically, your client's current product specifications) along with competitive products (typically, your client's main current competitors).

Let's assume your client is Sony and the current base case competitive landscape is as follows:

  Brand   Screen       Sound            Channel Blockout   Picture-in-Picture      Price
  Sony    25” Screen   Surround Sound   No Blockout        Picture-in-Picture      $400
  RCA     27” Screen   Stereo Sound     No Blockout        Picture-in-Picture      $350
  JVC     25” Screen   Stereo Sound     No Blockout        No Picture-in-Picture   $300

If we repeat the sensitivity analysis, this time modifying Sony's features one-at-a-time (holding RCA and JVC constant), the results are as follows:

At the base case (Sony, 25” screen, surround sound, no channel blockout, picture-in-picture, $400), Sony captures 33% relative share of preference. The chart above shows the new share of preference if Sony were to modify its existing product to have other specific levels.
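A competitive sensitivity run of this kind can be sketched as follows. The total utilities and the utility gain below are hypothetical, and shares are computed with the simple logit share-of-preference rule exp(u_i)/Σ exp(u_j); the actual Randomized First Choice method additionally simulates error terms, which this sketch omits.

```python
import math

# Logit share of preference: the target product's share among itself
# and its competitors (values sum to 100 across all products).
def share(target_u, competitor_us):
    exp_us = [math.exp(target_u)] + [math.exp(u) for u in competitor_us]
    return 100 * exp_us[0] / sum(exp_us)

# Hypothetical total utilities for the three base case products.
sony_base, rca_u, jvc_u = 0.40, 0.30, 0.10
# Hypothetical utility gain from cutting Sony's price to $350.
price_cut_gain = 0.60

base_share = share(sony_base, [rca_u, jvc_u])
cut_price_share = share(sony_base + price_cut_gain, [rca_u, jvc_u])
print(round(base_share, 1), round(cut_price_share, 1))
```

Because shares of preference are ratio scaled and sum to 100, these results support the “twice as much” comparisons that part-worth utilities do not.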

Obviously, Sony cannot change its brand to RCA or JVC, so the first attribute is irrelevant. The potential improvements to Sony's product can be ranked:

  • Add Channel Blockout (48 relative preference)
  • Reduce price to $350 (45 relative preference)
  • Increase screen size to 26” (39 relative preference)

Although it is unlikely that Sony would want to reduce its features and capabilities, we can also observe the loss in relative preference from including less preferred levels. For example, increasing the price to $450 results in a new relative preference of 25.

Of course, you will eventually want to perform more sophisticated what-if analyses than varying each attribute one-at-a-time. But this simple approach provides a good way to summarize the relative preferences for the levels within your study. Also, we caution the reader regarding the common practice of converting utility values to monetary equivalents (also known as “willingness-to-pay analysis”). You can read more on this subject in “Assessing the Monetary Value of Attribute Levels with Conjoint Analysis: Warnings and Suggestions” in our Technical Papers library on our website.