Scale Factor and Conjoint Results

As researchers gain proficiency in conjoint analysis, they should pay attention to the issue of scale factor. Respondents who are very consistent in their choices have part-worth utilities of larger absolute magnitude than inconsistent respondents have. For example, consider two respondents, “Sloppy Sam” and “Consistent Carrie.”

“Sloppy Sam’s” Raw Utilities:

-0.5  Red
0.0  Green
0.5  Blue

0.5  Low Price
0.0  Medium Price
-0.5  High Price

“Consistent Carrie’s” Raw Utilities:

-3.0  Red
0.0  Green
3.0  Blue

3.0  Low Price
0.0  Medium Price
-3.0  High Price

The “raw” utilities above are the naturally scaled values from utility estimation, as saved to the .HBU or .UTL files. Carrie’s utilities show the same relative pattern of preference as Sam’s, but each value is six times as large (the scale factor). As respondents answer more consistently (with less noise), their scale increases, sometimes quite dramatically.
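As a quick sanity check (a minimal sketch; the variable names are mine, not from any utility file format), a single multiplicative scale factor relates the two respondents:

```python
# Raw utilities from the example (order: Red, Green, Blue, Low, Medium, High Price).
sam    = [-0.5, 0.0, 0.5, 0.5, 0.0, -0.5]
carrie = [-3.0, 0.0, 3.0, 3.0, 0.0, -3.0]

# Ratio of Carrie's to Sam's utilities, skipping the zero-utility levels.
scale = [c / s for s, c in zip(sam, carrie) if s != 0.0]
print(scale)  # [6.0, 6.0, 6.0, 6.0]
```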

Does it make sense to say that Carrie prefers Blue a great deal more than Sam? (Her utility for Blue is 3.0 and his utility is 0.5). She is indeed more consistent in expressing her preferences via the conjoint questionnaire, but it isn’t necessarily the case that she prefers Blue a great deal more.

This highlights the danger of directly comparing raw part-worth utilities across respondents. Since our early versions of ACA in the 1980s, we have recognized this issue and chosen to summarize utilities using a method that normalizes the scale across respondents. We have also recommended that normalized utilities be used in subsequent cross-tab or cluster analysis. Our “zero-centered diffs” normalization gives respondents equal scale (in terms of sums of differences between best and worst levels). The normalized utilities for Carrie and Sam would be identical.
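The zero-centered diffs computation can be sketched as follows (a simplified illustration of the normalization described above, not the actual implementation; the attribute groupings follow the two-attribute example):

```python
def zero_centered_diffs(utilities, n_levels_per_attr):
    """Rescale one respondent's raw part-worths so the attribute ranges
    (best level minus worst level) sum to 100 times the number of attributes."""
    # Split the flat utility vector into per-attribute groups.
    attrs, i = [], 0
    for n in n_levels_per_attr:
        attrs.append(utilities[i:i + n])
        i += n
    # Zero-center each attribute, then total the best-minus-worst ranges.
    centered = [[u - sum(a) / len(a) for u in a] for a in attrs]
    total_diff = sum(max(a) - min(a) for a in centered)
    mult = 100 * len(attrs) / total_diff
    return [round(u * mult, 6) for a in centered for u in a]

sam    = [-0.5, 0.0, 0.5, 0.5, 0.0, -0.5]
carrie = [-3.0, 0.0, 3.0, 3.0, 0.0, -3.0]
levels = [3, 3]  # two attributes (color, price), three levels each

# After normalization the two respondents are identical.
print(zero_centered_diffs(sam, levels))     # [-50.0, 0.0, 50.0, 50.0, 0.0, -50.0]
print(zero_centered_diffs(carrie, levels))  # [-50.0, 0.0, 50.0, 50.0, 0.0, -50.0]
```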

Please note that shares of preference are simulated using raw utilities (and shares are normalized to sum to 100 for each respondent). We do this because we think it probable that Carrie is also more attentive (less haphazard) than Sam in her real-world choices. Her simulated share probabilities will be more extreme than Sam’s (her choices reflect greater certainty).
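A logit share-of-preference simulation on the raw utilities shows how Carrie’s larger scale translates into more extreme shares (a minimal sketch; the two product definitions are hypothetical):

```python
import math

def shares_of_preference(product_utils):
    """Logit shares from total product utilities, normalized to sum to 100."""
    exps = [math.exp(u) for u in product_utils]
    total = sum(exps)
    return [100 * e / total for e in exps]

# Two hypothetical products: A = Blue at Low Price, B = Red at High Price.
sam_products    = [0.5 + 0.5, -0.5 + -0.5]   # A: 1.0, B: -1.0
carrie_products = [3.0 + 3.0, -3.0 + -3.0]   # A: 6.0, B: -6.0

print([round(s, 1) for s in shares_of_preference(sam_products)])     # [88.1, 11.9]
print([round(s, 1) for s in shares_of_preference(carrie_products)])  # [100.0, 0.0]
```

Both respondents prefer product A, but Carrie’s higher scale drives her simulated share to near certainty.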

The average respondent error within the context of the conjoint exercise doesn’t necessarily reflect the amount of error in buyers’ actual choices. In fact, most researchers find that choices in the conjoint laboratory imply less error than purchases in the real world. In other words, the overall scale of conjoint utilities is frequently too high, and the resulting simulator is more sensitive than actual buyer behavior warrants. Researchers often find that they can improve the validity of simulations by “tuning down” the sensitivity (using the “Exponent” setting) to adjust for these differences. Doing so requires good data and experience, so it shouldn’t be done indiscriminately.
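Tuning works by multiplying each respondent’s utilities by the Exponent before the logit transformation; values below 1 flatten the simulated shares. A sketch (the 0.5 value is purely illustrative, not a recommended setting):

```python
import math

def tuned_shares(product_utils, exponent=1.0):
    """Logit shares after scaling total product utilities by the Exponent."""
    exps = [math.exp(exponent * u) for u in product_utils]
    total = sum(exps)
    return [100 * e / total for e in exps]

# Carrie's total utilities for two hypothetical products.
carrie_products = [6.0, -6.0]

print([round(s, 1) for s in tuned_shares(carrie_products, exponent=1.0)])  # [100.0, 0.0]
print([round(s, 1) for s in tuned_shares(carrie_products, exponent=0.5)])  # [99.8, 0.2]
```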