Our conjoint software systems provide a number of outputs for analyzing results, including utilities (or counts), importances, shares of preference, and purchase likelihood simulations. This article discusses these measures and offers guidelines for interpreting them.
Before focusing on conjoint data, we'll review some fundamentals for interpreting quantitative data. The definitions below are adapted from Statistics for Modern Business Decisions, Fourth Edition, by Lawrence L. Lapin.
The Nature of Quantitative Data:
There are four general types of quantitative data:
- Nominal data are those wherein the numbers represent categories, such as 1=Male, 2=Female; or 20=Italy, 21=Canada, 22=Mexico. It is not appropriate to perform mathematical operations such as addition or subtraction with nominal data, or to interpret the relative size of the numbers.
- Ordinal data commonly occur in market research in the form of rankings. If a respondent ranks five brands from best "1" to worst "5," we know that a 1 is preferred to a 2. An example of an ordinal scale is the classification of the strength of tornadoes: a category 3 tornado is stronger and more damaging than a category 2 tornado. It is generally not appropriate to apply arithmetic operations to ordinal data. The difference in strength between a category 1 and a category 2 tornado is not necessarily equal to the difference between a category 2 and a category 3. Nor can we say that a category 2 tornado is twice as strong as a category 1.
- Interval data permit the simple operations of addition and subtraction. The rating scales so common to market research provide interval data. The Celsius scale also is interval scaled. Each degree of temperature represents an equal heat increment. It takes the same amount of heat to raise the temperature of a cup of water from 10 to 20 degrees as from 20 to 30 degrees. The zero point is arbitrarily tied to the freezing point of distilled water. Sixty degrees is not twice as hot as 30 degrees, and the ratio 60/30 has no meaning.
- Ratio data permit all basic arithmetic operations, including division and multiplication. Examples of ratio data include weight, height, time increments, revenue and profit. The zero point is meaningful in ratio scales. The difference between 20 and 30 kilograms is the same as the difference between 30 and 40 kilograms, and 40 kilograms is twice as heavy as 20 kilograms.
Conjoint utilities from ACA, CBC, CVA and ICE are scaled to an arbitrary additive constant within each attribute and are interval data. The arbitrary origin on the scaling within each attribute results from dummy coding in the design matrix. For example, in CBC, logit utilities are scaled to sum to 0 within each attribute. A plausible set of utilities for miles per gallon might look like:
  30 MPG   -1.0
  40 MPG    0.0
  50 MPG    1.0
Just because 30 MPG received a negative utility value does not mean that this level was unattractive. In fact, 30 MPG may have been very acceptable to all respondents. But (all else being equal) 40 MPG and 50 MPG are better. The utilities are scaled to sum to 0 within each attribute, so 30 MPG must receive a negative utility value. (ACA, CVA and ICE utilities are also scaled to an arbitrary additive constant, but the sum of attribute utilities is not always equal to 0.)
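The zero-centering described above can be reproduced directly. The sketch below uses hypothetical raw part-worth values; only the centering step reflects how CBC logit utilities are scaled.

```python
# Hypothetical raw part-worths for the miles-per-gallon attribute.
raw = {"30 MPG": 2.0, "40 MPG": 3.0, "50 MPG": 4.0}

# Zero-center within the attribute: subtract the attribute mean so the
# utilities sum to 0, as CBC logit utilities are scaled.
mean = sum(raw.values()) / len(raw)
centered = {level: u - mean for level, u in raw.items()}
print(centered)  # {'30 MPG': -1.0, '40 MPG': 0.0, '50 MPG': 1.0}
```

Note that 30 MPG ends up negative purely because of the centering, not because the level was unattractive in an absolute sense.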
Suppose we have two attributes with the following utilities:
  Blue    30        Brand A   20
  Red     20        Brand B   40
  Green   10        Brand C   10
The increase in preference from Green to Blue (20 points) is equal to the increase in preference between Brand A and Brand B (also 20 points). However, due to the arbitrary origin within each attribute, we cannot directly compare values between attributes to say that Red (20 utiles) is preferred equally to Brand A (20 utiles). And even though we are comparing utilities within the same attribute, we cannot say that Blue is three times as preferred as Green (30/10). Interval data do not support ratio operations.
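The valid and invalid operations on interval-scaled utilities can be summarized in a short sketch, using the hypothetical utilities from the example above:

```python
# Hypothetical part-worth utilities from the example above.
utils = {"Blue": 30, "Red": 20, "Green": 10,
         "Brand A": 20, "Brand B": 40, "Brand C": 10}

# Valid: differences are meaningful, and comparable across attributes.
color_gain = utils["Blue"] - utils["Green"]       # 20 points
brand_gain = utils["Brand B"] - utils["Brand A"]  # 20 points
assert color_gain == brand_gain

# Not valid: ratios ("Blue is 3x Green") or comparing levels across
# attributes ("Red equals Brand A"), because each attribute's origin
# is arbitrary.
```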
CBC "counts" the number of times an attribute level was chosen relative to the number of times it was available for choice. In the absence of prohibitions, counts proportions are closely related to conjoint utilities. If prohibitions were used, counts are biased. Counts are ratio data when compared within the same attribute. Consider the following counts proportions:
  Blue    0.50      Brand A   0.40
  Red     0.30      Brand B   0.50
  Green   0.20      Brand C   0.10
We can say that Brand A was chosen 4 times as often as Brand C (.40/.10). As with conjoint utilities, we cannot report that Brand A is preferred to Red.
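A minimal sketch of how a counts proportion is tallied follows. The choice-task records here are hypothetical and cover a single attribute; actual CBC counts are accumulated across all respondents and tasks.

```python
from collections import Counter

# Hypothetical choice-task records for one attribute (color): the levels
# shown in each task and the level of the concept that was chosen.
tasks = [
    {"shown": ["Blue", "Red"],          "chosen": "Blue"},
    {"shown": ["Blue", "Green"],        "chosen": "Blue"},
    {"shown": ["Red", "Green"],         "chosen": "Red"},
    {"shown": ["Blue", "Red", "Green"], "chosen": "Blue"},
]

times_shown, times_chosen = Counter(), Counter()
for task in tasks:
    times_shown.update(task["shown"])
    times_chosen[task["chosen"]] += 1

# Counts proportion: times chosen / times available for choice.
counts = {lvl: times_chosen[lvl] / times_shown[lvl] for lvl in times_shown}
```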
Conjoint importances are computed by percentaging the utility ranges (best level minus worst level) across attributes. When using ACA and CVA, importances should be calculated at the individual level rather than from average utilities, unless every attribute has a fixed a priori preference order. When calculating importances from CBC data, we suggest using utilities from Lclass (with multiple segments) or the ICE Module if there are attributes on which respondents disagree about preference order.
Importances are ratio data. An attribute with an importance of 20 (20%) is twice as important as an attribute with an importance of 10.
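The individual-level importance calculation can be sketched as follows. The respondent part-worths are hypothetical; the point is that importances are computed per respondent and then averaged, not derived from averaged utilities.

```python
# Hypothetical individual-level part-worths for two respondents.
respondents = [
    {"Color": {"Blue": 30, "Red": 20, "Green": 10},
     "Brand": {"A": 20, "B": 40, "C": 10}},
    {"Color": {"Blue": 5,  "Red": 25, "Green": 10},
     "Brand": {"A": 50, "B": 10, "C": 20}},
]

def importances(utils):
    # Range (best minus worst level) per attribute, percentaged to sum to 100.
    ranges = {attr: max(lv.values()) - min(lv.values())
              for attr, lv in utils.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

# Compute per respondent, then average across respondents.
per_resp = [importances(r) for r in respondents]
avg = {a: sum(p[a] for p in per_resp) / len(per_resp) for a in per_resp[0]}
```

Averaging individual importances preserves the contribution of attributes on which respondents disagree about preference order, which averaging the utilities first would cancel out.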
Shares of Preference:
All of our conjoint systems offer share of preference simulations. When two or more products are specified in the market simulator, we can estimate the percent of respondents who would prefer each product. Shares of preference are ratio data. Even so, both the exponent (scaling multiplier) and the simulation model used can dramatically affect the scaling of shares of preference. A product that captures twice as much share as another in a first choice simulation (or with a large exponent) may capture considerably less than twice the share under the share of preference (probabilistic) model.
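The effect of the exponent can be illustrated with a simple logit-style share calculation. The product utilities below are hypothetical, and this sketch is not the exact model in any particular simulator, but it shows how a large exponent pushes probabilistic shares toward first-choice behavior.

```python
import math

# Total utilities for two hypothetical products.
u = {"Product 1": 1.0, "Product 2": 0.3}

def share_of_preference(utils, exponent=1.0):
    # Logit rule: shares proportional to exp(exponent * utility).
    expu = {p: math.exp(exponent * v) for p, v in utils.items()}
    total = sum(expu.values())
    return {p: e / total for p, e in expu.items()}

print(share_of_preference(u, exponent=1.0))   # roughly 0.67 / 0.33
print(share_of_preference(u, exponent=10.0))  # nearly first-choice: ~0.999 / ~0.001
```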
During ACA, CVA or ICE interviews, respondents may be asked to rate individual products on a 0 to 100 point purchase likelihood scale. This is very helpful for gauging respondent interest in the product and for scaling the data for use in purchase likelihood simulations. Once we have scaled conjoint data to reflect purchase likelihoods, we can predict how respondents would have rated any combination of attributes included in the study in terms of purchase likelihood.
Purchase likelihoods should not be treated as strictly ratio data. A respondent may not truly be twice as likely to purchase a product he rated a 50 as one he rated a 25. Even so, it is quite common to state that a product with a purchase likelihood of 55 represents a 10% relative increase in purchase likelihood over a product that received a 50.
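One common way to map a summed utility onto a 0 to 100 purchase likelihood scale is a logistic transform. The sketch below assumes that form; the actual scaling in ACA, CVA or ICE is calibrated against respondents' stated likelihood ratings rather than this bare formula.

```python
import math

def purchase_likelihood(total_utility):
    # Logistic transform of the summed utility onto a 0-100 scale.
    return 100 * math.exp(total_utility) / (1 + math.exp(total_utility))

print(purchase_likelihood(0.0))  # 50.0 -- the indifference point
```

The bounded, S-shaped transform is one reason the resulting likelihoods are not strictly ratio-scaled: equal utility increments produce unequal changes in predicted likelihood.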