Sawtooth Software: The Survey Software of Choice

Using Utility Constraints to Improve the Predictability of Conjoint Analysis

Conjoint analysis derives utilities (part-worths) to represent respondent preferences for product attributes. Some attributes, such as price or quality, have a definite a priori order. Since utility estimates contain random error and respondents are fallible, we often observe utilities that seem to violate common sense--especially those calculated at the respondent level. For example, a respondent's utilities might suggest that he prefers to pay higher prices, or desires lower quality. We sometimes call these anomalies "reversals."

Once we've identified reversals, the next step is to decide how to handle them. One school of thought suggests ignoring reversals, since they typically are ironed out in the aggregate, and they add a degree of random behavior to market simulations which may in some cases be valuable for predicting aggregate real-world behavior. After all, buyers don't always behave rationally in the real world. Another alternative is to impose order constraints. Researchers have suggested a variety of ways to impose constraints, ranging from simple tying strategies to complex and computationally intensive algorithms.

The simplest way to deal with reversals is to "tie" values that are reversed. CVA's ordinary least squares utility calculator includes a tying algorithm. Non-parametric techniques such as CVA's monotone regression and LINMAP impose order constraints while solving for part-worths. Recently, a computationally intensive Bayesian method using the Gibbs sampler has been proposed for imposing order constraints on conjoint data (Allenby et al. 1995).
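A tying rule of this sort can be sketched as a pool-adjacent-violators pass: whenever a level's estimated utility falls below that of the level ranked beneath it, the offending levels are tied at their average. The sketch below is a minimal illustration of the general idea under our own naming and data-layout assumptions, not CVA's actual algorithm.

```python
def tie_reversals(utilities):
    """Enforce non-decreasing order on a list of part-worths (worst level
    first) by pooling adjacent violators: any level whose utility falls
    below the previous level's is tied with it at the pooled mean."""
    blocks = []  # each block is [sum_of_utilities, count_of_levels]
    for u in utilities:
        blocks.append([u, 1])
        # Merge backwards while the previous block's mean exceeds this one's.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    result = []
    for s, c in blocks:
        result.extend([s / c] * c)  # tied levels share the block mean
    return result
```

Applied to a price attribute coded worst-to-best, an in-order vector passes through unchanged, while a reversed pair such as utilities of 3.0 and 2.0 for adjacent levels is tied at 2.5.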

It is also possible to impose utility constraints across attributes. If we have prior knowledge that one attribute is more important than another for a given respondent, we can impose a constraint. However, we restrict our discussion to within-attribute utility constraints, since they are generally more applicable to most conjoint data sets.

Both full-profile conjoint methods and ACA can display utility order reversals. The bulk of opinion and research suggests that reversals are less likely to occur in ACA and that utility constraints are less likely to improve the predictive validity of ACA utilities.

Why ACA Utilities Are Less Susceptible to Reversals

The "priors" in ACA are largely responsible for the lower incidence of reversals for ACA data.

Moore et al. (1994) state, "...respondents rank order (or the researcher rank orders for the respondent) the levels of each attribute in the self-explicated stage. This rank ordering does not impose a constraint, but this information is incorporated into the regression . . . These rank orders, which are consistent with a priori reasoning, should lessen the tendency for estimated utilities to be out of order."

Summary of Findings

A number of researchers have shown that utility constraints can significantly improve the predictive validity of full-profile conjoint utilities. In some instances, constraints can also modestly improve predictability for ACA. However, the improvement for ACA is rarely statistically significant.

The two tables below summarize findings for studies we are aware of that have examined these issues.

Table 1
Holdout Prediction Hit Rates for Full-Profile Methods

Study                              Unconstrained   Constrained   Ratio
Srinivasan et al. (1983)                74%           82% (a)     1.11
Van der Lans et al. (1992)              68%           71% (b)     1.04
Moore et al. (1994) study 1             57%           63% (c)     1.11
Moore et al. (1994) study 2             61%           68% (c)     1.11
Herman and Klein (1995) study 1         60%           65% (d)     1.08
Herman and Klein (1995) study 2         70%           78% (d)     1.11
Orme et al. (1997)                      63%           68% (e)     1.08
AVERAGE RATIO                                                     1.09

(a) Constrained using LINMAP
(b) Constrained using MORALS
(c) Constrained using monotonic regression
(d) Constrained using non-metric mathematical programming
(e) Constrained using tying rule
Table 2
Holdout Prediction Hit Rates for ACA

Study                              Unconstrained   Constrained   Ratio
Van der Lans et al. (1992)              73%           73% (a)     1.00
Moore et al. (1994) study 1             59%           61% (b)     1.03
Moore et al. (1994) study 2             54%           56% (b)     1.04
Johnson and Pinnell (1995)              83%           83% (c)     1.00
Orme et al. (1997)                      58%           59% (d)     1.02
AVERAGE RATIO                                                     1.02

(a) Constrained using alternating least squares
(b) Constrained using monotonic regression
(c) Constrained using Bayesian technique and Gibbs sampler (Allenby et al. 1995)
(d) Constrained using tying rule

As shown in Table 1, the average improvement in predictive validity for full-profile methods was 9%. Table 2 shows that constraints improved the predictive validity of ACA by an average of only 2%. Among the studies in Table 2, Moore et al. found the largest improvements for ACA, at 3% and 4%. Regarding imposing utility constraints for ACA, Moore et al. conclude, "The small increase in the ACA validations argues against the use of this procedure with ACA."

A Dissenting Opinion

The May 1995 Journal of Marketing Research included an article by Allenby, Arora and Ginter (hereafter, AAG) entitled "Incorporating Prior Knowledge into the Analysis of Conjoint Studies." AAG reported that prohibiting sign reversals in ACA resulted in significant improvements. AAG proposed an interesting new method that uses the Gibbs sampler to estimate constrained part-worths. They "held out" the last three pairs from an ACA interview for external validation. AAG measured performance with a mean squared error (MSE) measure computed from draws from the posterior distribution of the model parameters, finding the median MSE for the held-out pairs to be about 4.60 for standard ACA estimates and 3.52 when constraints were imposed. AAG concluded that imposing utility constraints via the Gibbs sampler improved the quality of ACA utilities.

Another Look at AAG's Findings

Johnson and Pinnell (1995) examined the same data in terms of the more commonly accepted validation measure of holdout hit rates. AAG provided their part-worth estimates for both the standard and Bayes methods. Johnson and Pinnell found that hit rates for the held-out pairs were 95% for standard ACA and 86% for Bayes (t=10.99). Constraints had actually been harmful to prediction. The data set also included four holdout choice tasks that AAG had not considered. Hit rates for those additional holdouts were 83.2% for standard ACA and 83.3% for Bayes; the difference is not significant (t=.36). Johnson and Pinnell concluded that the Bayes method for imposing utility constraints had not significantly improved the predictability of ACA utilities.
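For readers who want to run this style of comparison on their own holdout data, a hit-rate difference can be checked with a simple significance test. The sketch below uses a pooled two-proportion z-test as an illustrative stand-in for the paired t-tests reported above; the function name and any counts passed to it are our own, not drawn from these studies.

```python
from math import sqrt

def hit_rate_z(hits_a, n_a, hits_b, n_b):
    """Pooled two-proportion z-statistic for comparing the holdout hit
    rates of two estimation methods (e.g., standard vs. constrained).
    |z| > 1.96 indicates a difference significant at the 0.05 level."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)          # combined hit rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With hypothetical samples of 100 holdout tasks per method, a 95% vs. 86% split tests as significant, while a 60% vs. 59% split does not.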

It is important to note that AAG imposed order constraints for attributes such as brand (which do not have a universal order) based upon stated preferences from the priors portion of the ACA interview. We suspect that stated preferences might not always represent "truth" for every respondent. Some respondents may have been confused by the stated preference question, thus providing bad information for use in constraints. We expect that Bayesian methods may provide modest improvement for ACA data sets when used only to constrain attributes with strong a priori order, and we look forward to more evidence of their usefulness in the future.

Suggestions for Practice

We think it is reasonable to correct reversals for attributes with strong a priori ordering, regardless of the conjoint method. Our full-profile system (CVA) lets the researcher prescribe order constraints under either OLS or monotone regression. The CBC system can impose order constraints only under the Latent Class add-on module.

ACA is less susceptible to reversals than full-profile methods, but reversals can still occur. The current version of ACA influences, but does not strictly constrain, utility orders. We may include such constraints in future releases. For the time being, ACA users should be aware of the issue and examine their data sets. Counting reversals by respondent provides an additional data point, beyond the "correlation" recorded in the utility file, for judging respondent reliability. You may find it useful to discard the most unreliable respondents. For the cases that remain, tying any offending levels can be a simple yet effective remedy.
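The screening step above can be sketched as a small counting routine. The function name and the worst-to-best ordering convention are our own assumptions for illustration, not ACA's utility-file format.

```python
def count_reversals(part_worths, order):
    """Count adjacent-pair order violations among one attribute's
    part-worths for a single respondent. `order` lists level indices
    from worst to best, so a correctly ordered attribute contributes
    zero reversals."""
    ranked = [part_worths[i] for i in order]
    return sum(1 for lower, upper in zip(ranked, ranked[1:]) if lower > upper)
```

Summing this count across all a priori ordered attributes gives a per-respondent reversal tally that can sit alongside the fit statistic when deciding whom to discard.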

References

Allenby, Greg M., Neeraj Arora, and James L. Ginter (1995), "Incorporating Prior Knowledge into the Analysis of Conjoint Studies," Journal of Marketing Research, (May), 152-62.

Herman, Steve and Rob Klein (1995), "Improving the Predictive Power of Conjoint Analysis," Marketing Research, (Fall) Vol. 7 No. 4, 29-31.

Johnson, Richard M. and Jonathan Pinnell (1995), "Comment on 'Incorporating Prior Knowledge into the Analysis of Conjoint Studies,'" Working Paper, Sawtooth Software, Sequim, WA.

Moore, William L., Raj B. Myhta and Teresa M. Pavia (1994), "A Simplified Method of Constrained Parameter Estimation in Conjoint Analysis," Marketing Letters 5:2, 173-81.

Orme, Bryan K., Mark Alpert and Ethan Christensen (1997), "Assessing the Validity of Conjoint Analysis--Continued," Working Paper, Sawtooth Software, Sequim, WA.

Srinivasan, V., Arun K. Jain, and Naresh K. Malhotra (1983), "Improving Predictive Power of Conjoint Analysis by Constrained Parameter Estimation," Journal of Marketing Research, (November), 433-38.

van der Lans, Ivo A., Dick R. Wittink, Joel Huber and Marco Vriens (1992), "Within- and Across-Attribute Constraints in ACA and Full Profile Conjoint Analysis," Sawtooth Software Conference Proceedings, 365-79.