Although ACA has proven to be a useful and popular technique over more than three decades, many researchers have argued that the self-explicated importance section in the ACA "Priors" may be a weak link. The self-explicated importances can be confusing to some respondents, and the tendency for respondents to state that most attributes are important may flatten the final derived importances. Flattened importances mean that the effect of attributes in market simulations is biased toward greater similarity across attributes. As a result, ACA users sometimes report that critically important attributes tend to carry too little weight, and attributes of very little consequence tend to carry too much weight. This problem is exacerbated when there are very many attributes in the study.
At the 2004 Sawtooth Software conference, we reported on an experiment in which we dropped the importance questions altogether. The experiment was conducted within an actual commercial study and included 20 total attributes and around 1500 respondents. (See "The 'Importance' Question in ACA: Can It Be Omitted?" available within our technical papers library at http://www.sawtoothsoftware.com/support/technical-papers). We tested two methods of obtaining prior attribute importance information for the purposes of selecting attribute combinations to display in the conjoint pairs:
• Assume equal prior attribute importances
• Assign prior importances based on the average derived importances from previous respondents
We should note that in either case, part-worth utilities were updated per standard ACA practice: as the respondent answered conjoint pairs, those answers were combined with his/her prior information, and the updated part-worth utility estimates were used to select the next conjoint pair(s).
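One purpose of this within-interview updating is to select pairs of product concepts with reasonable utility balance under the current estimates. As a minimal illustration (not ACA's actual design algorithm; the function and variable names are hypothetical), one could pick, from a set of candidate concepts, the two whose total utilities are closest:

```python
from itertools import combinations

def most_balanced_pair(candidate_concepts, utility):
    """Pick the two candidate concepts whose total utilities (under the
    current part-worth estimates) are closest, i.e. the most
    utility-balanced pairing. Illustrative sketch only.

    candidate_concepts: list of tuples of level names
    utility: dict mapping level name -> current part-worth estimate
    """
    def total(concept):
        return sum(utility[level] for level in concept)

    # Evaluate every pairing and keep the one with the smallest
    # difference in total utility between the two concepts.
    return min(combinations(candidate_concepts, 2),
               key=lambda pair: abs(total(pair[0]) - total(pair[1])))
```

In a real interview, the utilities fed to a selection rule like this would be re-estimated after each answered pair, so later pairs reflect everything learned about the respondent so far.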
We used ACA/HB to estimate final part-worth utilities, where the utilities were constrained by the within-attribute level ranking/ratings but the importances were not constrained. Therefore, the only information used for deriving the relative strength of attributes came from the conjoint pairs section. We found that dropping the importance questions...
• improved the prediction of shares for holdout choice sets,
• degraded the prediction of individual choices (hit rates),
• resulted in more differentiated (and different) derived importances, and
• reduced the interview time.
We also found that both methods of dropping the importances worked about equally well.
The margin of victory in predicting holdout shares would have been even greater had we used the time savings from dropping the self-explicated importance section to ask even more pairs. Most researchers and managers are more interested in the accuracy of aggregate share predictions than in individual-level classification rates. And if a key deliverable is a summary of respondents' preferences using part-worth utility or importance charts, dropping the importance questions should result in more valid displays. There are therefore advantages to dropping importances. They come, however, at the expense of a somewhat larger sample size needed to stabilize the results compared to standard ACA. Also, we recommend using the saved time to ask additional pairs questions.
We repeated the study in a different context (international study, with 10 attributes) and confirmed the findings of the first study. Based on these two favorable outcomes, we are offering the ability to drop the importance question in the commercial version of ACA. We hope that providing this added functionality will encourage further research in this area.
Using Importances from Prior Respondents
We should note that in the second experiment referenced above we improved the method for using prior group importances to select the design in the conjoint pairs (relative to the method used in the first experiment). Rather than use the previous respondents' group importances as a definitive rank order for arranging the attributes in the cyclical plan used for the first-stage 2x2 pairs, we assigned the rank order according to random draws from a rectangular distribution, such that the probability of an attribute achieving a high rank-order position was proportional to its prior importance score. This leads to greater variety in the conjoint pairs design (especially in the first pairs showing two attributes at a time, where the attributes are arranged in a circle, from best to worst), while still using information about the relative strength of the attributes to select paired product concepts with a reasonable degree of utility balance. Especially when the researcher takes only a subset of the attributes into the pairs section, selecting that subset with likelihood proportional to prior importances ensures that even the least important attributes (based on the prior group average) still have a chance to be evaluated in the pairs section. Since part-worth estimation is performed using HB, the respondents who evaluate the least important attributes in the pairs contribute useful information for refining the estimates of those part-worths across the population.
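One way to implement this kind of importance-weighted rank assignment (a hypothetical sketch, not the exact algorithm described above) is sequential weighted sampling without replacement: at each step, the next rank position is filled by an attribute drawn with probability proportional to its prior importance score.

```python
import random

def rank_by_prior_importance(importances, rng=None):
    """Assign a rank order for the pairs design by drawing attributes
    without replacement, with selection probability at each step
    proportional to the prior group importance score.
    Illustrative sketch; names are hypothetical, not ACA's internals.

    importances: dict mapping attribute name -> prior importance score
    Returns a list of attributes from highest to lowest assigned rank.
    """
    rng = rng or random.Random()
    remaining = dict(importances)
    order = []
    while remaining:
        attrs, weights = zip(*remaining.items())
        # Weighted draw: high-importance attributes tend to land in
        # high rank positions, but any attribute can end up anywhere.
        pick = rng.choices(attrs, weights=weights, k=1)[0]
        order.append(pick)
        del remaining[pick]
    return order
```

Repeated across respondents, this produces varied designs while, on average, placing the attributes with higher prior importances in higher rank positions.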
Once respondents have completed the interview, their final part-worth utilities (OLS estimation) are used to derive importance scores, and those importance scores are used to update the previous population importances (with each respondent equally weighted). Lighthouse Studio writes a file containing the updated group importances, named STUDYNAMEavgimp.cgi, to the server during data collection. This file is reset (to equal average importances) if Reset Web Survey is selected in the admin module.
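As a sketch of the arithmetic involved (using the standard range-based importance formula for conjoint part-worths, and an equally-weighted running mean; the function names are ours, not Lighthouse Studio's):

```python
def derive_importances(partworths):
    """Convert one respondent's part-worths to percentage importances:
    each attribute's utility range (max - min across its levels)
    divided by the sum of ranges across all attributes, times 100.

    partworths: dict mapping attribute name -> list of level utilities
    """
    ranges = {a: max(u) - min(u) for a, u in partworths.items()}
    total = sum(ranges.values())
    return {a: 100.0 * r / total for a, r in ranges.items()}

def update_group_importances(avg, n, new_imp):
    """Fold one new respondent's importances into the running group
    average, each respondent equally weighted. Returns (new_avg, n + 1).

    avg: dict of current group-average importances over n respondents
    """
    new_avg = {a: (avg[a] * n + new_imp[a]) / (n + 1) for a in avg}
    return new_avg, n + 1
```

The updated averages would then serve as the priors for selecting the pairs design for subsequent respondents.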
Generally, when you can share information across respondents on the same server (or CAPI installation), we recommend assuming prior importances based on previous respondents. We think that, given the improvement to the algorithm, this method will prove slightly better than assuming equal importances for priors. We again stress that if you drop the importance section, much of the time savings should be devoted to asking additional pairs questions.
When you omit the importance questions and use ACA/HB for estimation, the appropriate setting is to fit Pairs only (fitting Priors is inappropriate) and not to use prior importances as constraints. However, even if you did try to use the importances as constraints, we write out importances that are equal for all respondents, so no between-attribute importance constraints would apply.