
Staying Out of Trouble with ACA

Though we've been told that ACA is remarkably easy to use, we talk nearly every day with an ACA user who has run into a problem of some kind. We thought it might be helpful to list the problems responsible for the most frequent customer support calls. Each could be the subject for an essay, but we'll spare you, and just list the problems with a few words of explanation for each.

Using too many prohibitions: ACA lets you specify that certain combinations of attribute levels shouldn't occur together in the questionnaire. But if you prohibit too many combinations, ACA won't be able to produce a good design and may fail altogether. Remember that it is fine to present combinations of levels that do not exist in the market today; including unusual combinations can often improve the estimation of utilities. Prohibitions should be used sparingly.

Calculating importances using average utilities: When possible, attribute importances should be computed for each individual and then averaged, rather than calculated from average utilities. When based on average utilities, an attribute that is important to everyone, but about which people disagree, can appear unimportant. (See an accompanying article, "The Basics of Interpreting Conjoint Utilities.")
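
As a rough illustration, here is a small Python sketch using made-up part-worth utilities and the usual range-based definition of importance. Two respondents both care strongly about Brand but disagree about which brand they prefer; averaging their utilities before computing importances makes Brand look unimportant.

    # Hypothetical part-worths for two respondents and two attributes.
    # Both care strongly about Brand but prefer opposite brands;
    # both care only mildly about Price.
    respondents = [
        {"Brand": {"A": 40.0, "B": -40.0}, "Price": {"$10": 10.0, "$12": -10.0}},
        {"Brand": {"A": -40.0, "B": 40.0}, "Price": {"$10": 10.0, "$12": -10.0}},
    ]

    def importances(utilities):
        # Importance of an attribute = its utility range / sum of all ranges.
        ranges = {a: max(l.values()) - min(l.values()) for a, l in utilities.items()}
        total = sum(ranges.values())
        return {a: 100.0 * r / total for a, r in ranges.items()}

    # Preferred: compute importances per respondent, then average them.
    per_person = [importances(r) for r in respondents]
    print({a: sum(p[a] for p in per_person) / len(per_person) for a in per_person[0]})
    # -> Brand 80%, Price 20%

    # Not preferred: average the utilities first, then compute importances.
    averaged = {a: {lvl: sum(r[a][lvl] for r in respondents) / len(respondents)
                    for lvl in respondents[0][a]}
                for a in respondents[0]}
    print(importances(averaged))
    # -> Brand 0%, Price 100%: Brand's importance has vanished.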

Doing complex conjoint studies by phone: Conjoint questionnaires are often difficult for respondents, who must keep many things in mind at the same time. ACA has been used successfully in many phone studies, but it's best when the subject matter is simple and the interview is short. We suggest you limit phone studies to 10 or fewer attributes, three or fewer levels per attribute, and two attributes per paired concept.

Reversing signs of ordered attribute levels: If you already know the order of preference of attribute levels, such as for quality or price, you can inform ACA about which direction is preferred and avoid asking respondents those questions. But you can also misinform ACA about the preferred levels, which can lead to data that are almost impossible to salvage. To avoid this situation, take the interview yourself, making sure that the questions are all reasonable (neither member of a pair dominates the other on all included attributes). Also, answer the pairs section with mid-scale values and then check that the utilities come out as you expect.

Using ACA for pricing research when not appropriate: There are three aspects to this point.

  1. All "main effects" conjoint methods, including ACA, assume that every product has the same sensitivity to price. This is a bad assumption for many product categories, and CBC may be a better choice for pricing research, since it can measure unique price sensitivity for each brand.
  2. When price is just one of many attributes, ACA may assign too little importance to it. In a Sawtooth News article, Jon Pinnell reported that it may sometimes be appropriate to increase the weight that ACA attaches to price. This is particularly likely if the researcher includes several attributes that are similar in the minds of respondents, such as Quality, Durability, and Longevity. If redundant attributes like these are included, they may appear more important in total than they should be, and other attributes, such as price, may appear less important than they really are.
  3. It is not a good idea to use ACA's (or CBC's) "correction for product similarity" with quantitative variables such as price. Suppose there are five price levels, and all products are initially at the middle level. As one product's price is raised, it can receive a "bonus" for being less like the other products, which more than compensates for its declining utility due to its higher price. The result is that the correction for product similarity can lead to nonsensical price sensitivity curves, as sketched just below this list.
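
To see how point 3 can happen, here is a purely hypothetical Python sketch. The "distance bonus" below is not ACA's actual correction formula, just a stand-in that rewards a product for being different from its competitors on price, and the part-worths are made up.

    import math

    # Made-up price part-worths: utility falls as price rises.
    price_utility = {10: 2.0, 11: 1.5, 12: 1.0, 13: 0.5, 14: 0.0}
    competitor_price = 12  # the other two products stay at the middle level

    def share(own_price, bonus_weight=0.6):
        # Hypothetical similarity correction: reward distance from competitors.
        u_own = price_utility[own_price] + bonus_weight * abs(own_price - competitor_price)
        u_comp = price_utility[competitor_price]
        return math.exp(u_own) / (math.exp(u_own) + 2 * math.exp(u_comp))

    for p in sorted(price_utility):
        print(p, round(share(p), 3))
    # If the bonus grows faster than utility falls, predicted share *rises*
    # as price climbs above 12 -- a nonsensical price sensitivity curve.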

Using unequal intervals for continuous variables: If you use the ranking rather than the rating option, ACA's prior estimates of utility for the levels of each attribute have equal increments. That works well if you have chosen your attribute levels to be spaced regularly, for example with constant increments such as prices of $.10, $.20, and $.30, or proportional increments such as 1 meg, 4 megs, or 16 megs. But if you use oddly structured intervals, such as prices of $1.00, $1.90, and $2.00, ACA's utilities are likely to be biased in the direction of equal utility intervals.

Including too many attributes: ACA lets you study as many as 30 attributes, each with up to 9 levels. But that doesn't mean anyone should ever have a questionnaire that long! Many of the problems with conjoint analysis occur because we ask too much of respondents. Don't include n attributes when n-1 would do!

Including too many levels for an attribute: Some researchers mistakenly use many levels in the hope of achieving more precision. ACA can only study 5 levels in detail, and when there are more than 5 levels, ACA must make assumptions about the others. With quantitative variables such as price or speed, you will have more precision if you measure only 5 levels and use interpolation for intermediate values.
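
For example, here is a minimal sketch of that kind of interpolation, assuming made-up part-worths estimated at five measured price levels:

    # Made-up part-worths estimated at five measured price levels.
    measured = {100: 1.20, 125: 0.80, 150: 0.45, 175: 0.20, 200: 0.00}

    def interpolated_utility(price):
        # Linear interpolation between the two nearest measured levels.
        points = sorted(measured.items())
        if not points[0][0] <= price <= points[-1][0]:
            raise ValueError("extrapolating outside the measured range is risky")
        for (p_lo, u_lo), (p_hi, u_hi) in zip(points, points[1:]):
            if p_lo <= price <= p_hi:
                return u_lo + (price - p_lo) / (p_hi - p_lo) * (u_hi - u_lo)

    print(interpolated_utility(130))  # 0.73, between the 125 and 150 part-worths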

Abuse of unacceptables: ACA lets you include an "unacceptables" section, in which respondents are permitted to identify features so unattractive that products with those features would never be considered. Those attribute levels are excluded from the balance of the interview. Unacceptables provide a way to shorten interviews that would otherwise be too long, but respondents are too willing to discard levels as "totally unacceptable." We suggest avoiding use of unacceptables. (Ask us for a copy of an article on unacceptables by Noreen M. Klein).

Wording of null levels: Some attributes have levels of "present" and "absent." One hopes the "absent" level will serve only as a contrast to the "present" level, rather than having a negative intrinsic effect. One approach is to avoid using the word "absent" altogether, representing that level with a neutral symbol such as a dash or a period.

Interpreting simulation results as "market share": Conjoint simulation results often look so much like market shares that people sometimes forget they are not. Conjoint simulation results seldom include the effects of distribution, out-of-stock, or point-of-sale marketing activities. Also, they presume every buyer has complete information about every product. Researchers who represent conjoint results as forecasts of market shares are asking for trouble.

Not including adequate attribute ranges: It's usually all right to interpolate, but usually risky to extrapolate. With quantitative attributes, include enough range to describe all the products you will want to simulate.

Imprecise attribute levels: We assume that attribute levels are interpreted similarly by all respondents. That's not possible with "loose" descriptions like "10 to 14 pounds," or "good looking."

Attribute levels not mutually exclusive: Every product must have exactly one level of each attribute. Researchers new to conjoint analysis sometimes fail to realize this, and use attributes for which many levels could describe each product. For example, with magazine subscription services, one might imagine an attribute listing magazines respondents could read, in which a respondent might want to read more than one. An attribute like that should be divided into several, each with levels of "yes" and "no."
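
In other words (using hypothetical magazine titles purely for illustration), the attribute should be restructured along these lines:

    # Problematic: levels are not mutually exclusive -- one subscription
    # package could include several of these magazines at once.
    bad_attribute = {"Magazines included": ["Time", "Newsweek", "Sports Illustrated"]}

    # Better: one attribute per magazine, each with mutually exclusive levels,
    # so every product has exactly one level of every attribute.
    good_attributes = {
        "Time included": ["Yes", "No"],
        "Newsweek included": ["Yes", "No"],
        "Sports Illustrated included": ["Yes", "No"],
    }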

Misinterpreting a specification of zero in simulations: ACA's simulator lets you specify a product's value for an attribute as zero, meaning: "don't include this attribute for this product." But to use a specification of zero correctly, you must also use zeros for the other products. Researchers sometimes assume in error that zero means "not available."

Assessing the impact of product line extensions inappropriately: All logit-based simulators have trouble with products that are very similar to one another. If two products are especially similar, most conjoint simulators will give them more share than they deserve. This presents a problem when trying to assess the impact of a line extension, which probably shares many characteristics with a current product. One way to approach this problem is to use ACA's "correction for product similarity." Another is to "fool" the simulator by including two entries for every product: the brand with the line extension gets its two somewhat different products, while every other brand gets two identical copies of its single product.
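
The share-inflation problem is easy to see with a small logit share calculation (the utilities below are made up):

    import math

    def logit_shares(utilities):
        # Standard logit rule: share is proportional to exp(utility).
        expu = [math.exp(u) for u in utilities]
        return [e / sum(expu) for e in expu]

    # Two distinct products with equal utility split the market evenly.
    print(logit_shares([1.0, 1.0]))        # [0.5, 0.5]

    # Add a near-clone of the second product: the two nearly identical entries
    # together now capture two-thirds of the market, even though they offer
    # buyers nothing genuinely new.
    print(logit_shares([1.0, 1.0, 1.0]))   # [0.333, 0.333, 0.333]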

Insufficient Memory: ACA Version 4 requires 550K of free memory for questionnaire authoring, although less memory is required for interviewing.

An Important Difference Between ACA Versions 3 and 4: Some users of Version 3 have been disturbed, upon upgrading to Version 4, to find that "correlation" values suggest the utilities predict responses to the calibration concepts less well. Much of this difference is due to a change in how goodness of fit is reported. In Version 3 we reported the correlations between predictions and actual responses, and in Version 4 we report the squares of the correlations. Thus, a correlation of, say, .7 would be reported as an r-squared of .49 in Version 4. There are many other differences between Versions 3 and 4 as well, which are documented in the "ACA V4 Technical Paper," available for download from our Internet home page (http://www.sawtoothsoftware.com).