How to avoid a dominating product with the cheapest price (ACA, level ranking of attributes)?

Hi,

I'm quite confused: although we have set level rankings in ACA, pair questions keep coming up where the better product also has the cheaper price.

Example:
We have one product attribute with the levels
   Basic product
   Basic product + feature 1
   Basic product + feature 1 + feature 2
which have a clear worst-to-best ranking.

And we have a price attribute from $100 to $120 in steps of $5 (best-to-worst ranking).

We really like feature 1, but we don't care much about feature 2. So we almost always prefer the second product level to the first, and only when the price difference is small do we prefer the third level to the second (most of the time we still prefer the second to the third).

This is what happens during the Pairs section (10 pages): almost always, pair questions come up where the better product also has the cheaper price, e.g.:
   Basic product + feature 1                      Basic product + feature 1 + feature 2
   Price: $120                                    Price: $110
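
To make our trade-off concrete, here is a small worked illustration with made-up part-worth utilities (the numbers are assumptions for illustration only, not our actual data): feature 1 is worth a lot, feature 2 only a little, and each $5 price step costs a bit of utility. It shows why we prefer the second level unless the price gap is small, and why the pair above has an obvious answer.

```python
# Hypothetical part-worth utilities (illustration only, not real study data):
product_utils = {
    "Basic": 0.0,
    "Basic + feature 1": 2.0,              # feature 1 matters a lot
    "Basic + feature 1 + feature 2": 2.3,  # feature 2 adds only a little
}
price_utils = {100: 1.00, 105: 0.75, 110: 0.50, 115: 0.25, 120: 0.00}

def total(product, price):
    return product_utils[product] + price_utils[price]

# A $10 price gap: the simpler product wins (3.00 vs 2.80).
print(total("Basic + feature 1", 100), total("Basic + feature 1 + feature 2", 110))

# A $5 price gap: the fuller product wins, but only barely (3.00 vs 3.05).
print(total("Basic + feature 1", 100), total("Basic + feature 1 + feature 2", 105))

# The pair shown above is dominated: the fuller product is also cheaper,
# so the choice is obvious (2.00 vs 2.80) and the question feels wasted.
print(total("Basic + feature 1", 120), total("Basic + feature 1 + feature 2", 110))
```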

This really disturbed a lot of our testers. How can we avoid it (we have already chosen the level-ranking options for both attributes)?

Thanks in advance,
Samuel
asked Feb 19, 2013 by rossam (160 points)

1 Answer

0 votes
This question has come up at least 30 to 50 times over the nearly 20 years I've been here at Sawtooth Software.  The most common reasons are:

1.  You've got your attribute a priori order settings wrong in your attribute setup.  Please triple check that.

2.  The testers are answering essentially randomly, meaning that they are throwing a lot of random error into the process and then the updating regression informs ACA that the respondent no longer believes that lowest prices are better than highest prices, etc.  To keep this from happening as much during testing, have your testers answer the middle answer in the ACA Pairs questions.  This should greatly reduce the occurrence of the dominated pairs in ACA.

Finally, it's important to recognize that probably about 5% of the time, even with a good respondent who is answering rationally and with low error (and assuming you don't have a problem with the a priori settings for your attributes!), you will get what appear to be dominated pairs.  That occurs because ACA re-runs the regression analysis after each pair is answered to update the part worth utilities.  Then, levels are arranged on the screen in the next pair according to current estimates of preferences.  So, with limited information per respondent, the part worth utilities have some noise and intermediate updating steps can actually suggest that the respondent thinks that higher prices are better than lower prices.  Now, as the respondent provides more and more data (and if the data are good), the likelihood that dominated pairs occur should go down.
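
To illustrate the mechanism, here is a toy simulation (a sketch only; it is not Sawtooth Software's actual ACA estimation routine) that re-fits an ordinary least squares regression after each pair answered by a noisy respondent. With only a handful of answers, the estimated price coefficient can briefly take the wrong sign, which is exactly when a dominated-looking pair can be shown; as more pairs accumulate, the estimate settles down.

```python
# Toy simulation of "re-estimate after every pair" with a noisy respondent.
# Assumed: 2 binary features plus price, a graded pair rating = utility
# difference + error, and a simple OLS refit after each answer.
import numpy as np

rng = np.random.default_rng(0)
true_beta = np.array([2.0, 0.3, -0.05])   # feature 1, feature 2, per-$ price effect
n_pairs, noise_sd = 10, 1.5

X_rows, y = [], []
for t in range(n_pairs):
    # Random left/right concepts: (has feature 1, has feature 2, price in $)
    left  = np.array([rng.integers(0, 2), rng.integers(0, 2), rng.choice([100, 105, 110, 115, 120])])
    right = np.array([rng.integers(0, 2), rng.integers(0, 2), rng.choice([100, 105, 110, 115, 120])])
    diff = (left - right).astype(float)
    # Graded preference = true utility difference + response error
    rating = diff @ true_beta + rng.normal(0, noise_sd)
    X_rows.append(diff)
    y.append(rating)
    # Re-estimate after each answer, using all pairs collected so far
    beta_hat, *_ = np.linalg.lstsq(np.array(X_rows), np.array(y), rcond=None)
    flag = "  <-- temporarily implies higher prices are preferred" if beta_hat[2] > 0 else ""
    print(f"after pair {t + 1:2d}: estimated price coefficient {beta_hat[2]:+.4f}{flag}")
```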

The good thing is that when a dominated pair is presented on the screen, the respondent can quickly answer at the extreme end of the scale, quickly righting the ship and getting ACA back on the path to estimating good part worths.  So such dominated pairs are not wasted questions.
answered Feb 19, 2013 by Bryan Orme Platinum Sawtooth Software, Inc. (134,015 points)
Please double-check that ACA is the appropriate conjoint method for your research.  Over the last 15 years, ACA has fallen out of favor for pricing research studies.  CBC and ACBC are usually the preferred instruments now.

ACA has some weaknesses for pricing research:

1.  It often understates the importance of price (understates the price elasticity).  The degree of understatement increases as:

a) the number of attributes increases in ACA (especially 10 or more!)
b) the range of the price attribute increases (making price a priori by far the most important attribute in your list).  Partial-profile conjoint tends to decrease the differences in importance between attributes, relative to full-profile.  Plus, the Importance rating question in ACA can make it hard for respondents to discriminate enough between killer-important attributes and very low importance attributes (see the importance sketch after this list).
c) the importance question is not asked very optimally (see the manual for suggestions on asking the importance question well).  One recommendation is to use ACA/HB analysis with large sample sizes and drop the Importance question altogether.

2) ACA cannot estimate interaction effects between attributes, such as between brands and prices.  This means that for each respondent, the price function is constant across brands.  (When individual-level respondent data are placed in a market simulator, the resulting behavior of the market choices isn't necessarily constrained to common price function choice behavior across brands...but it still can be a weakness in the model.)
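
For reference, attribute importances in conjoint are commonly summarized as each attribute's part-worth range divided by the sum of ranges across attributes. The sketch below computes that share from hypothetical part-worths (the numbers and attribute list are assumptions for illustration); in the understatement scenario above, ACA would return a noticeably smaller share for price than its a priori range would suggest.

```python
# Standard importance calculation from part-worths (hypothetical numbers):
# importance of an attribute = its part-worth range / sum of ranges.
part_worths = {
    "product": {"Basic": 0.0, "Basic + f1": 2.0, "Basic + f1 + f2": 2.3},
    "price":   {100: 1.0, 105: 0.75, 110: 0.5, 115: 0.25, 120: 0.0},
    # ...a real study would list every attribute here
}

ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total_range = sum(ranges.values())
for attr, r in ranges.items():
    print(f"{attr}: importance = {r / total_range:.0%}")
```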

Some researchers have included additional CBC-looking holdout choice tasks in their surveys so that they can adjust the importance of price in ACA post hoc, to help reduce some of these known weaknesses.

Please see these related papers:

https://www.sawtoothsoftware.com/download/techpap/priceaca.pdf

https://www.sawtoothsoftware.com/download/techpap/omitimp.pdf
Thank you, Bryan, for your extensive answer!

And sorry for my late response. I think I understand all the points you mention. In the end, the results (both from testing and from the field) look good; it's just somewhat irritating during the survey (we mentioned in the survey that this could happen later on, so as to keep the irritation lower).

Actually, we like ACA because it breaks rather complicated insurance products down into pieces, which makes the decision easier for survey participants. We sometimes skip the rating questions or ask them in an alternative way. And yes, we always use ACA/HB for the analysis.

We often find price understated, so we usually add a van Westendorp question. This actually combines pretty well (though people sometimes simply have no idea about insurance prices, and the price ranges differ widely). We might, however, try a short CBC sample, or generally use more CBC in the future.
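
For anyone not familiar with the van Westendorp Price Sensitivity Meter, here is a minimal sketch of the crossing-point calculation on made-up answers (the numbers and the simple nearest-crossing approximation are assumptions for illustration; real PSM analyses differ in the exact curves and interpolation used).

```python
# Minimal van Westendorp sketch on made-up answers (simplified approximation).
import numpy as np

# One row per respondent: too_cheap, cheap/bargain, expensive, too_expensive ($)
answers = np.array([
    [ 80,  95, 115, 130],
    [ 85, 100, 120, 140],
    [ 75,  90, 110, 125],
    [ 90, 105, 125, 145],
])
grid = np.arange(70, 151)  # price grid to evaluate the cumulative curves on

too_cheap     = np.mean(answers[:, [0]] >= grid, axis=0)  # share calling the price too cheap
cheap         = np.mean(answers[:, [1]] >= grid, axis=0)
expensive     = np.mean(answers[:, [2]] <= grid, axis=0)  # share calling the price expensive
too_expensive = np.mean(answers[:, [3]] <= grid, axis=0)

# Indifference price point: where the "cheap" and "expensive" curves cross.
ipp = grid[np.argmin(np.abs(cheap - expensive))]
# Optimal price point: where "too cheap" and "too expensive" cross.
opp = grid[np.argmin(np.abs(too_cheap - too_expensive))]
print("indifference price point ~", ipp, " optimal price point ~", opp)
```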

Best regards,
Samuel
...