How is the coefficient of determination calculated in adaptive conjoint analysis?

Hi,

I was wondering how to judge data quality in ACA and came across the coefficient of determination (r2) as an indicator of the internal validity of a respondent's data. However, I have only found very general information on how this value is actually calculated from the study data and how it can be interpreted.

If anyone could provide some information on the coefficient's calculation in the context of ACA, that would be a great help. Relatedly, are there any rules of thumb on interpretation or cutoffs?

Thanks a lot!
asked May 9, 2014 by anonymous

1 Answer

0 votes
Indeed, ACA reports a "correlation coefficient", which is really the R-squared (agreement) between the utilities estimated in the first sections of ACA (self-explicated priors + Conjoint Pairs) and the ratings made in the final section (Calibration Concepts).  This assumes you are using the default OLS utility estimation routine, rather than the better ACA/HB routine.
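To make that concrete, here is a rough sketch of the kind of statistic being reported (this is not our actual estimation code, and all numbers and names are made up): the squared Pearson correlation between what the earlier utilities predict for each Calibration Concept and the purchase-intent rating the respondent actually gave.

```python
# Rough sketch only -- not the actual ACA estimation code.
# For one respondent, assume we already have a predicted score for each
# Calibration Concept (built from the priors + Conjoint Pairs utilities)
# and the 0-100 purchase-intent rating given to that same concept.
from statistics import correlation  # Python 3.10+

predicted = [4.2, 1.1, 3.5, 2.0, 4.8]   # hypothetical predicted concept scores
stated    = [70,  20,  55,  35,  90]    # hypothetical purchase-intent ratings

r = correlation(predicted, stated)      # Pearson r across the concepts
r_squared = r ** 2                      # the "correlation coefficient" reported
print(round(r_squared, 3))
```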

But, since Calibration Concepts are really just a few observations (typically 4 to 7 ratings of purchase intent for those last product concepts shown to each respondent), if the respondent got fatigued, bored, or otherwise didn't engage very well in those last few purchase intent questions, they could get a very bad R-squared reported.

And, it turns out that respondents were sometimes pretty bad at giving responses in those last few Calibration Concept questions (the 100-point purchase intent scale).  So, the R-squared reported by the default OLS estimation was not always a very good indicator of whether a respondent overall was bad or not.  Sorry.

But, if you are using the superior ACA/HB utility estimation routine (an add-on tool from us), the fit that is reported is an R-squared from the HB regression across the Conjoint Pairs (the core part of the ACA survey).  I suspect that the R-squared reported via ACA/HB is probably a bit more reliable than the R-squared reported by the OLS routine for judging the quality of respondents.

So, it shouldn't surprise you that there are no rules of thumb on an appropriate cutoff for throwing out an ACA respondent.  In general, I would not use just one criterion for that decision.  But, maybe you could look at four criteria:

1.  Total time to complete the interview (look for speeders especially)
2.  Straightlining or other bad behavior in non-ACA questions
3.  Straightlining in the ACA Conjoint Pairs questions (you'll need to export the data to examine this per respondent; see the sketch after this list)
4.  The R-squared, preferably as reported by ACA/HB...or you might decide to use the R-squared reported by the default ACA OLS estimation.
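For point 3, the check can be as simple as flagging respondents whose answers to the Pairs questions barely vary.  Here is a rough sketch, assuming you have exported each respondent's Conjoint Pairs answers (on the usual 9-point graded scale) into a list per respondent; the data and the cutoff below are made up, not a recommendation:

```python
# Rough sketch of a straightlining flag for exported Conjoint Pairs answers.
# Assumes answers are on the usual 9-point graded scale; the example data
# and the cutoff below are hypothetical.
from statistics import pstdev

pairs_answers = {
    "resp_001": [5, 5, 5, 5, 5, 5, 5, 5],   # same answer every time -> suspect
    "resp_002": [2, 7, 4, 8, 1, 6, 3, 9],   # plenty of variation
}

CUTOFF = 0.5  # hypothetical threshold on the standard deviation of answers

for resp_id, answers in pairs_answers.items():
    sd = pstdev(answers)
    if sd < CUTOFF:
        print(f"{resp_id}: possible straightliner (sd = {sd:.2f})")
```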
answered May 9, 2014 by Bryan Orme Platinum Sawtooth Software, Inc. (128,265 points)
Dear Bryan,

thanks a lot for your detailed reply - it definitely helps to interpret the values! I can see now that I should not base my evaluation of the sample on the R-squareds alone.

However, I'm still not completely sure about the calculation of the R-squareds themselves (in the standard OLS estimation of utilities). If I understand correctly, it's a Pearson correlation between the purchase-intent scores and the sum of utilities per calibration concept. So, if I had five calibration concepts with six attributes each, I would sum up the final utilities for the six attribute levels included in each calibration concept and correlate these totals with the purchase-intent scores, which would imply a correlation based on five pairs of values (and then square the result to get the variance explained). Right?
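In code, my understanding would amount to something like the sketch below (all numbers made up, just to check the mechanics):

```python
# Made-up example: five calibration concepts, six attributes each.
# For every concept, sum the final utilities of the six attribute levels it
# contains, then correlate those totals with the purchase-intent ratings
# across the five concepts and square the result.
from statistics import correlation  # Python 3.10+

# Hypothetical final utilities of the six levels shown in each concept
level_utilities_per_concept = [
    [0.8, 0.3, 1.1, 0.2, 0.5, 0.9],
    [0.1, 0.4, 0.2, 0.6, 0.3, 0.1],
    [0.9, 0.7, 0.8, 0.5, 1.0, 0.6],
    [0.2, 0.1, 0.3, 0.2, 0.4, 0.3],
    [0.6, 0.5, 0.7, 0.9, 0.8, 1.0],
]
purchase_intent = [65, 25, 80, 20, 90]  # hypothetical 0-100 ratings

totals = [sum(levels) for levels in level_utilities_per_concept]
r_squared = correlation(totals, purchase_intent) ** 2  # five pairs of values
print(round(r_squared, 3))
```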

Sorry to bother you with these details - I just feel much more comfortable in interpreting results when I know how the values come about!

Thanks again,
Daniel
...