Indeed, ACA reports a "correlation coefficient", which is really the R-squared (agreement) between the utilities estimated in the first sections of ACA (self-explicated priors + conjoint pairs) and the ratings made in the final section (Calibration Concepts). (This assumes you are using the default OLS utility estimation routine, rather than the better ACA/HB routine.)

But, since the Calibration Concepts provide only a few observations (typically 4 to 7 purchase-intent ratings for the last product concepts shown to each respondent), a respondent who got fatigued, bored, or otherwise didn't engage well in those last few questions could get a very bad reported R-squared.

And, it turns out that respondents were sometimes pretty bad at giving responses to those last few Calibration Concept questions (the 100-point purchase intent scale). So, the R-squared reported by the default OLS estimation was not always a very good indicator of whether a respondent was overall a bad respondent. Sorry.

But, if you are using the superior ACA/HB utility estimation routine (an add-on tool from us), the fit that is reported is an R-squared from the HB regression across the Conjoint Pairs (the core part of the ACA survey). I suspect that the R-squared reported via ACA/HB is probably a bit more reliable than the R-squared reported by the OLS routine for judging the quality of respondents.

So, it shouldn't surprise you that there are no rules of thumb on interpreting an appropriate cutoff for throwing out an ACA respondent. In general, I would not use just one criterion for throwing out an ACA respondent. But, maybe you could look at four criteria:

1. Total time to complete the interview (look for speeders especially)

2. Straightlining or other bad behavior in non-ACA questions

3. Straightlining in the ACA Conjoint Pairs questions (you'll need to export the data to examine this per respondent)

4. The R-squared, preferably as reported by ACA/HB...or you might decide to use the R-squared reported by the default ACA OLS estimation.
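Criteria 1 and 3 above can be checked mechanically once you have exported the data. Here is a minimal sketch in Python; the data layout, column names, and cutoff values are all made up for illustration (they are not from the ACA export format), so adapt them to your own file:

```python
from statistics import median

# Hypothetical exported data: per-respondent completion time (seconds)
# and the graded responses (e.g., a 1-9 scale) from the ACA Conjoint
# Pairs section. Respondent IDs and values are illustrative only.
respondents = {
    "R001": {"seconds": 1240, "pairs": [3, 7, 2, 8, 5, 6, 4, 9]},
    "R002": {"seconds": 310,  "pairs": [5, 5, 5, 5, 5, 5, 5, 5]},
}

def flag_respondent(rec, speed_cutoff, max_repeat_share=0.8):
    """Return a list of quality flags for one respondent record."""
    flags = []
    if rec["seconds"] < speed_cutoff:
        flags.append("speeder")
    # Straightlining: one response value dominates the Conjoint Pairs
    pairs = rec["pairs"]
    top_share = max(pairs.count(v) for v in set(pairs)) / len(pairs)
    if top_share >= max_repeat_share:
        flags.append("straightliner")
    return flags

# Illustrative speeder rule: faster than half the median completion time
cutoff = median(r["seconds"] for r in respondents.values()) / 2
for rid, rec in respondents.items():
    print(rid, flag_respondent(rec, cutoff) or "ok")
```

The 80% repeat-share threshold and the half-median speed rule are arbitrary starting points; you would tune both against your own sample before excluding anyone.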

Thanks a lot for your detailed reply - it definitely helps me interpret the values! I can see now that I should not base my evaluation of the sample on the R-squareds alone.

However, I'm still not completely sure about the calculation of the R-squareds themselves (in standard OLS estimation of utilities). If I get it right, it's a Pearson correlation between the purchase intent scores and the sum of utilities per calibration concept. Thus, if I had five calibration concepts with six attributes each, I would sum up the final utilities for the six attribute levels included in each of the calibration concepts and correlate them with the purchase intent scores, which would imply a correlation based on five pairs of values (then square the result to get the variance explained). Right?
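If the mechanism is as described above, the arithmetic can be sketched in a few lines of Python. The summed utilities and purchase-intent ratings below are invented numbers for five hypothetical calibration concepts, purely to show the shape of the calculation (this is not claimed to reproduce ACA's internal routine):

```python
from math import sqrt

# Hypothetical summed utilities: for each of five calibration concepts,
# the final utilities of its six included attribute levels have already
# been added together. Purchase intent is on the 100-point scale.
concept_utilities = [2.1, 0.4, 1.7, 3.0, 0.9]
purchase_intent   = [55,  20,  40,  80,  30]

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(concept_utilities, purchase_intent)
r_squared = r ** 2  # share of variance in intent explained, per the question
print(round(r_squared, 3))
```

With only five pairs of values, a single careless purchase-intent rating moves this R-squared a lot, which is exactly why it is a noisy quality indicator.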

Sorry to bother you with these details - I just feel much more comfortable in interpreting results when I know how the values come about!

Thanks again,

Daniel