The R-Sq shown in ACA_OLS comes from a very simple regression with just one independent variable and one dependent variable. Say the respondent was given just 3 Calibration Concept tasks. We first compute OLS utilities across all the ACARAT and ACAPAIR questions, creating individual-level utilities (we don't report the r-squared for that regression step). Next, we compute the total utility (from that first step) of each concept shown in the Calibration Concept step and put those values in one column (three rows in this case, since I'm referring to just 3 Calibration Concept questions). Then we apply a logit transform (a log-type transformation of the respondent's answers; I won't give details here) to the Calibration Concept ratings and place those values in a second column as the dependent variable. Finally, we regress the dependent variable on the independent variable (also fitting the intercept). It's not uncommon to get an R-squared that rounds to 1.0 for that regression (and historically we have multiplied it by 100 when reporting the statistic).
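The steps above can be sketched in a few lines of Python. The utilities, ratings, and the clipping constant in the logit transform are all hypothetical illustrations (Sawtooth's exact transformation details are not given in the text):

```python
import math

def logit(rating, eps=0.025):
    # Logit transform of a 0-100 purchase-likelihood rating.
    # The rating is rescaled to (0, 1) and clipped away from the
    # endpoints so the log is defined; eps is an assumed constant.
    p = min(max(rating / 100.0, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def simple_ols(x, y):
    # One-predictor OLS with a fitted intercept.
    # Returns (slope, intercept, r_squared).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

# Three calibration concepts: total ACA utility of each concept
# (independent variable) and the respondent's purchase-likelihood
# rating on a 0-100 scale (hypothetical numbers).
total_utils = [1.8, 0.2, -1.1]
ratings = [85.0, 55.0, 20.0]

slope, intercept, r2 = simple_ols(total_utils,
                                  [logit(r) for r in ratings])
print(slope, intercept, r2)
```

With only three roughly monotone data points, the r-squared from this little regression lands very close to 1.0, which is why reported values so often round to a perfect fit.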

Respondents who get negative betas in this last calibration step are marked with a 0 r-squared (even though the actual r-squared was certainly greater than zero). We do this to flag them, because their calibration concept responses follow a pattern opposite to what their earlier ACA responses would predict. The more calibration concepts you ask (say, 6 or more), the less likely it is that a respondent's reported r-sq will be 100. But typically quite a few respondents just don't understand the Calibration Concept step in ACA and give illogical answers.
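That flagging rule is simple enough to state as code. This is just my reading of the text (the function name and the round-to-integer reporting are assumptions):

```python
def reported_rsq(slope, r2):
    # If the slope (beta) from the calibration regression is negative,
    # the respondent's calibration answers run opposite to their ACA
    # utilities, so the reported fit is forced to 0 as a flag.
    # Otherwise report the r-squared multiplied by 100.
    return 0 if slope < 0 else round(r2 * 100)

print(reported_rsq(-0.4, 0.62))  # flagged respondent
print(reported_rsq(1.1, 0.98))   # well-behaved respondent
```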

I'm not a big fan of Calibration Concept questions for ACA. The main reason you need them is to predict purchase likelihoods in the market simulator (an "iffy" procedure, since respondents are so bad at guesstimating their purchase likelihoods). My preferred practice in general is to skip the calibration concepts and spend those extra moments asking a few more Pairs questions. Then I use ACA/HB estimation to obtain better utilities than OLS. Also, the "Exponent" typically needs to be tuned a bit higher than 1.0 within the market simulator to give better-fitting probability predictions of choice shares. (It's good to have two or three holdout choice tasks within the same survey, so you can tune the magnitude of the Exponent to best fit the choice scenarios.)
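Tuning the Exponent against holdout tasks can be sketched as a small grid search. The utilities and observed holdout shares below are made-up numbers, and the share-of-preference (logit) rule is the standard one, not a verbatim description of any particular simulator:

```python
import math

def logit_shares(utils, exponent):
    # Share-of-preference rule: exponentiate the scaled total
    # utilities and normalize so the shares sum to 1.
    exps = [math.exp(exponent * u) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical holdout task: total utilities of three alternatives
# and the choice shares actually observed in the holdout questions.
utils = [1.2, 0.5, -0.3]
observed = [0.69, 0.24, 0.07]

def sse(exponent):
    # Sum of squared errors between predicted and observed shares.
    pred = logit_shares(utils, exponent)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

# Grid-search exponents from 0.1 to 4.0 for the best holdout fit.
best = min((round(k * 0.1, 1) for k in range(1, 41)), key=sse)
print(best)
```

In this toy example the best-fitting Exponent comes out above 1.0, mirroring the point that the default scale factor usually needs to be turned up to match real choice shares.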

That is exactly the answer to my question!!!

I did what you recommended: I downloaded my data again from the web server, put them into SSI Web, and then ran an ACA/HB estimation instead of OLS.

Am I right that I can handle these individual utilities the same way as those estimated with OLS?

I put these individual utilities into SPSS for a linear regression. So the output of ACA/HB is basically the same as with OLS, just a bit more precise. Is that right?