Different Average Utilities from .HBU File and SMRT

Hi,
I am running a CBC/HB study and I get the following average utilities when I average the individual utilities in the .HBU file:

Attribute 1:   5.94    0.65    -6.59
Attribute 2:   4.42   -0.44    -3.98
None option:  11.55

However, when I run SMRT (using HB estimation) I get the following average utilities:

Attribute 1:   60.13    1.43    -61.55
Attribute 2:   42.90   -7.48    -35.42
None option: -218.06

The point is: how can the None option end up with such a different utility? (This has never happened before!) As you can see, if I rank the concepts, in the first case the None has the highest utility (11.55 > 5.94 + 4.42), but in the second it has the lowest of all possible combinations.
The transformation that SMRT applies (I always thought it simply multiplied by a constant) should give the same relative positions, right...?

In this particular study the None option is critical...

Thank you in advance
asked Dec 9, 2011 by anonymous
edited Dec 9, 2011 by Bahadir Ozkurt
218 > 60 + 42, isn't it?
Sorry! It is -218.06!
Ah, corrected it now.

That indeed sounds weird. Although I do not use SMRT, I would expect it to produce results that are in line. Were there any constraints, etc., in one analysis that you may not have set up in SMRT?
I included constraints in HB estimation but I don't need to include them again in SMRT.
What is in fact even weirder is that the relative importances and shares of preference are fine (they come out exactly the same whether I use the raw HB data or SMRT). Only the average utilities seem to be off...
I'd need to take a look at your .hbu file and import it into SMRT to see how SMRT is rescaling (Zero-Centered Diffs) and make sure what's going on. The rescaling of the None parameter should be done by multiplying each respondent's utilities by a constant "expansion factor."
Hi Bryan,
Thank you.
I've sent the file to the support email.

1 Answer

0 votes
The problem had to do with rescaling the utilities of "bad" respondents. This was the first time it happened to me, but I think this is an important validation to run in every study where rescaling is used.

Here are the steps that I used to find it:

1 – Rescale the individual utilities from the .hbu file to reproduce the utilities used in SMRT, using the Zero-Centered Diffs rescaling method:

Calculate a Rescale Ratio for each respondent: Rescale Ratio = (100 × number of attributes) / (sum of the attribute utility ranges), not counting the None option.

Multiply each level utility (plus the None option) by the Rescale Ratio (see the sketch after step 2).

2 – Examine the rescaled utilities of each respondent.

The problem was two respondents whose level utilities were extremely close together, giving a sum of attribute ranges below 1. This produced a huge Rescale Ratio which, multiplied by a negative None utility, distorted the average utility to absurd values. It happened because these two respondents had always chosen the same answer in every choice task (clearly very bad cases, which was also confirmed by their low RLH).
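Here is a minimal Python sketch of that rescaling, assuming the .hbu utilities have already been parsed into a per-respondent structure (the data layout and function name are hypothetical, not Sawtooth's own code):

# A minimal sketch of the Zero-Centered Diffs rescaling described above.
# The per-respondent data layout is an assumption; the .hbu file would
# need to be parsed into this shape first.

def zero_centered_diffs(respondent):
    """Rescale one respondent's raw HB utilities to Zero-Centered Diffs."""
    attributes = respondent["attributes"]   # e.g. {"Attribute 1": [u1, u2, u3], ...}
    none_utility = respondent["none"]

    # Sum of utility ranges across attributes (the None option is excluded).
    sum_of_ranges = sum(max(levels) - min(levels) for levels in attributes.values())

    # Rescale Ratio = 100 * number of attributes / sum of ranges.
    ratio = 100.0 * len(attributes) / sum_of_ranges

    return {
        "attributes": {name: [u * ratio for u in levels]
                       for name, levels in attributes.items()},
        "none": none_utility * ratio,
        "ratio": ratio,
    }

# Example with made-up raw utilities for one respondent.
raw = {"attributes": {"Attribute 1": [0.9, 0.1, -1.0],
                      "Attribute 2": [0.7, -0.1, -0.6]},
       "none": 1.5}
print(zero_centered_diffs(raw))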


In conclusion, to avoid this type of problem, one should first check whether each respondent's sum of attribute ranges is below or above 1. Respondents below 1 should probably be excluded from the analysis (a quick check is sketched below).
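For example, a quick validation pass along these lines (same hypothetical data layout as the rescaling sketch above) would flag such respondents before averaging:

# Flag respondents whose utilities are nearly flat, i.e. whose sum of
# attribute ranges falls below the chosen threshold (1 by default).
def flag_flat_respondents(respondents, threshold=1.0):
    flagged = []
    for resp_id, resp in respondents.items():
        sum_of_ranges = sum(max(levels) - min(levels)
                            for levels in resp["attributes"].values())
        if sum_of_ranges < threshold:
            flagged.append((resp_id, sum_of_ranges))
    return flagged

# Respondents returned here are candidates for exclusion; in this study
# they were also the cases with a very low RLH.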
answered Dec 14, 2011 by Marta (140 points)