
Different results in hierarchical Bayes and OLS with CVA

I am doing an analysis of 210 individuals who completed a survey that included 20 CVA single-object rating exercises on a 9-point scale. There are 5 attributes with 3-4 levels per attribute. I calculated individual importance scores using both HB and OLS. I am most interested in the individual importance scores, but when I first looked at the average importance scores, I saw the results below: the order of importance changes, and most scores differ by at least 3 points. Does this mean that OLS is really no good and I need to use HB? Or is something else going on? Or, since I am most interested in individual scores, should I just focus on HB anyway? I am not that familiar with HB and have been reading the documentation in the technical papers, but I have not found a direct answer as to why the two should be so different, since it seems my sample size should be sufficient for OLS to work okay. Any advice would be appreciated!

Attribute                            CVA/HB Run Summary      CVA OLS Run Summary
Out of pocket costs                  27.11772 (16.98366)     26.16202 (21.35036)
Reduction in mortality               16.99001 (8.74393)      13.94394 (11.75825)
Health care provider involvement     24.68593 (12.94929)     28.28493 (15.60481)
False positive rate                  15.98017 (10.17522)     12.87093 (12.84357)
Access                               15.22617 (7.28184)      18.73818 (9.73036)
asked May 14, 2012 by anonymous
retagged Sep 1, 2016 by Walter Williams

1 Answer

I don't like focusing one's attention on importance scores, which are not as useful or meaningful as part-worth utilities or, especially, market simulations.

Importances are kind of a strange calculation, based on the maximum difference observed between the best and worst levels of an attribute at the individual level.  But if an attribute has little impact for a respondent, then just random noise in the utilities of that attribute's levels will drive a positive importance score.  "Reversals" (levels out of order relative to expected rational preferences) still count as positive weight toward the importance score.
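To make that calculation concrete, here is a minimal sketch of the conventional range-based importance formula (the function and example utilities are illustrative, not Sawtooth Software code):

```python
# Importance of an attribute = (max - min part-worth within the attribute),
# normalized so the importances sum to 100 for each respondent.

def importances(partworths):
    """partworths: dict mapping attribute name -> list of level utilities
    for one respondent. Returns importance scores summing to 100."""
    ranges = {a: max(u) - min(u) for a, u in partworths.items()}
    total = sum(ranges.values())
    return {a: 100.0 * r / total for a, r in ranges.items()}

# Even a "reversal" (levels out of expected order) yields a positive range,
# so noise in an unimportant attribute still inflates its importance score:
respondent = {
    "price": [1.2, 0.4, -1.6],   # clear, rational ordering
    "color": [0.3, -0.2, 0.1],   # near-zero, noisy utilities (reversal)
}
imp = importances(respondent)
```

Note that "color" still receives a nonzero importance purely from noise, which is the weakness described above.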

HB tends to reduce reversals at the individual-level, and should give a cleaner view of attribute impact, in my opinion.

But you could also estimate your utilities by imposing utility constraints (monotonicity constraints) with either OLS or HB, and that should also sharpen the importance calculation.

Again, I'm not a fan of importances.

And the differences you are noting seem relatively small, to my eyeball.

Attributes 1 and 3 seem about tied, and attributes 2, 4, and 5 seem about tied.  So, there could be some trading around of rank orders due to minor changes in the absolute importance scores.
answered May 14, 2012 by Bryan Orme Platinum Sawtooth Software, Inc. (169,815 points)