Why do I get different results when re-running the HB analysis?


I have an ACBC analysis with 161 respondents. Every time I run the HB analysis I get different importances and utilities. Is this variation normal? For example, for one attribute I sometimes get an overall importance of 1.64 and sometimes 0.04. If I segment into male respondents (n=129) and female respondents (n=32), I get importances for the attribute of 1.62 (male) and 0.00162 (female), which would give an average of 1.3.

Is this normal? I am relatively new to conjoint analysis, so please forgive me if the answer is obvious.

Thank you very much for your help!
asked Oct 31, 2015 by Matt

1 Answer

0 votes
Dear Matt,

If you use the same Starting Seed (this is a setting on the "Estimation Settings" dialog within the SSI Web software, in the HB estimation area), then you should obtain the same result every time (as long as all the other settings in your HB run are the same).  A Starting Seed needs to be an integer such as 1, 2, 3, etc.

But if you use a Starting Seed = 0, then this tells the software to use a random seed each time, and your results will be slightly different each time.
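To illustrate the seed behavior, here is a toy Python sketch (not the software's actual code; `hb_draws` is a made-up stand-in for an MCMC run):

```python
import random

def hb_draws(seed, n=5):
    """Toy stand-in for an HB/MCMC run: a fixed seed gives identical draws."""
    # Seed 0 means "pick a random seed", mirroring the Starting Seed convention.
    rng = random.Random(seed if seed != 0 else None)
    return [round(rng.gauss(0, 1), 4) for _ in range(n)]

print(hb_draws(seed=1) == hb_draws(seed=1))  # True: same seed, same results
```

With seed 0, each call draws from a fresh random state, so repeated runs will generally differ.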

Importance Scores are scaled on a 0 to 100 scale, so the difference in importance score between 1.64 and 0.04 is not very large in absolute magnitude.  It seems to show an attribute with very little or no importance.  
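For reference, importance scores are typically computed from part-worth utilities as each attribute's utility range divided by the sum of ranges across all attributes, times 100. A minimal sketch (the part-worth numbers below are made up for illustration):

```python
def importances(partworths):
    """Importance = attribute's utility range as a share of the total range, x100."""
    ranges = {attr: max(u) - min(u) for attr, u in partworths.items()}
    total = sum(ranges.values())
    return {attr: 100 * r / total for attr, r in ranges.items()}

# Hypothetical zero-centered part-worths for three attributes.
pw = {
    "brand": [0.8, -0.2, -0.6],
    "price": [1.5, 0.0, -1.5],
    "speed": [0.02, -0.02],   # tiny utility range -> importance near zero
}
imp = importances(pw)  # the importances sum to 100; "speed" lands below 1
```

An attribute with importance around 1 or below, as in your example, contributes almost nothing to choices relative to the other attributes.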

Let me know if for some reason using the same starting seed (1 or larger) and all the same settings in the HB dialogs leads to different results each time.  I don't think this should be happening.
answered Oct 31, 2015 by Bryan Orme Platinum Sawtooth Software, Inc. (132,290 points)
The starting seed is set to 1, and I still appear to get slightly different results every time I rerun the HB analysis. It also happens for attributes with larger importances, e.g. 41% instead of 39%. In one case I get 40.40% for the whole sample, but if I segment into male and female, both group importances are higher than the overall importance. Could I maybe send you a file so you can check whether some of the settings are wrong?
Please tell me what version of the SSI Web software you are using.  I assume you are estimating HB utilities for ACBC from within the SSI Web interface.

Then, we will double-check to make sure the seeding is working right.  It is behaving for you as if a different random starting seed were being used each time.

Regarding the tabulation of male vs. female, where the importances for both groups of respondents are bigger than the importance for the total sample together, which reporting function are you using to see that?

Remember, if you compute HB only within a subgroup (e.g., run HB only for females) and then run HB only for males, there will be some randomness, but also a different upper-level model (population parameters) introduced in the process compared to running HB on the total sample.
I am using SSI Web 8.3.10; should I upgrade? And how can I check whether it uses a random starting seed every time?

For the male vs. female example, I used the hb_reports Excel file automatically created by the program.

What do you mean by a different upper level?

Thank you very much for your help!
Dear Matt,

I've talked to the main software developer for ACBC and HB estimation within ACBC and we are stuck.  We aren't aware of any bug that would cause this behavior.

Here's another thought: when you re-run HB analysis, the software will tell you that it has already done a certain number of iterations and will ask whether you want to restart the HB run using those previous iterations as a starting point.  If you say "yes", then indeed you will get a slightly different result each time you re-run HB estimation (because you will be starting from a different point each time).  But if you say "no", then the HB estimation will start from the beginning each time, using the specific starting seed specified in the HB interface.

But, if the issue directly above is not the case, then:

So we can investigate this further and help you, please zip up your SSI Web Project Folder contents and email it to me at "bryan at Sawtooth Software dot com".  

With your project files, we'll see if we can reproduce the error where you report that the utilities are different each time you run the HB analysis.
Dear Bryan,

thank you very much for your answer! I reran the conjoint analysis several times now, starting from the beginning each time, and I now get the same results on every run, which means this question is solved.

But what did you mean by a different upper level (population parameters) and randomness for the segmentation? How can I describe this in a statistically correct way when I try to interpret and compare the results for different segments?

Thank you very much again for your help!
I just meant that you cannot run HB on the whole sample and expect it to match Males Only HB appended below Females Only HB.

If you run males and females separately, the male respondents are influenced (via Bayesian smoothing toward the sample means and covariances) only by the male respondents' information, whereas females run separately in HB are influenced toward the means and covariances for females.  Running HB on the entire sample means each respondent is influenced by the overall means and covariances for the total sample.  So you can see how the results could be slightly different.
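A highly simplified numeric sketch of that smoothing (the real HB model shrinks whole utility vectors toward a multivariate normal upper-level distribution with an estimated covariance matrix; the function, weights, and numbers here are invented purely for illustration):

```python
def shrunk_estimate(individual_mean, n_obs, pop_mean, prior_weight=5.0):
    """Weighted average of a respondent's own estimate and the upper-level mean."""
    w = n_obs / (n_obs + prior_weight)
    return w * individual_mean + (1 - w) * pop_mean

# The same (hypothetical) female respondent, smoothed toward two upper levels:
pooled  = shrunk_estimate(0.10, n_obs=8, pop_mean=1.30)  # total-sample mean
females = shrunk_estimate(0.10, n_obs=8, pop_mean=0.05)  # females-only mean
# pooled > females: a different upper level pulls her estimate a different way
```

With few observations per respondent, the upper-level mean carries real weight, which is why subgroup-only runs can differ noticeably from a pooled run.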