Mean betas in HB do not stabilize after 1,000,000 iterations

Hello everybody,


I conducted an ACBC study with 65 respondents. The ACBC included 7 attributes plus a summed price attribute. Six of the attributes had 3 to 8 levels, all of which were included in the conjoint.
However, the levels of the 7th attribute were defined by a constructed list: each respondent saw 5 of its 11 possible levels in the conjoint, based on answers to other questions.

Unfortunately, one of the 11 possible levels wasn't included in the conjoint for any respondent, so no data was collected about it at all.

Conducting the HB analysis for this ACBC now leads to problems. My understanding is that a good estimation can be recognized by relatively constant mean betas for each parameter after an appropriate number of iterations.
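To illustrate what I'm checking, here is a minimal sketch of how I eyeball that stability: plotting the running mean of each parameter's draws. This assumes the per-iteration mean betas have been exported to a CSV; the file name and column layout are hypothetical.

```python
# Minimal sketch, assuming the per-iteration mean betas were exported to a
# CSV with one row per saved iteration and one column per parameter
# (file name and layout are hypothetical, not Sawtooth's actual output).
import pandas as pd
import matplotlib.pyplot as plt

draws = pd.read_csv("mean_betas_by_iteration.csv")

# Cumulative (running) mean per parameter: a stable estimation should show
# these curves flattening out; a curve that keeps drifting is the worry.
running_mean = draws.expanding().mean()
running_mean.plot(legend=False, title="Running mean of mean betas")
plt.xlabel("Iteration")
plt.ylabel("Running mean")
plt.show()
```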

The corresponding graph for an HB estimation with 20,000 iterations (the recommended minimum in the SSI help documentation) looked acceptable to me at first.

However, out of curiosity I ran another analysis with 500,000 iterations, and that graph looks highly concerning to me. One of the lines (I assume it's the one belonging to the 7th attribute described above) fluctuates strongly, even crossing other lines.

Even conducting 1,000,000 iterations did not solve the problem. In the estimation settings I chose "Inferior to included levels" for the missing levels.

Do you have experience with this kind of ACBC result? Any hint on how to deal with the 7th attribute would be highly appreciated.

Two possible solutions come to my mind:
1. Set the missing level to "Unavailable". Here I am struggling with what effects this would have and with the choice of an appropriate "Unavailable" part-worth value.
2. Is there any way to exclude the level from the ACBC/HB estimation entirely?


Best regards,

Jonathan
asked Sep 10, 2018 by JoGu (250 points)

1 Answer

0 votes
MCMC estimation is said to 'converge in distribution' to the posterior distribution, which has noise in it, so the chain never settles on one specific value. That in itself is not an indication of non-convergence, and seeing noise is not inherently a problem.

Non-convergence takes the form of an upward or downward trend, and that would be a sign of something problematic about the model. Formally, what we're looking for in convergence is a "stationary distribution" (https://en.wikipedia.org/wiki/Stationary_process).
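As a rough screen for such a trend, one could compare the means of early and late segments of a parameter's chain, in the spirit of the Geweke diagnostic. Here is a minimal sketch; the chain is simulated for illustration (in practice you would load the exported draws for the suspect parameter), and the z-score ignores autocorrelation, so treat it as a rough screen rather than a formal test.

```python
# Crude stationarity check on one parameter's chain of draws: compare the
# means of the first and last thirds. A large standardized difference
# suggests a trend, i.e. possible non-convergence.
import numpy as np

rng = np.random.default_rng(0)
chain = rng.normal(loc=0.8, scale=0.3, size=20_000)  # stand-in for real draws

n = len(chain)
first, last = chain[: n // 3], chain[-(n // 3):]

# Standard error of the difference in segment means (ignores autocorrelation,
# so the z-score is only a rough indicator).
se = np.sqrt(first.var(ddof=1) / len(first) + last.var(ddof=1) / len(last))
z = (first.mean() - last.mean()) / se
print(f"segment means: {first.mean():.3f} vs {last.mean():.3f}, z = {z:.2f}")
```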
answered Sep 10, 2018 by Kenneth Fairchild Bronze Sawtooth Software, Inc. (3,720 points)
...