The CBC logit estimation uses a multinomial logit model(?). I was wondering how the standard errors are clustered.

What is the equivalent in HB? And how are standard errors clustered there?

Thanks in advance and kind regards!

Could you please clarify what you mean by "clustered"? Do you just mean "calculated"? Or, are you referring to something else?

I am referring to how standard errors are calculated.

Usually it is assumed that the observations are independent. Since this assumption is very strict (and not realistic in case of a CBC?), standard errors are sometimes "clustered" so that they allow for intragroup correlation; that is, the observations are independent across groups (clusters) but not necessarily within groups.

I assume a "natural" grouping would be to cluster standard errors within one respondent (who makes several choices), and that this is effectively done in HB estimation(?). But how is this handled in SSI's logit estimation? And, e.g., how do you account for potential heteroscedasticity?
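To make the distinction concrete, here is a minimal numpy sketch of cluster-robust ("sandwich") standard errors for a binary logit, clustering by respondent. This is an illustration of the general technique being asked about, not Sawtooth Software's implementation; all function names and the data layout are my own.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Binary logit MLE via Newton-Raphson. Returns (beta, Hessian of -loglik)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])        # observed information matrix
        g = X.T @ (y - p)                 # score vector
        beta = beta + np.linalg.solve(H, g)
    return beta, H

def clustered_se(X, y, beta, H, groups):
    """Cluster-robust SEs: scores are summed within each cluster (respondent)
    before the outer product, so within-cluster correlation is allowed."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    u = X * (y - p)[:, None]              # per-observation score contributions
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = u[groups == g].sum(axis=0)    # cluster score sum
        meat += np.outer(s, s)
    Hinv = np.linalg.inv(H)
    V = Hinv @ meat @ Hinv                # sandwich variance estimate
    return np.sqrt(np.diag(V))

def classical_se(H):
    """Conventional SEs assuming independent observations, for comparison."""
    return np.sqrt(np.diag(np.linalg.inv(H)))
```

When respondents' choices are positively correlated (e.g. through respondent-level taste heterogeneity), the clustered SEs will typically be larger than the classical ones.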

We assume independence in the choice tasks for our pooled logit. We don't cluster like you mention.

For HB, because most users of our software (practitioners in industry) are accustomed to Frequentist statistics, we take the simple route of computing the standard deviation across the posterior means of the draws for each respondent. In other words, we create a single point estimate vector of betas for each respondent. This is NOT the Bayesian way of doing things, for sure. It probably understates the true standard errors.
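In numpy terms, that simple summary might look like the sketch below: collapse each respondent's post-convergence draws to a posterior-mean point estimate, then take the standard deviation across those point estimates. The array layout and function names here are assumptions for illustration, not the software's actual internals.

```python
import numpy as np

def respondent_point_estimates(draws):
    """Collapse MCMC draws to one beta vector per respondent.

    draws: array of shape (n_respondents, n_draws, n_params),
           post-convergence beta draws (hypothetical layout).
    """
    return draws.mean(axis=1)             # posterior mean per respondent

def frequentist_style_sd(draws):
    """Standard deviation across respondents' point estimates, i.e. the
    simple Frequentist-style summary described above. As noted, this
    likely understates the true posterior uncertainty, since the
    within-respondent draw variation is averaged away first."""
    points = respondent_point_estimates(draws)
    return points.std(axis=0, ddof=1)
```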

However, our HB software does produce a variance-covariance matrix for the alphas (the estimates of the population means), which would be more appropriate to examine for Bayesians. Better yet, you can run histograms and compute deciles on the draws of alpha (after convergence has been assumed), which are made available to you in an alpha draws file by our software.
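Summarizing those alpha draws is straightforward once they are loaded into an array. A minimal sketch, assuming the draws have been read into a (n_draws, n_params) array (how you parse the alpha draws file is up to you; the layout here is an assumption):

```python
import numpy as np

def alpha_deciles(alpha_draws):
    """Deciles (10th..90th percentiles) of the posterior draws of the
    population means. alpha_draws: (n_draws, n_params) array of
    post-convergence alpha draws (assumed layout)."""
    qs = np.arange(10, 100, 10)
    return np.percentile(alpha_draws, qs, axis=0)

def alpha_credible_interval(alpha_draws, level=0.95):
    """Equal-tailed credible interval per parameter from the draws."""
    tail = 100 * (1.0 - level) / 2.0
    lo = np.percentile(alpha_draws, tail, axis=0)
    hi = np.percentile(alpha_draws, 100 - tail, axis=0)
    return lo, hi
```

Histograms of each column of the same array (e.g. with matplotlib's `hist`) give the visual check mentioned above.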

...