Minimum number of tasks per respondent for HB estimation

Dear Sawtooth Software Team,

I have read through many of your instructions and technical papers, but I couldn't find an answer to what the minimum number of tasks per respondent is for reliable individual-level (HB) estimation. I have seen a maximum of 20 recommended for CBC and (I'm not sure whether from Sawtooth or from a journal paper) a maximum of 16 for MBC. I have also seen your rules of thumb for CBC and MBC regarding sample size, where tasks per respondent (TPR) also shows up in the formula. I am, however, specifically interested in a rule, or at least a rough pointer, for the TPR required to make HB estimation feasible, for both CBC and MBC. Do you have any insights on this?

Best Regards
Matthias Laube
asked Jul 31, 2013 by anonymous
retagged Jul 31, 2013 by Walter Williams

1 Answer

0 votes
Sorry, this is a tough, tough question.  It depends on so many things, most especially in CBC:

- Number of attributes and levels per attribute in your study
- How many concepts you are showing per task
- Whether standard None is used (and how often respondents are clicking the None option) or Dual-Response None
- Whether first choice, chip allocation, or best-worst question format is used
- Whether respondents are answering using simple decision rules or more complex (compensatory tradeoff) decision rules
- And, it depends at least a little bit on sample size (since HB "borrows" information across respondents to stabilize estimates for each individual).

Please don't think there is some maximum of 20 recommended for CBC and 16 for MBC.  While those may seem like nice guidelines, it really depends on the complexity of the survey, respondent motivation (incentives), etc.

Also, please be very careful about the "rule of thumb" involving "NTC >= 500".  That formula was developed prior to HB estimation, when analysis was conducted only with aggregate counting or aggregate logit.  So, that rule of thumb for sample size and number of tasks per respondent doesn't apply as directly to our modern-day situations involving HB.
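For reference, that rule of thumb is commonly written as n*t*a / c >= 500, where n is the number of respondents, t the tasks per respondent, a the concepts shown per task, and c the largest number of levels in any one attribute.  A rough Python sketch of the arithmetic (the function name and the example numbers, taken loosely from the follow-up comment below, are illustrative only):

    def rule_of_thumb_score(respondents, tasks_per_respondent, concepts_per_task, max_levels_in_any_attribute):
        # Classic aggregate-analysis heuristic: n * t * a / c should be at least 500.
        return respondents * tasks_per_respondent * concepts_per_task / max_levels_in_any_attribute

    # Illustrative numbers only: 1400 respondents, 12 tasks, 4 concepts per task,
    # and a largest attribute with 10 levels.
    score = rule_of_thumb_score(1400, 12, 4, 10)
    print(score, score >= 500)   # 6720.0 True

Passing that check says little about individual-level (HB) stability, which is the point being made above.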

I wish there were a good formula or rule of thumb for recommending the number of tasks and concepts per task to ask per respondent to lead to stable HB estimation.  This requires a lot of experience, and depends greatly on the number of parameters you intend to estimate in the model (the number of attribute levels in your study, and whether you need to estimate interaction effects).

All that said, many researchers lately are feeling that for standard CBC studies (involving about six attributes, each with about 2 to 5 levels) when showing 3 to 4 concepts per task, asking first choices only, and not requiring interaction effects...about 10 to 12 tasks is all that is needed for stable HB estimation.  But, please be careful with this generalization!  It can easily fall apart if some assumptions are violated.
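To put a rough number on "parameters you intend to estimate": for a main-effects CBC model, the count is typically tallied as (levels - 1) per attribute under effects or dummy coding, plus one constant if a None concept is shown.  A small sketch, where the specific level counts are just one hypothetical instance of the "six attributes with about 2 to 5 levels" case above:

    def count_main_effect_parameters(levels_per_attribute, include_none=True):
        # (levels - 1) estimable utilities per attribute under effects/dummy coding,
        # plus one constant for a None concept if one is shown.
        params = sum(levels - 1 for levels in levels_per_attribute)
        return params + (1 if include_none else 0)

    # One hypothetical instance of "six attributes with about 2 to 5 levels each".
    print(count_main_effect_parameters([2, 3, 3, 4, 5, 5]))   # -> 17

Loosely comparing that count against the number of choices each respondent makes is one informal way to think about whether 10 to 12 tasks is enough, keeping in mind that HB also borrows information across respondents.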
answered Jul 31, 2013 by Bryan Orme Platinum Sawtooth Software, Inc. (136,165 points)
Thanks! I am aware of limitations like respondent motivation and attributes; I just thought there might be a rule like "if X attributes, then Y tasks per respondent are needed". Unfortunately, this seems not to be the case. The CBC study in question used an alternative-specific design: 3 attributes with 8 to 10 levels and four price attributes with 7 levels each, 4 concepts per task plus a None option, discrete choice, main effects only. 12 tasks per respondent were asked; given your answer, I suppose that's enough?

About MBC, any rough guidelines there? The MBC study used 9 binary attributes and asked six tasks per respondent.

Sample size should not be an issue, with roughly 1,400 respondents in each study.