How many choice tasks is too many?

I'd like to increase precision and to increase the frequency at which some levels are shown.

Current design is as follows:
Attribute A: Constructed list, bringing in 7 levels (out of 10 unique levels)
Attribute B: Constructed list of prices, bringing in 4-5 levels per respondent (out of 12 unique levels)
Attribute C: 4 levels

I absolutely require Attribute B's utilities to be measured with good precision to answer my key question.

From your previous experience, what is the highest number of choice tasks you can use without compromising data quality?

Presumably, fatigue depends not only on the number of choice tasks but also on the number of attributes shown in each concept. So far, I've set the design to bring in 50 possible product concepts, which yields 25 choice tasks and achieves suitable level frequencies.
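For context, here is the back-of-the-envelope exposure arithmetic behind that level-frequency claim. This is a minimal sketch; the two-concepts-per-task layout and the 5 levels of B per respondent are assumptions inferred from the numbers above, not confirmed settings:

```python
# Rough per-respondent level-exposure counts for the design described above.
# Assumed layout: 25 tasks x 2 concepts per task = 50 concept slots.
tasks, concepts_per_task = 25, 2
slots = tasks * concepts_per_task            # 50 concepts per respondent

levels_a = 7                                 # levels of A brought in per respondent
levels_b = 5                                 # levels of B (4-5 in practice; 5 assumed)

print(f"A: ~{slots / levels_a:.1f} exposures per level")   # ~7.1
print(f"B: ~{slots / levels_b:.1f} exposures per level")   # ~10.0
```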
asked Feb 13 by yorkmr (450 points)

1 Answer

You can always test the design; that will tell you the strength of the design.

Below are some guidelines to check after testing your design:

- Standard errors within each attribute should be roughly equivalent.
- Standard errors for main effects should be no larger than about 0.05.
- Standard errors for interaction effects should be no larger than about 0.10.
- Standard errors for alternative-specific effects (an advanced type of design) should be no larger than about 0.10.
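If you want a feel for where those standard errors come from, here is a minimal sketch of the underlying computation. This is not Sawtooth's actual Test Design routine: the design matrix is randomly generated as a stand-in, and the sample size, task layout, and parameter count are all assumptions. Under random responding, the standard errors of an aggregate logit fall out of the Fisher information of the design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions -- swap in your exported, effects-coded design instead.
n_resp, n_tasks, n_alts, n_params = 300, 25, 3, 12

# Stand-in design matrix: one coded attribute vector per concept shown.
X = rng.choice([-1.0, 0.0, 1.0], size=(n_resp, n_tasks, n_alts, n_params))

# With random responders (all betas = 0), each alternative is chosen with
# probability p = 1/n_alts, and the aggregate-logit Fisher information is
# the sum over tasks of X' (diag(p) - p p') X.
p = np.full(n_alts, 1.0 / n_alts)
W = np.diag(p) - np.outer(p, p)

info = np.zeros((n_params, n_params))
for resp in range(n_resp):
    for task in range(n_tasks):
        Xt = X[resp, task]                  # (n_alts, n_params)
        info += Xt.T @ W @ Xt

se = np.sqrt(np.diag(np.linalg.inv(info)))
print("largest SE:", se.max())              # compare to the ~0.05 rule of thumb
```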

https://www.sawtoothsoftware.com/help/lighthouse-studio/manual/

In the manual's left-hand navigation, you can find the Test Design section for a step-by-step approach.

Hope this information helps.
answered Feb 14 by Hitesh_Kalwani Bronze (640 points)
Thanks! Yes, I definitely do test the design and check for the above. My question may not have been phrased clearly enough... it's probably more like, "What is the highest number of choice tasks you can use without compromising data quality -- that is, without fatiguing participants too much?"

Certainly, more choice tasks would increase precision, but participants might start to disengage after too many of them. So my question is really about balancing precision against participant fatigue.

I guess another solution is simply to increase the sample size and to pilot the study in depth.
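For what it's worth, a rough scaling sketch (assuming standard errors shrink with the square root of respondents times tasks, and ignoring fatigue entirely; the baseline numbers are made up) suggests that adding respondents buys about the same precision as adding tasks:

```python
import math

def rel_se(n_resp, n_tasks, base=(300, 12)):
    """SE relative to a baseline design, under the 1/sqrt(N * T) approximation."""
    return math.sqrt((base[0] * base[1]) / (n_resp * n_tasks))

print(rel_se(300, 25))   # ~0.69: roughly double the tasks
print(rel_se(600, 12))   # ~0.71: double the sample instead
```

The difference, of course, is that extra respondents add cost but no fatigue, which is the heart of the trade-off.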
This tends to happen when you have a larger number of CBC questions. In one case we had 16 CBC questions; what we ended up doing was placing filler questions in between so that respondents got a break from the CBC tasks and the other questions got answered as well. E.g., a gender/age question after the 6th CBC task, other demographic questions after the 12th, and so on.
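As a hypothetical illustration of that interleaving (the question names are made up; in practice you would arrange this in your survey flow rather than in code):

```python
# Interleave a filler question after every 6th CBC task (hypothetical names).
cbc_tasks = [f"CBC_{i}" for i in range(1, 17)]   # 16 CBC tasks, as in the example
fillers = iter(["GenderAge", "Demographics", "Usage"])

flow = []
for i, task in enumerate(cbc_tasks, start=1):
    flow.append(task)
    if i % 6 == 0:                               # break after tasks 6 and 12
        flow.append(next(fillers, None))

print([q for q in flow if q])
```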
...