
# Number of products in choice exercise vs. number of products in simulation

What is the recommended number of products to allow in a simulation, based on the number of products shown in the choice exercise?

For example, if I show 10 products in the choice exercise, can I simulate 20 even though respondents never see that many products at once? I would guess the simulation becomes less accurate the further you stray from the number of products shown in the exercise.
asked Aug 11, 2017 by anonymous

1 Answer

0 votes
In the 1970s, when card-sort conjoint was the norm, researchers didn't worry about this.  Only one card might be evaluated at a time (and a rating was given), yet the researcher might put multiple products in the market simulator.  (Of course there were problems getting the scale factor right...more on this below.)

In the 1980s, when ACA was the norm, only two products were ever shown at a time in the tradeoff questionnaires, and yet dozens could be specified at a time in the market simulations.  (Again, with problems getting the scale factor right...again, it's coming.)

With the use of CBC and proper MNL modeling (with its appropriate error theory), researchers have become more aware that context matters to choice.  There are many types of context effects, but one of the stronger ones is the effect of increasing the number of concepts per task on response error.  

One can tune the response error in the simulator by manipulating the "exponent" (the scale factor), to try to adjust for the fact that as the number of concepts grows, the response error increases.  (Tuning the exponent makes the resulting shares of preference flatter or more extreme.)  But, unless you've actually fielded CBC questions with different numbers of concepts shown, you don't know exactly the right scale factor to use under different conditions.
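To make the exponent's effect concrete, here is a minimal sketch of logit share-of-preference simulation with a tunable scale factor. The utilities and function name are hypothetical illustrations, not Sawtooth Software's actual implementation; the point is only that a larger exponent sharpens shares and a smaller one flattens them.

```python
import math

def simulate_shares(utilities, exponent=1.0):
    """Logit shares of preference for a set of simulated products.

    'exponent' is the scale factor: larger values assume less response
    error and produce more extreme shares; smaller values flatten them.
    """
    exp_u = [math.exp(exponent * u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Hypothetical total utilities for three simulated products
utils = [1.2, 0.8, 0.3]
print(simulate_shares(utils, exponent=1.0))
print(simulate_shares(utils, exponent=0.5))  # flatter shares
print(simulate_shares(utils, exponent=2.0))  # more extreme shares
```

Note that the shares always sum to 1; only their spread changes as the exponent moves, which is why tuning it requires outside information about how much response error the simulated scenario should reflect.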

As a result, the general recommendation is to try not to stray too far in market simulations from the dimensions of the CBC tasks shown during data collection.
answered Aug 11, 2017 by Bryan Orme Platinum Sawtooth Software, Inc. (172,890 points)
Thank you Bryan! I appreciate the detailed response. To clarify: by testing many more products in the simulator than the number of products we show in the exercise, are we introducing more error into the estimation? Is there a recommended proportion (e.g., simulate no more than 1.5x the number of products shown in the exercise)?
In general, as you add more concepts to the task, response error increases.  But, as you add more concepts to the task, statistical information also increases.  At some point, the gains in statistical precision are cancelled out by the losses due to response error.  Of course, this all depends on how many attributes you are showing per concept.  If showing just brand and price, the losses due to response error are delayed significantly, allowing many more concepts to be shown at a time with corresponding gains in precision.  But, again, this all depends...on the size of the screens respondents are using.

So, there are not very good guidelines that I can offer that I'm confident can apply across many situations.