Best practices for constructing hold-out tasks?

I'm curious about the best practices for constructing hold-out choice tasks in a CBC exercise.

For example, we know it's ideal to use a separate, new sample for the hold-out tests. But what about the ideal profiles to test? I'd think that for client satisfaction, one ought to make at least one hold-out profile match a key client product of interest. But might it also be good to include the modal (i.e., most popular) configuration? Or how about an extreme, high-priced (or otherwise unattractive) profile? Or perhaps it's best to show each hold-out respondent an entirely different, random profile (sort of like what's created in CBC)? And is it best to discard the first one or two hold-out tasks to account for burn-in?

If one wants to measure the accuracy of one's CBC estimates, what are the best practices for testing this?
asked Apr 5, 2012 by BJJ (120 points)

1 Answer

+2 votes
Great questions, and I like your thinking.

I've been doing holdout CBC tasks for years, and probably the biggest mistake we made early on was to create holdout tasks with minimal overlap, meaning that levels never repeated within a choice task. But, in the real world, products often share at least some characteristics. And, holdout tasks with substantial level overlap are often more difficult to predict than minimal-overlap holdout tasks. So, they are a more stringent test of our models.
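
To make that test concrete, here is a minimal sketch (Python/NumPy, with hypothetical array names rather than any Sawtooth export format) of the usual hit-rate check: score every concept in a holdout task with each respondent's estimated part-worths, predict the highest-utility concept, and count how often that prediction matches the choice the respondent actually made.

```python
import numpy as np

# Hypothetical inputs (names are illustrative, not a Sawtooth file format):
#   utils            : list of per-respondent part-worth vectors (one 1-D array each)
#   holdout_designs  : list of holdout tasks, each a (concepts x coded columns)
#                      dummy/effects-coded design matrix
#   holdout_choices  : holdout_choices[r][t] = index of the concept respondent r
#                      actually chose in holdout task t

def hit_rate(utils, holdout_designs, holdout_choices):
    hits, total = 0, 0
    for r, beta in enumerate(utils):
        for t, X in enumerate(holdout_designs):
            predicted = np.argmax(X @ beta)      # first-choice rule
            hits += int(predicted == holdout_choices[r][t])
            total += 1
    return hits / total

# Toy data: 2 respondents, 1 holdout task, 3 concepts, 4 coded columns
utils = [np.array([0.5, -0.2, 1.0, 0.3]), np.array([-0.4, 0.8, 0.1, 0.6])]
holdout_designs = [np.array([[1, 0, 1, 0],
                             [0, 1, 0, 1],
                             [1, 1, 0, 0]])]
holdout_choices = [[0], [1]]
print(f"Holdout hit rate: {hit_rate(utils, holdout_designs, holdout_choices):.2f}")
```

Holdout tasks with heavy level overlap tend to push this hit rate down, which is exactly why they make a tougher benchmark.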

One of the biggest questions to ask is: "Do I really need holdout choice tasks?" From an academic perspective, holdouts are very nice, since they allow us to test our models, compare different versions of the models, and assess respondent consistency. From a practical standpoint, they allow us to test realistic product scenarios and competitive situations, and demonstrate to others that the models actually do a good job predicting a new situation that wasn't involved in the model building.
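
For the aggregate side of that test, a common check is to simulate shares of preference for a holdout task and compare them with the observed choice shares, for example via mean absolute error. The sketch below assumes the same hypothetical data layout as the hit-rate example above (a per-respondent utility vector and a coded design matrix per task); it is an illustration of the idea, not the exact simulation method built into any Sawtooth tool.

```python
import numpy as np

def share_mae(utils, X, choices):
    """Compare logit-simulated shares of preference with actual holdout shares.

    utils   : list of per-respondent part-worth vectors
    X       : (concepts x coded columns) design matrix for one holdout task
    choices : array with the concept index each respondent actually chose
    """
    shares = np.zeros(X.shape[0])
    for beta in utils:
        u = X @ beta
        p = np.exp(u - u.max())                  # logit (share-of-preference) rule
        shares += p / p.sum()
    shares /= len(utils)
    actual = np.bincount(choices, minlength=X.shape[0]) / len(choices)
    return shares, actual, np.abs(shares - actual).mean()

# Toy data: 2 respondents, one holdout task with 3 concepts, 4 coded columns
utils = [np.array([0.5, -0.2, 1.0, 0.3]), np.array([-0.4, 0.8, 0.1, 0.6])]
X = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0]])
choices = np.array([0, 1])
shares, actual, mae = share_mae(utils, X, choices)
print(f"Predicted: {shares.round(2)}  Actual: {actual}  MAE: {mae:.3f}")
```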

At the same time, holdout choice tasks and holdout respondents do impose an additional cost on data collection and analysis. Are those costs worth it for the given situation?
answered Apr 5, 2012 by Bryan Orme Platinum Sawtooth Software, Inc. (146,540 points)
...