Specifying Fixed or Holdout Tasks


CBC lets you specify one or more "fixed" tasks.  Fixed refers to the fact that every respondent is shown the same choice task, with the product concepts defined in exactly the same way.  You must define your own fixed tasks; CBC does not design them for you.  (By default, all fixed tasks are initialized to level "1" for each attribute.)


Most CBC users will opt for a randomized design, since randomized designs are quite efficient, automatic, and permit great flexibility in analysis.  Some CBC users with design expertise may choose to implement a fixed design (consisting of one or more blocks), which is most easily done by importing the design from a .csv file.  A fixed design can be slightly more efficient than a randomized design in measuring the particular effects for which it was designed.


For most CBC users we recommend using randomized tasks for part-worth estimation and specifying one or more fixed holdout tasks that are not used for utility estimation.  We think it is wise to include holdout choice tasks in conjoint interviews, even though they may not appear to be needed for the main purpose of the study.  They almost always turn out to be useful, for these reasons:


- They provide a proximal indication of validity, measured by the utilities' ability to predict choices not used in their estimation.

- They permit identification and removal of inconsistent respondents (if using HB).

- They can be used for testing specific product configurations under consideration.  Much value can be added by direct measurement of these concepts.

- They can be used for testing the accuracy of market simulators.  They aid considerably in comparing alternative models (logit, Latent Class, or HB) and choice simulation strategies.  (Note: if comparing the ability of different models to predict holdout choices, it is important to adjust the scale parameter to maximize the fit of each model prior to making comparisons.)

- If holdout concepts have been defined with differing degrees of product similarity, they can be used for tuning the appropriate correction for product similarity in Randomized First Choice modeling.
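The scale adjustment mentioned above can be sketched in code.  The following is a minimal illustration (not Sawtooth Software code), assuming you already have each respondent's total utility for every concept in a holdout task.  It grid-searches for the scale multiplier that maximizes the log-likelihood of the observed holdout choices under a multinomial logit rule; each model being compared would be tuned this way before its predictions are evaluated:

```python
import numpy as np

# Hypothetical data layout (an assumption, not Lighthouse Studio's export format):
# utils   -- array of shape (respondents, concepts), each respondent's total
#            utility for each concept in a holdout task
# choices -- integer array, the concept index each respondent actually chose

def logit_shares(utils, scale):
    """Multinomial-logit choice probabilities, given a scale multiplier."""
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(scale * (utils - utils.max(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)

def holdout_loglik(utils, choices, scale):
    """Summed log-likelihood of the observed holdout choices."""
    shares = logit_shares(utils, scale)
    return np.log(shares[np.arange(len(choices)), choices]).sum()

def best_scale(utils, choices, grid=np.linspace(0.05, 5.0, 100)):
    """Grid-search the scale factor that maximizes holdout fit."""
    return max(grid, key=lambda s: holdout_loglik(utils, choices, s))
```

With noisy choices the optimal scale is finite; with perfectly consistent choices the likelihood keeps improving as scale grows, which is why tuning on the same holdout data for every model puts them on an equal footing before comparison.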


It's hard to design good holdout concepts without some prior idea of respondent preferences.  There's no point in asking people to choose among concepts where one dominates, in the sense that almost everyone agrees it is best.  And, similarly, it's good to avoid presenting concepts that are equally attractive, since equal shares of preference would be predicted by a completely random simulator.  If you present triples of concepts, it's probably best if their shares of choices are somewhere in the neighborhood of 50/30/20.
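Checking how close a holdout triple comes to the 50/30/20 guideline is just a matter of tabulating choice shares.  A small sketch, using made-up choice data:

```python
from collections import Counter

# One entry per respondent, giving the concept chosen in the holdout triple
# (hypothetical data for illustration).
choices = ["A", "B", "A", "C", "A", "B", "A", "B", "C", "A"]

counts = Counter(choices)
shares = {concept: counts[concept] / len(choices) for concept in sorted(counts)}
print(shares)  # {'A': 0.5, 'B': 0.3, 'C': 0.2}
```

Shares far from this pattern (one concept dominating, or all concepts tied) suggest the holdout set is less informative for model comparison.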


When conducting CBC studies, if you plan to do segmentation with latent class analysis, it's wise to consider the kinds of groups you expect to get and to design products in holdout choice sets so that one alternative will be much more preferred by each group.  


If you plan to use the Randomized First Choice simulation model, it is helpful to include holdout tasks that reflect severe differences in product similarity.  For example, in a holdout choice set featuring four product alternatives, two products might be identically defined on all except one or two attributes.  By including products with differing similarities, the appropriate adjustment for product similarity can be tuned in the Randomized First Choice Model.


It isn't necessary to have many holdout sets to check the general face validity of your utilities, but if you want to make relatively fine comparisons between competing models then you should use at least five holdout tasks and preferably more.  Also, if you want to use holdout choices to identify and eliminate inconsistent respondents, you need several choice sets.


Finally, if you do have several choice sets, it's useful to repeat at least one of them so you can obtain a measure of the reliability of the holdout choices.  Suppose your conjoint utilities are able to predict only 50% of the respondents' holdout choices.  Lacking data about reliability, you might conclude that the conjoint exercise had been a failure.  But if you were to learn that repeat holdout tasks had reliability of only 50%, you might conclude that the conjoint utilities were doing about as well as they possibly could and that the problem lies in the reliability of the holdout judgments themselves.
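The comparison described above, between a simulator's hit rate and the test-retest reliability of a repeated holdout task, can be sketched as follows.  The data layout is a hypothetical one for illustration: one predicted first choice, one observed holdout choice, and one answer to the repeated (identical) task per respondent:

```python
# Hypothetical per-respondent records (concept indices):
predicted = [0, 1, 2, 0, 1, 2, 0, 1]   # simulator's predicted first choice
observed  = [0, 1, 1, 0, 2, 2, 0, 0]   # actual holdout choice
repeat    = [0, 1, 1, 0, 1, 2, 2, 0]   # choice in the repeated holdout task

# Hit rate: share of holdout choices the utilities predict correctly.
hit_rate = sum(p == o for p, o in zip(predicted, observed)) / len(observed)

# Test-retest reliability: share of respondents answering the repeat identically.
reliability = sum(o == r for o, r in zip(observed, repeat)) / len(observed)

print(f"hit rate: {hit_rate:.2f}, reliability: {reliability:.2f}")
```

A hit rate near the reliability figure suggests the utilities are predicting about as well as the holdout judgments themselves allow.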


Analyzing Holdout Concepts


If you have specified fixed holdout choice tasks within the CBC questionnaire, you can analyze the results by exporting the responses under the File | Data Management area.


Some researchers repeat choice tasks to obtain a measure of test-retest reliability.  This type of analysis is often done at the individual level.  If you plan to analyze holdout choice tasks at the individual level, you should export the data for analysis using another software program.

Page link: http://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?hid_web_cbc_designs_5.html