Analysts sometimes add holdout questions to their conjoint surveys to test the way they have specified their models; holdouts can also be used for out-of-sample validation. The author, Keith Chrzan, uses synthetic CBC respondents to test how many holdout tasks are needed to reliably distinguish a true model from a misspecified one. The misspecified models are constructed with known small, medium, and large errors. Under medium and large errors, a handful of holdout tasks (about 5) provides enough data to reliably indicate that the true model is the better one. Under relatively small misspecification error, no number of holdouts tested (up to 15) can reliably point to the correct model. Chrzan's findings suggest that 1 or 2 holdouts are probably too few to compare competing models, while 5 or more will often provide enough evidence to reliably identify which of two competing models fits better.
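The logic of the experiment can be illustrated with a small simulation. The sketch below is not Chrzan's actual design; the sample sizes, task design, logit choice rule, and the way misspecification is injected (perturbing the true part-worths with noise of a chosen size) are all illustrative assumptions. It generates synthetic respondents who answer holdout tasks according to a true utility vector, then compares the holdout hit rates of the true model and a misspecified one:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_hit_rates(n_resp=300, n_holdouts=5, n_alts=3,
                       n_params=8, error_sd=1.0):
    """Compare holdout hit rates of a 'true' vs. a misspecified model.

    All settings are illustrative, not taken from Chrzan's study.
    error_sd controls the size of the misspecification error.
    """
    beta_true = rng.normal(0, 1, n_params)                     # true part-worths
    beta_bad = beta_true + rng.normal(0, error_sd, n_params)   # misspecified model
    hits_true = hits_bad = 0
    for _ in range(n_resp):
        for _ in range(n_holdouts):
            X = rng.normal(0, 1, (n_alts, n_params))           # holdout task design
            # synthetic respondent chooses by a logit rule:
            # Gumbel-perturbed true utility, highest alternative wins
            choice = np.argmax(X @ beta_true + rng.gumbel(size=n_alts))
            hits_true += choice == np.argmax(X @ beta_true)    # true model's prediction
            hits_bad += choice == np.argmax(X @ beta_bad)      # misspecified prediction
    n = n_resp * n_holdouts
    return hits_true / n, hits_bad / n

# a large misspecification error: the true model should win clearly
hr_true, hr_bad = simulate_hit_rates(error_sd=1.0)
print(f"true model hit rate: {hr_true:.3f}, misspecified: {hr_bad:.3f}")
```

Rerunning this with a smaller `error_sd` and fewer holdout tasks shows the pattern the abstract describes: when the two models' predictions differ only slightly, the hit-rate gap shrinks toward sampling noise, and no practical number of holdouts separates them reliably.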