The market simulator (choice simulator) offers five models:
1. First Choice
2. Share of Preference
3. Randomized First Choice
4. Purchase Likelihood
5. Utility
This section provides a brief introduction to the models used in the choice simulator. More detail is provided in the section entitled: "Technical Details for Simulations."
First Choice Model
This option is the simplest and is sometimes referred to as the "Maximum Utility Rule." It assumes the respondent chooses the product with the highest overall utility. The results for this option are invariant over many kinds of rescalings of the utilities. In particular, one could add any constant to all the levels for an attribute and/or multiply all part-worth utilities by any positive constant without affecting the shares for the simulated products.
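As a sketch of the Maximum Utility Rule described above (the respondent utilities below are invented for illustration), the rule and its invariance to shifting or positively scaling utilities take only a few lines:

```python
# Hypothetical total utilities: rows are respondents, columns are products.
utilities = [
    [1.2, 0.8, 0.5],
    [0.3, 1.1, 0.9],
    [0.7, 0.6, 1.5],
    [2.0, 1.9, 0.1],
]

def first_choice_shares(utils):
    """Maximum Utility Rule: each respondent chooses the product with the
    highest total utility; a product's share is the proportion of
    respondents who chose it."""
    counts = [0] * len(utils[0])
    for row in utils:
        counts[row.index(max(row))] += 1
    return [c / len(utils) for c in counts]

print(first_choice_shares(utilities))  # → [0.5, 0.25, 0.25]

# Shares are invariant to adding a constant or multiplying by a positive
# constant, because each respondent's highest-utility product is unchanged.
rescaled = [[2.0 * u + 5.0 for u in row] for row in utilities]
assert first_choice_shares(rescaled) == first_choice_shares(utilities)
```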
The First Choice model requires individual-level utilities, such as those generated by CBC/HB, ACBC/HB, ACA, or CVA. The First Choice model should not be used with Latent Class or Logit runs for CBC.
The First Choice model is very intuitive and simple to implement. Its principal strength is its immunity to IIA difficulties (the red-bus/blue-bus problem). In other words, the First Choice rule does not artificially inflate share for similar (or identical) products. This property is especially important for product line simulations or situations in which some product offerings are quite similar to others in the competitive set.
Its principal weakness is that its share results are generally more extreme than those of the other simulation models, and the steepness of the model cannot be adjusted using the exponent multiplier. We have seen evidence that the First Choice model's predictions can often be more extreme than market shares in the real world, especially for low-involvement purchases.
Another weakness is that it reflects information only about the respondent's first choice; information about the relative preference for the remaining products in the simulation is ignored. As a result, standard errors for the First Choice model are generally higher than for the other models offered in the choice simulator, and sample sizes need to be larger to achieve equal precision of estimates.
We recommend using the First Choice model if you have large sample sizes and have determined, through holdout choice validation or (preferably) validation against actual market choices, that it predicts shares more accurately than the other approaches. The First Choice rule may also be a reasonable option if the respondents in your simulator actually reflect multiple "draws" per respondent from HB (e.g. each respondent is listed 100x in the data set, each replication representing a different "draw" from the distribution characterizing the respondent's preferences).
Share of Preference Model
The Share of Preference model uses the logit rule to estimate shares. Each product's total utility is exponentiated (its antilog is taken) and the results are normalized to sum to 100%. A helpful rule of thumb: the antilog of a logit-scaled utility is proportional to choice likelihood.
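The logit rule, including the exponent multiplier used elsewhere in the simulator to tune steepness, can be sketched as follows (the utilities are invented for illustration; this is an illustrative implementation, not the simulator's own code):

```python
import math

def share_of_preference(utils, exponent=1.0):
    """Logit rule: within each respondent, exponentiate (utility x exponent)
    and normalize so shares sum to 1; then average shares across respondents."""
    n_products = len(utils[0])
    totals = [0.0] * n_products
    for row in utils:
        expu = [math.exp(exponent * u) for u in row]
        denom = sum(expu)
        for j, e in enumerate(expu):
            totals[j] += e / denom
    return [t / len(utils) for t in totals]

utilities = [[1.2, 0.8, 0.5], [0.3, 1.1, 0.9]]
print(share_of_preference(utilities))                # relatively "flat" shares
print(share_of_preference(utilities, exponent=10.0)) # steeper shares
```

A larger exponent steepens the shares; in the limit, each respondent's share concentrates entirely on their highest-utility product, reproducing the First Choice result.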
The Share of Preference model results in "flatter" scaling of share predictions than the First Choice model. In general, we expect this flatter scaling to match more closely what occurs in the real world. The Share of Preference model also captures more information about each respondent's preferences than the First Choice method: not only do we learn which product is preferred, but also the relative desirability of the remaining products. As a result, the standard errors of its share predictions are lower than those of First Choice shares.
The Share of Preference model is subject to IIA and can perform poorly when very similar products are placed in competitive scenarios (e.g. line extension simulations) relative to other less similar items within the same set. If using CBC under aggregate logit simulations, the IIA problem is intensified. Under Latent class, the problem is somewhat reduced. With individual-level utility models (CBC/HB, ACBC/HB, ACA, or CVA), the problem is greatly reduced, but nonetheless can still be an issue.
The Top N option available within the Share of Preference model provides a way to modify the Share of Preference rule (only allocating share among the Top N products within the market simulation, where N is an integer defined by the researcher) to potentially reduce IIA problems and utilize more information than the First Choice simulation rule.
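A sketch of the Top N idea (the function name and details are illustrative, not the simulator's actual implementation): within each respondent, all but the N highest-utility products are zeroed out before normalizing.

```python
import math

def top_n_shares(utils, n_top, exponent=1.0):
    """Share of Preference restricted to each respondent's Top N products.
    With n_top = 1 this reduces to the First Choice rule; with n_top equal
    to the number of products it is the ordinary Share of Preference rule."""
    n_products = len(utils[0])
    totals = [0.0] * n_products
    for row in utils:
        # Indices of this respondent's N highest-utility products.
        top = sorted(range(n_products), key=lambda j: row[j], reverse=True)[:n_top]
        expu = {j: math.exp(exponent * row[j]) for j in top}
        denom = sum(expu.values())
        for j, e in expu.items():
            totals[j] += e / denom
    return [t / len(utils) for t in totals]
```

Because N interpolates between the two rules, this variant uses more information than First Choice while limiting how much share can leak to lower-ranked (and possibly duplicative) products.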
Randomized First Choice
The Randomized First Choice (RFC) method combines many of the desirable elements of the First Choice and Share of Preference models. As the name implies, the method is based on the First Choice rule and can be made to be essentially immune to IIA difficulties. As with the Share of Preference model, the overall scaling (flatness or steepness) of the shares of preference can be tuned with the Exponent.
RFC, suggested by Orme (1998) and later refined by Huber, Orme and Miller (1999), was shown to outperform all other Sawtooth Software simulation models in predicting holdout choice shares for a data set they examined. The holdout choice sets for that study were designed specifically to include product concepts that differed greatly in terms of similarity within each set.
Rather than use the part-worth utilities as point estimates of preference, RFC recognizes that there is some degree of error around these points. The RFC model adds unique random error (variation) to the part-worth utilities and computes shares of choice in the same manner as the First Choice method. Each respondent is sampled many times to stabilize the share estimates. The RFC model results in a correction for product similarity due to correlated sums of errors among products defined on many of the same attributes.
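A minimal sketch of the RFC idea (the data, error scale, and use of normal error as a stand-in are assumptions for illustration; the simulator's actual error distributions and tuning differ): add random error to the attribute-level part-worths on each draw, apply the First Choice rule, and average over many draws. Because products sharing a level receive the same perturbed value on a given draw, their errors are correlated, which is what produces the correction for product similarity.

```python
import random

def rfc_shares(partworths, products, n_draws=2000, error_scale=1.0, seed=42):
    """partworths: per respondent, one list of level utilities per attribute.
    products: each product is a tuple of level indices, one per attribute.
    Returns first-choice shares averaged over randomly perturbed draws."""
    rng = random.Random(seed)
    counts = [0] * len(products)
    for resp in partworths:
        for _ in range(n_draws):
            # Perturb each attribute level once per draw; products sharing
            # a level therefore share that level's error on this draw.
            noisy = [[u + rng.gauss(0.0, error_scale) for u in attr]
                     for attr in resp]
            totals = [sum(noisy[a][lvl] for a, lvl in enumerate(p))
                      for p in products]
            counts[totals.index(max(totals))] += 1
    draws = len(partworths) * n_draws
    return [c / draws for c in counts]

# One respondent, two attributes (2 levels each); products are
# (attribute 1 level, attribute 2 level) tuples.
partworths = [[[0.5, -0.5], [0.3, -0.3]]]
products = [(0, 0), (0, 1), (1, 0)]
print(rfc_shares(partworths, products))
```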
The RFC model is very computationally intensive, but with today's fast computers speed is not much of an issue (unless you are doing intensive searches using the Advanced Simulation Module). It usually takes only a few moments longer than the faster methods to perform a single simulation. According to the evidence gathered so far on this model, we think it is worth the wait. The RFC model is appropriate for all types of conjoint simulations, based on either aggregate- or individual-level utilities.
When using the Randomized First Choice model, we recommend you turn off the correction for similarity (application of correlated error) for any price attributes. This will avoid strange kinks in derived demand curves. There is also a good argument that price is a very different type of attribute that should not require correction for product similarity.
The most complete use of the RFC model requires tuning the appropriate amounts of attribute- and product-level error. By default, only attribute-level error is used in the simulator, and our experience so far with multiple data sets is that this default setting works very well. This setting assumes no share inflation for identical offerings. If you have questions regarding tuning the RFC model, read the section covering the details of RFC or the technical paper entitled "Dealing with Product Similarity in Choice Simulations," available for download from our home page: http://www.sawtoothsoftware.com.
Note: By default, a correction for similarity (correlated attribute error) is applied to all attributes not marked as "price" attributes; but the user can specify that certain additional attributes should not involve a correction for similarity. We recommend you remove the correction for similarity for any Price attribute. You do that from the My Scenario Settings tab under the Simulation Method Settings... icon on the Simulation Settings ribbon group.
Purchase Likelihood Model
The Purchase Likelihood model estimates the stated purchase likelihood for products you specify in the simulator, where each product is considered independently. The projected likelihood of purchase is given on a 0-to-100 scale.
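Assuming appropriately calibrated utilities, this projection is typically a logistic transform of a product's total utility, applied independently per product (a sketch; see the technical details section for the simulator's exact calibration):

```python
import math

def purchase_likelihood(total_utility):
    """Logistic transform of a product's total (calibrated) utility onto a
    0-100 purchase-likelihood scale; each product is evaluated independently,
    so values need not sum to 100 across products."""
    return 100.0 / (1.0 + math.exp(-total_utility))

print(purchase_likelihood(0.0))  # → 50.0
```

Respondent-level values would then be averaged to report the projection for each product.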
If you intend to use the Likelihood of Purchase option in the Market Simulator, your data must be appropriately scaled based on stated purchase likelihood. The following estimation methods result in data appropriate for the purchase likelihood option:
1. ACA, if calibration concepts (where respondents indicate purchase intent on a 0-100 scale) have been asked and used in utility estimation.
2. CVA, if single-concept presentation was used (where respondents indicate purchase intent on a rating scale), and the logit rescaling option was used with OLS regression.
3. CBC/HB, if calibration concepts have been asked and the Tools + Calibrate Utilities program (from the CBC/HB standalone program) is used to rescale the utilities.
Any other procedure will result in simulations that are not an accurate prediction of stated purchase likelihood. Keep in mind that the results from the Purchase Likelihood model are only as accurate as respondents' ability to predict their own purchase likelihoods for conjoint profiles. Experience has shown that respondents tend to exaggerate their own purchase likelihood.
You may use the Purchase Likelihood model even if you didn't scale the data using calibration concepts, but the results must then be interpreted only as a relative desirability index. That is, a value of "80" is higher (more desirable) than a value of "60," but it doesn't mean that respondents on average would have provided an 80% self-reported likelihood of purchase for that particular product.
The purchase likelihoods that the model produces are not to be interpreted literally: They are meant to serve as a relative gauge or "barometer" of purchase intent. Under the appropriate conditions and discount adjustments based on past experience (calibration), stated intentions often translate into reasonable estimates of market acceptance for new products.
Utility Model
This is really not a market simulation method, but a way to display the total utility for each of the products in your market scenario, where the total utility is equal to the sum of the part-worth utilities.