Effect of Multiple Tasks on Menu-Based Choice Results

This article is an excerpt from a new white paper on this topic, by the same title, available for downloading from our Technical Papers Library at www.sawtoothsoftware.com/education/techpap.shtml.

Menu-Based Choice (MBC) studies ask respondents to select anywhere from zero to multiple options from a menu, such as items at a restaurant or buying options on a new vehicle. A simple menu-based choice task is shown in Figure 1.

As respondents click (“buy”) features on the menu, the total price shown at the bottom increases. When respondents finish configuring what they would purchase, they click the Next button and move to the next question.

Among other things, researchers can use MBC exercises to gauge price sensitivity for each feature on the menu. The prices for the menu items can be varied from respondent to respondent and from task to task. For example, the price for Alloy Wheels could take on four possible values ($1500, $1750, $2000, $2500), much like levels in a conjoint analysis experiment. Unique price ranges (alternative-specific prices) can be specified for each item in the study. If the prices for features on the menu are manipulated in an uncorrelated fashion (such as using a randomized, near-orthogonal design), the researcher can estimate the price sensitivity for each feature independently of the others.
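To make this concrete, here is a minimal sketch (in Python) of one way to generate such a randomized price design. Only the Alloy Wheels levels come from the example above; the other items and price levels are hypothetical, and independent random draws like these only approximate the level balance of a true near-orthogonal design.

```python
import random

# Alternative-specific price levels for each menu item. The Alloy Wheels
# levels match the article; the other items and levels are invented.
PRICE_LEVELS = {
    "Alloy Wheels":    [1500, 1750, 2000, 2500],
    "Security System": [300, 400, 500, 600],
    "Sunroof":         [800, 950, 1100, 1300],
}

def random_menu_task(rng):
    """One menu task: draw each item's price independently, so that across
    the sample the prices are (approximately) uncorrelated."""
    return {item: rng.choice(levels) for item, levels in PRICE_LEVELS.items()}

def build_design(n_respondents, n_tasks, seed=1):
    """Generate n_tasks randomized menus for each respondent."""
    rng = random.Random(seed)
    return [
        {"respondent": r, "task": t, "prices": random_menu_task(rng)}
        for r in range(1, n_respondents + 1)
        for t in range(1, n_tasks + 1)
    ]

design = build_design(n_respondents=800, n_tasks=8)
print(design[0])  # prices shown to respondent 1 in task 1
```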

Effect of Task Order

Menu-Based Choice experiments have been described in the literature and at our conferences for almost 10 years now. A question that we have yet to see answered in these articles is whether a respondent’s answers to the first menu task are very different from the answers to later tasks. Each respondent could be asked to complete just one menu. But, researchers typically ask respondents to complete multiple menus (where each new menu reflects changes in prices, or other aspects). Do estimated price sensitivities or preferences for certain items on the menu shift from early to later tasks due to learning effects?

One of the handy aspects of randomized experiments is that (across respondents) we can examine the preferences (or Counts scores) for levels, aggregating choices for one task at a time. We can compare preferences from the first task to those from the second task, etc.
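As an illustration, a counting routine along these lines might look like the sketch below, which assumes choice data stored as flat records with respondent, task, item, and chosen fields (these names are hypothetical; actual MBC data layouts will differ):

```python
from collections import defaultdict

def counts_by_task(choices):
    """Counts analysis by task position: the percent of times each item was
    chosen, tallied separately for each task number."""
    chosen = defaultdict(int)  # (task, item) -> times chosen
    shown = defaultdict(int)   # (task, item) -> times shown
    for rec in choices:
        key = (rec["task"], rec["item"])
        shown[key] += 1
        chosen[key] += rec["chosen"]  # 1 if selected, 0 otherwise
    return {key: 100.0 * chosen[key] / shown[key] for key in shown}

# Example: compare preferences for an item in the first vs. the last task.
# scores = counts_by_task(choice_records)
# print(scores[(1, "Security System")], scores[(8, "Security System")])
```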

The notion that later tasks are different from earlier choice tasks has fueled some debate in the industry. Are the earlier tasks (perhaps just the first one) the most trusted? Or, are later ones more valid? We think the answer probably depends on the product category and buying situation. Many product categories involve a search process (such as via the Internet) wherein the buyer becomes aware of different prices offered by different brands, channels, and suppliers. Essentially, multiple choice scenarios are evaluated, and through iterations of search the buyer becomes more informed and knowledgeable about different prices, terms, and availability. Also, many product categories are purchased repeatedly on a periodic basis, where prices, brand availability, and product specifications may change over time. It would seem that repeated choice tasks might faithfully reflect these situations and be compatible with the idea of realistic learning behavior for many real-world purchases. Also, since MBC experiments typically require larger sample sizes than other conjoint methods to obtain adequate precision, asking multiple tasks can save a great deal of money in fielding costs.

Results of Two Menu-Based Choice Experiments

We recently conducted two MBC experiments as Internet surveys using our SSI Web package, with Western Wats (Opinion Outpost) panelists. The first, fielded in 2006, involved 681 respondents and showed a fast-food menu. Each respondent completed eight menu-based tasks, involving selections of value meals vs. a la carte options; burgers, salads, fries, drinks, healthy sides, and desserts were on the menu. The second study, fielded in March 2010, involved 806 respondents making choices of options for new cars (Figure 1).

When respondents are asked to complete multiple menu-based choice tasks in succession, we found that the first task takes about 50% longer than the second. By about the third task, respondents are answering the menu-based tasks at double to triple the speed of the first task.

We also examined the percent of times different car options were chosen in each task (Figure 2).

There seems to be a slight upward trend for adding a Security System to the vehicle, and a very slight downward trend for the other options. But the shifts in preference are not very large, suggesting a great degree of stability in aggregate preferences for the options across tasks.

A similar analysis for the fast-food menu study (where a None option was offered) showed that None increased from about 5% in the first task to about 15% by the eighth. As None increased, the other items on the menu tended to trend lower; relative to one another, however, their shares remained fairly constant.

Regarding the increase in None in subsequent tasks, we think that when respondents learn from earlier tasks that items can appear at lower prices, they become more selective in later menus and are more likely to reject the entire menu (if the prices shown for the desired items are viewed as too high).

We also examined price sensitivity, again using counting analysis. Each item on the menu was shown at four possible prices (similar to price levels in conjoint analysis). We can count the percent of times each item was chosen when shown at each of its prices, and from those pseudo demand curves compute simple measures of average price sensitivity (elasticity).
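For illustration, the sketch below derives one such measure from a pseudo demand curve: the least-squares slope of ln(share) on ln(price). The shares are invented for the example, and this log-log slope is just one common way to summarize average elasticity, not necessarily the exact measure used in the studies reported here.

```python
import math

# Hypothetical pseudo demand curve: percent of times Alloy Wheels was chosen
# when shown at each of its four prices (shares invented for illustration).
demand = {1500: 0.30, 1750: 0.26, 2000: 0.22, 2500: 0.16}

def log_log_elasticity(demand_curve):
    """Least-squares slope of ln(share) on ln(price), a simple summary of
    average price elasticity along the pseudo demand curve."""
    xs = [math.log(p) for p in demand_curve]
    ys = [math.log(s) for s in demand_curve.values()]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    cov = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    var = sum((x - x_bar) ** 2 for x in xs)
    return cov / var

print(f"Estimated average elasticity: {log_log_elasticity(demand):.2f}")
# about -1.2: a 1% price increase cuts choice likelihood by roughly 1.2%
```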

For both the car option study and the fast-food menu study, we found that our measure of price sensitivity increased dramatically from the first task through the fourth task, and then essentially stabilized through the eighth task.

The only variables we changed in these two menus were the prices of the items, so we cannot distinguish whether the increase in price sensitivity is due to less response error (and a larger scale factor) or a true increase in price sensitivity from earlier to later tasks due to learning effects. Even so, the implications for Menu-Based Choice studies are clear: if you use just one choice task (with no warm-up exercises), the slopes of the estimated price curves will be much flatter than those estimated from later tasks.

After respondents completed the menu-based choice questionnaire, we asked about their experience with it. We found respondents generally enjoyed the experience and tended not to find it terribly monotonous or boring. Perhaps the stated warm feelings were more a reaction to the subject matter (cars) than to the questionnaire itself, so those results should be interpreted with caution.

Final Thoughts

Much can be learned regarding respondent preferences within MBC using the simple method of Counts analysis. If more sophisticated analysis is required, models may be estimated leading to choice simulators. Approaches for doing this are described in the literature, and are referenced in our more complete white paper by this same title, available in our Technical Papers Library on the web.

We plan to continue our investigation of methods for analyzing MBC, as menu-based experiments seem to offer some unique benefits for many kinds of market research problems, but can be more complex to analyze than traditional CBC.