The following article is adapted from “Comment on Wirth and Marshall et al.” by Sawtooth Software’s president, Bryan Orme, to be published in the forthcoming Sawtooth Software Conference 2010 Proceedings.
At the 2010 Sawtooth Software Conference, two speakers (Ralph Wirth and Don Marshall) tested the bold assertions that Jordan Louviere made at the 2009 Sawtooth Software Conference. Louviere argued that traditional CBC models (where respondents pick the best alternative from sets) modeled with HB estimation were inferior and biased, citing evidence from split-sample studies he had conducted. He proposed a new approach (Bottom Up) that collected more than just first choices for each CBC task and used purely individual-level estimation instead of HB. Jordan put us on notice in 2009, announcing that our common Top Down methods (i.e., HB) were “soooo WRONG,” and that Bottom Up methods were like an asteroid strike that would lead to species extinction.
Rather than ask for just the first choice in each CBC set, Jordan’s Bottom Up approach asks the respondent to indicate both the best and the worst concept in the set. Then, respondents are asked whether all, none, or only some of the concepts are acceptable.
Credit Where Credit Is Due
Jordan should be given credit for his many contributions to the field, especially his influential 1983 paper that demonstrated to the marketing community the benefits and mechanics of discrete choice experiments. Jordan’s MaxDiff scaling was also a very useful invention. For these contributions and others, Louviere was awarded the 2010 Parlin Award.
Jordan has correctly argued that individual respondents’ utilities shouldn’t be directly compared without somehow accounting for scale differences. Sawtooth Software’s founder, Rich Johnson, recognized this issue as early as the 1970s. Since the 1980s, Sawtooth Software’s market simulators have summarized respondent utilities for reporting purposes after applying a normalization procedure. For each respondent, a normalizing constant is selected such that the sum of utility ranges across attributes is the same for every respondent. In our most recent simulators, this normalization procedure is called zero-centered diffs.
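The normalization described above can be sketched in a few lines of Python. This is a minimal illustration, not Sawtooth Software’s actual implementation: the function name and the convention of scaling to 100 points per attribute are assumptions for demonstration.

```python
def zero_centered_diffs(utilities, points_per_attr=100.0):
    """Rescale one respondent's part-worth utilities.

    utilities: dict mapping attribute name -> list of level utilities.
    Returns utilities that are zero-centered within each attribute and
    scaled so the attribute ranges sum to len(utilities) * points_per_attr,
    making respondents comparable despite scale differences.
    """
    # Zero-center each attribute's level utilities.
    centered = {a: [u - sum(us) / len(us) for u in us]
                for a, us in utilities.items()}
    # One normalizing constant per respondent equalizes the total range.
    total_range = sum(max(us) - min(us) for us in centered.values())
    k = len(centered) * points_per_attr / total_range
    return {a: [u * k for u in us] for a, us in centered.items()}
```

For example, a respondent with a large response scale and one with a timid scale both end up with attribute ranges summing to the same total, so their diffs can be tabulated and clustered side by side.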
Sawtooth Software advocates using zero-centered diffs in tabulations when comparing groups and also in subsequent cluster analyses to find groups of similar respondents. But, raw utilities are used in the market simulator to project respondent choices.
Was Jordan Right in His 2009 Presentation?
Why was an entire session of the 2010 Sawtooth Software Conference dedicated to the subject of Bottom-Up vs. Top-Down methods? In Jordan’s 2009 presentation, he said the following:
- The world that you knew has changed & will never again be the same.
- Current choice models are WRONG!
- They are soooo WRONG, it’s hard to know why so many folks keep working on them.
- All published empirical results are WRONG & should be in the rubbish bin of failed science.
- Stop using these models NOW!
Jordan described his Bottom-Up approach as a game-changing asteroid event, akin to the massive global strike 60 million years ago that is argued to have led to the extinction of the dinosaurs. These were indeed bold assertions that, if correct, would have meant that those using traditional CBC and HB estimation were harming their clients and risked extinction.
Thanks to the Herculean efforts of Ralph Wirth, Joe Curry, Don Marshall, Siu-Shing Chan, Rich Johnson, Jordan Louviere, Bart Frischknecht, and John Rose, we now have substantial evidence that Jordan was not right.
Ralph Wirth conducted an extensive study using synthetic CBC data. His findings suggest that even if you use Jordan’s questionnaire approach for CBC, there is no advantage to purely individual-level estimation over HB. Even when Wirth varied the error variance across respondents, HB ran into no trouble: its recovery of the known utility parameters was solid and unbiased. The claim that HB estimation is biased and misleading appears unfounded.
Marshall et al.’s two studies (the pizza and camera studies) suggest that for real respondents, Bottom-Up doesn’t do any better than traditional Top-Down CBC (for the camera data set, it was generally worse). But, Bottom-Up…
- Requires more data
- Demands much greater respondent effort
- Produces higher respondent dropout
- Leaves more respondents dissatisfied with the survey
- Has no commercial or open-source software available
Jordan’s Bottom-Up approach has two major differences from Sawtooth Software’s standard CBC + CBC/HB approach. First, he collects more information from each choice task (best and worst concepts, plus a more complex None choice). Second, he analyzes the data using purely individual-level estimation rather than HB.
Jordan’s main assertion from 2009 was that HB estimation is biased and misleading. To test this claim, I used CBC/HB software to re-analyze the Bottom-Up respondent data for the camera questionnaire. To do so, I coded each choice task as a series of paired comparisons between concepts (“exploded rankings”). The HB run took just 50 minutes for all 600 respondents, even though the rank-order explosion resulted in over 100 choice tasks per respondent. The HB utilities outperformed the purely individual-level estimation. This clearly demonstrates that (holding the data constant) HB provides better results than the purely individual-level estimation that Louviere implemented in this round of research.
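The rank-order explosion described above can be sketched as follows. This is a hypothetical helper illustrating one common coding of a best-worst task into paired comparisons (the best concept beats every other concept, and every non-best concept beats the worst); the exact coding used in the re-analysis is not detailed in the text.

```python
def explode_best_worst(concepts, best, worst):
    """Explode one best-worst choice task into implied paired comparisons.

    concepts: list of concept identifiers shown in the task.
    Returns a list of (winner, loser) pairs: the best concept beats each
    other concept, and each remaining concept beats the worst concept.
    A task with n concepts yields 2n - 3 pairs, which is how a handful
    of tasks per respondent can explode into 100+ coded choice tasks.
    """
    pairs = [(best, c) for c in concepts if c != best]
    pairs += [(c, worst) for c in concepts if c not in (best, worst)]
    return pairs
```

Each resulting pair can then be coded as a two-concept choice task for HB estimation, holding the underlying respondent data constant while swapping only the estimation method.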
Is There Value in Best-Worst CBC?
For a few years now, some researchers have advocated asking respondents to identify both the Best and Worst concepts within each choice task (B-W CBC). Three papers at this conference (Chrzan et al., Wirth, and Marshall et al.) have presented evidence that asking respondents to identify the worst concept in addition to the best concept can actually improve predictions of best-only holdout choices. Up until this conference, I had been skeptical of the value of asking for worst choices within CBC tasks.
Given the evidence presented at this conference, we plan to provide an option for asking B-W choices in the next version of our CBC software, so researchers can experiment with this option. Perhaps we’ll see more research on this subject in a future Sawtooth Software conference.
At first glance, it doesn’t seem logical that adding information regarding worst concepts should help predict what respondents prefer best in holdout sets. But, producing a winning concept involves maximizing good aspects and minimizing bad aspects. Thus, considering both kinds of information may be useful in modeling the likelihood of consumer choice. As long as worst information comes at little cost and is proven to have little or no bias, it would appear to be a good idea…which gives us another reason to thank Jordan for his contributions. Jordan may not always be right, but he does make you think. And, that process can lead to important discovery.