The objectives of the Adaptive CBC interview are as follows:
• Provide a stimulating experience that will encourage more engagement in the interview than conventional CBC questionnaires.
• Mimic actual shopping behavior, which may involve non-compensatory as well as compensatory behavior.
• Screen a wide variety of product concepts, but focus on a subset of most interest to the respondent.
• Provide more information with which to estimate individual partworths than is obtainable from conventional CBC analysis.
Typically, an ACBC interview includes the following three core sections:
However, ACBC offers advanced researchers the flexibility to skip sections when they decide (for certain research situations) that some sections do not apply well to the buyer's (chooser's) decision-making process.
BYO (Configurator) Section:
In the first section of the interview, the respondent answers a "Build Your Own" (BYO) question that introduces the attributes and levels and asks the respondent to indicate a preferred level for each attribute, taking into account any corresponding feature-dependent prices. A typical screen for this section of the interview is shown below:
An alternate display incorporates combo boxes rather than radio buttons:
Past research has shown that respondents enjoy BYO questions and that the resulting choices have lower response error than for CBC questions.
Based on answers to the BYO questionnaire, we create a pool of product concepts that includes every attribute level, but in which the attribute levels are relatively concentrated around the respondent's preferred attribute levels.
Advanced: Researchers sometimes drop the BYO section (by indicating that no attributes should be included in the BYO section). This affects the product concepts shown later in the interview: if no BYO section is shown, we sample equally across all levels within the experimental design, much like a standard CBC experiment (rather than oversampling the BYO-selected levels).
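As an illustration of how such a "near-neighbor" pool might be constructed, the sketch below builds concepts that match the respondent's BYO choices on most attributes while deliberately swapping a few attributes to other levels. The attribute names, counts, and sampling scheme are hypothetical assumptions for illustration, not Sawtooth's actual design algorithm (which additionally guarantees that every attribute level appears in the pool).

```python
import random

# Illustrative attribute/level grid (hypothetical, not from a real study)
ATTRIBUTE_LEVELS = {
    "brand":   ["A", "B", "C"],
    "size":    ["small", "medium", "large"],
    "color":   ["red", "blue", "black"],
    "battery": ["10h", "20h", "30h"],
}

def generate_pool(byo_choice, n_concepts=24, n_swaps=2, rng=None):
    """Build a pool of concepts concentrated around the BYO selection:
    each concept keeps the BYO level on most attributes, but swaps
    `n_swaps` randomly chosen attributes to a non-BYO level."""
    rng = rng or random.Random(0)
    pool = []
    for _ in range(n_concepts):
        concept = dict(byo_choice)
        for attr in rng.sample(list(ATTRIBUTE_LEVELS), n_swaps):
            others = [lv for lv in ATTRIBUTE_LEVELS[attr] if lv != byo_choice[attr]]
            concept[attr] = rng.choice(others)
        pool.append(concept)
    return pool

byo = {"brand": "B", "size": "medium", "color": "black", "battery": "20h"}
pool = generate_pool(byo)
```

Because each generated concept differs from the BYO product on only a couple of attributes, the pool stays "relatively concentrated" around the respondent's stated preferences, as described above.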
The BYO-selected product concept can be shown in later sections of the Lighthouse survey using Sawtooth Script Functions.
Some attributes with obvious a priori preference order may seem out of place within a BYO question. In some situations it may not make sense to assign price premiums to the preferred levels, and if there is no price penalty, asking respondents to indicate their preferred level yields an "obvious" answer. In those cases, you can drop such attributes from the BYO question (these attributes still appear within the remaining sections of the ACBC survey).
For attributes without obvious a priori preference order, it would generally make sense to include them within the BYO question to learn respondents' preferences. Then, ACBC can focus on product concepts similar to those preferences. Even so, researchers may choose to exclude such attributes from the BYO section, if they desire.
Screening Section:

In the second section of the interview, the respondent answers "screening" questions, where product concepts are shown a few at a time (we recommend showing 3 to 5 concepts per screen, for about 7 to 9 total screens of concepts). In the Screening Section, respondents are not asked to make final choices, but rather to build a consideration set of product concepts by indicating whether each one is "a possibility" or "not a possibility." A typical screen from this section of the interview is shown below:
The Screening Section is also used to estimate the "None" parameter threshold.
Advanced: Researchers may choose to drop the Screening Section from their project design; in that case, all generated product concepts are carried forward into the Choice Tasks Section, as if the respondent had indicated that each one was "a possibility." If you skip the Screening Section, a "None" parameter threshold is not available unless you include the final Calibration Section and also perform the additional step of calibrating the estimated utilities.
After a few screens of concepts have been evaluated, we scan previous answers to see if there is any evidence that the respondent is using non-compensatory screening rules (meaning, that there are cut-off rules that are absolutes and cannot be compensated for by the presence of enough other good features). For example, we might notice that he/she has avoided some levels of an attribute, in which case we ask whether any of the consistently avoided levels is an "Unacceptable". Here is a typical screen for this question:
Past research with ACA has suggested that respondents are perhaps too quick to mark as unacceptable levels that are probably just very undesirable. We considered that the same tendency might apply here. To reduce this possibility, we only offer cutoff rules consistent with the respondent's previous choices, and we allow the respondent to select only one cutoff rule per prompt.
After each screen of typically three to five products has been screened (as a "possibility" or not), another "unacceptable" screen is shown and the respondent has another opportunity to add a subsequent cutoff rule. If the respondent identifies any "unacceptable" levels, then all further concepts shown will avoid those levels.
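The scan for consistently avoided levels might be sketched as follows. The data shapes and the exposure threshold here are illustrative assumptions, not Sawtooth's actual rules: a level is offered as a candidate "unacceptable" only if the respondent has seen it several times and has never marked a concept containing it as "a possibility."

```python
def candidate_unacceptables(answers, min_exposures=3):
    """Scan previous Screener answers for possible non-compensatory cutoffs.

    `answers` is a list of (concept, is_possibility) pairs, where each
    concept maps attribute -> level. Returns (attribute, level) pairs the
    respondent has seen at least `min_exposures` times but has never
    marked "a possibility". (Threshold is an illustrative assumption.)
    """
    seen, accepted = {}, set()
    for concept, is_possibility in answers:
        for attr, level in concept.items():
            seen[(attr, level)] = seen.get((attr, level), 0) + 1
            if is_possibility:
                accepted.add((attr, level))
    return [key for key, n in seen.items()
            if n >= min_exposures and key not in accepted]
```

Only levels flagged by such a scan would be offered on the "unacceptable" prompt, which is what keeps the offered cutoff rules consistent with the respondent's previous choices.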
Unacceptable questions provide a less aggressive way for respondents to indicate non-compensatory cutoff rules than "Must-Haves" (described below), and for that reason we recommend giving them precedence in the ACBC questionnaire. A recent split-sample research project we conducted suggested that the Unacceptable question be asked first. We suggest waiting to ask Must-Have questions until at least two Unacceptable questions have been asked. By default, we do not ask respondents whether a certain price level is unacceptable (for Summed Pricing), but you can change that if you'd like (on the Pricing tab). If you include price in Unacceptable questions, ACBC scans the previous answers to determine the highest price ever selected as "a possibility" in the Screener or BYO questions; that highest price is offered as an unacceptable threshold.
The fact that a level has been marked unacceptable can be used in skip patterns (such as to ask a follow-up question regarding why a level was marked unacceptable) using Sawtooth Script Functions.
If the only products that a respondent has marked "a possibility" contain certain attribute levels (or ranges of levels for ordered attributes), we ask whether that level is a Must-Have. For example:
After each page of typically three to five products has been screened (as a "possibility" or not), another "must have" screen is shown and the respondent has another opportunity to add a subsequent cutoff rule. If the respondent identifies any "must have" levels, then all further concepts shown will satisfy those requirements.
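The must-have trigger described above (a level shared by every concept the respondent has marked "a possibility") might be sketched like this; the minimum-acceptance threshold and data shapes are illustrative assumptions, not Sawtooth's actual rules.

```python
def candidate_must_haves(answers, min_accepted=2):
    """Return (attribute, level) pairs present in *every* concept the
    respondent has marked "a possibility".

    `answers` is a list of (concept, is_possibility) pairs, where each
    concept maps attribute -> level. Requires at least `min_accepted`
    accepted concepts before offering any candidate (an illustrative
    safeguard against triggering on a single choice).
    """
    accepted = [c for c, is_possibility in answers if is_possibility]
    if len(accepted) < min_accepted:
        return []
    first = accepted[0]
    return [(attr, level) for attr, level in first.items()
            if all(c[attr] == level for c in accepted[1:])]
```

As with unacceptables, only levels flagged by such a scan would be offered on the "must have" prompt, so the suggested rules remain consistent with the respondent's prior answers.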
The fact that a level has been marked a must-have can be used in skip patterns (such as to ask a follow-up question regarding why a level was marked a must-have) using Sawtooth Script Functions.
Choice Tasks Section:
Once the respondent has completed the planned number of screens of Screening questions (typically 7 to 9 screens, where each screen includes 3 to 5 concepts), we transition to the Choice Tasks Section (tournament). The respondent is shown a series of choice tasks presenting the product concepts in the consideration set (those marked as "possibilities") typically in groups of three, as in the screen below:
At this point, respondents are evaluating concepts that are somewhat close to their BYO-specified product, that they consider "possibilities," and that strictly conform to any cutoff (unacceptable/must-have) rules. To facilitate information processing, we gray out any attributes that are tied across the concepts, leaving respondents to focus on the remaining differences. The tied attributes are typically the most important factors (given the cutoff rules already established), so the respondent is encouraged to further discriminate among the products on the features of secondary importance.
The winning concepts from each triple then compete in subsequent rounds of the tournament until the preferred concept is identified. If displaying concepts in triples, it takes t/2 choice tasks (rounded down when t is odd) to identify the overall winner, where t is the number of concepts marked as "possibilities" in the previous section; each task eliminates two of the three concepts shown.
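The task-count arithmetic can be written as a short helper. This generalizes the floor(t/2) rule for triples: a task showing k concepts eliminates k − 1 of them, and t − 1 eliminations are needed to leave a single winner. (The helper itself is an illustration, not part of the ACBC software.)

```python
import math

def tournament_tasks(t, concepts_per_task=3):
    """Number of choice tasks needed to identify one winner from t
    concepts, when each task shows `concepts_per_task` concepts and the
    respondent's choice eliminates the rest. With triples this equals
    floor(t/2), matching the rule stated in the text."""
    if t <= 1:
        return 0
    return math.ceil((t - 1) / (concepts_per_task - 1))
```

For example, 7 concepts in the consideration set require 3 triples, and 8 concepts require 4.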
Although it may seem to some that the goal of the tournament section is to identify an overall winning concept, the actual goal is to engage respondents in a CBC-looking exercise that leads to good tradeoff data for estimating part-worth utilities. The winning product concept can be shown in later sections of the Lighthouse survey using Sawtooth Script Functions.
Advanced: Researchers may drop the Choice Tasks section by indicating that no concepts should be carried forward into the Choice Tasks.
Calibration Section (Optional):
The fourth section of the interview is optional and may be used to estimate a different "None" parameter from that provided by the Screening Section.
The respondent is re-shown the concept identified in the BYO section, the concept winning the Choice Tasks tournament, and (typically) four others chosen from among both previously accepted and rejected concepts. For each of those concepts we ask how likely he/she would be to buy it if it were available in the market, using a standard five-point Likert scale, with a screen similar to the one below:
This section of the interview is used only for estimation of a partworth threshold for "None." Partworths from other sections of the interview are used to estimate the respondent's utility for each concept, and then a regression equation is used to estimate the utility corresponding to a scale position chosen by the researcher, such as "Probably Would." Within the market simulator, a product concept is chosen if its utility exceeds the None utility threshold.
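One way to picture this calibration step is a simple least-squares fit of the calibration ratings on the estimated concept utilities, inverted at the researcher-chosen scale position. This is an illustration of the idea only, not Sawtooth's exact estimator.

```python
def none_threshold(utilities, ratings, target_rating=4.0):
    """Fit rating = a + b * utility by ordinary least squares over the
    calibration concepts, then invert the line to find the total utility
    at which the predicted rating reaches `target_rating` (e.g., 4 =
    "Probably Would" on a 5-point scale). That utility serves as the
    "None" threshold in the simulator. (Illustrative sketch only.)"""
    n = len(utilities)
    mean_u = sum(utilities) / n
    mean_r = sum(ratings) / n
    cov = sum((u - mean_u) * (r - mean_r) for u, r in zip(utilities, ratings))
    var = sum((u - mean_u) ** 2 for u in utilities)
    b = cov / var
    a = mean_r - b * mean_u
    return (target_rating - a) / b  # utility where predicted rating = target
```

In a simulator, any concept whose estimated total utility exceeds this threshold would be counted as chosen over "None."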
The order of Calibration Concepts is as follows:
• "Not a possibility" concept
• A "winner" from the tournament
• A "loser" from the tournament
• (Repeat pattern of last three, if needed)
• Winning concept from the tournament
Based on our experience, we recommend the following flow for an Adaptive CBC questionnaire:
| Page # | Description |
|--------|-------------|
| 2. | BYO (or Most Likelies) |
| 4. | Screener #1 (showing about 4 concepts) |
| 5. | Screener #2 (showing about 4 concepts) |
| 6. | Screener #3 (showing about 4 concepts) |
| 8. | Screener #4 (showing about 4 concepts) |
| 10. | Must Have #1 |
| 11. | Screener #5 (showing about 4 concepts) |
| 13. | Must Have #2 |
| 14. | Screener #6 (showing about 4 concepts) |
| 16. | Must Have #3 |
| 17. | Screener #7 (showing about 4 concepts) |
| 19. | Must Have #4 |
| 20. | Screener #8 (showing about 4 concepts) |
| 22. | Choice Tasks Set #1 |
| 23. | Choice Tasks Set #2 |
| 24. | Choice Tasks Set #3, etc. until winning product is identified |
| 26. | Calibration Concept #1 (optional) |
| 27. | Calibration Concept #2 (optional) |
| 28. | Calibration Concept #3 (optional) |
| 29. | Calibration Concept #4 (optional) |
| 30. | Calibration Concept #5 (optional) |
| 31. | Calibration Concept #6 (optional) |
While the schematic above gives a general recommendation for an appropriate ACBC interview, the recommended number of questions in each section depends on the number of attributes in the study and how accurate the results need to be at the individual level. For more detailed recommendations, please see the next section, entitled Design Tab (ACBC).
In our experience, ACBC questionnaires take about 7 to 15 minutes (median survey length) when respondents are asked to trade off 9 to 10 attributes each having from 2 to 6 levels. While this is longer than traditional CBC questionnaires, respondents find the process more engaging and we capture more information to model each respondent accurately.
The interview as a whole attempts to mimic the actual in-store buying experience that might be provided by an exceptionally patient and interested salesperson. For example, after the BYO section we might explain that this exact product is not available but many similar ones are, which we will bring out in groups to see whether each is worthy of further interest. The Choice Tasks section is presented to the respondent as an attempt to isolate the specific product which will best meet the respondent's requirements (though in reality the main purpose for the researcher is to further refine the utility estimates across all non-rejected attribute levels).
If the respondent has answered conscientiously, he/she will typically find that the final product identified by ACBC as best is more preferred than the original BYO product. This occurs because the overall prices of the products in the generated pool vary around the fixed BYO prices (assuming "Summed Pricing" has been used). Therefore, typically at least one of the product concepts will have better features than the BYO product at the same price, the same features at a lower price, or some combination of these benefits. The ACBC interview thus appears to have done a good job of finding a product that exceeds the quality of the BYO product and fits the respondent's needs.