
Anchored Bandit MaxDiff

Have you ever tried anchoring (Dual Response or Direct Binary Response) with:
- Bandit MaxDiff, when dealing with a large number of items and respondents evaluating only a subset of them?
- Boosted Bandit MaxDiff, when all items are shown to every respondent?
Any comments are more than welcome :).
asked Oct 31, 2018 by RafalNeska (450 points)

1 Answer

0 votes
Best answer
Rafal, good to hear from you!  I've tested both of those cases internally, not with real respondents but with a few test records.  I've tested it enough and thought through the math to believe that anchoring with Bandit MaxDiff works properly and can work in practice.

With Boosted MaxDiff where all items are shown to each respondent (but the best items are oversampled), it seems very straightforward.  I'd recommend not showing every item to the respondent for direct rating on the 2-point scale for the anchoring.  Rather, I'd recommend building a list using the MaxDiff Scores on the Fly, where you take the 1st, 5th, 10th, 15th, 20th, etc. items into the list (or the 1st, 3rd, 5th, etc.).  Explain to the respondent that the computer will do its best, based on the limited information, to order the items from best to worst for them (though it might not be quite perfect), and that we are now looking for a buy/no-buy indication on each of the items.  Direct anchoring follow-up questions do not need to be asked of every item for every respondent; the anchoring step can support missing data on the anchoring question for certain items.  And it seems excessive to ask respondents about every item in the follow-up anchoring questions.
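As a rough sketch of that selection idea (hypothetical Python, not Sawtooth's actual list-building syntax): given the items already ordered best-to-worst by the on-the-fly scores, pull a spread of positions from top to bottom for the anchoring questions.

```python
def anchoring_subset(ranked_items, positions=(1, 5, 10, 15, 20)):
    """Pick items at the given 1-based ranks from a best-to-worst ranked list.

    `ranked_items` is assumed to already be sorted by MaxDiff Scores on the
    Fly, best first.  Positions beyond the list length are simply skipped.
    """
    return [ranked_items[p - 1] for p in positions if p <= len(ranked_items)]

# Example: 20 items, item1 predicted best, item20 predicted worst.
ranked = [f"item{i}" for i in range(1, 21)]
print(anchoring_subset(ranked))  # ['item1', 'item5', 'item10', 'item15', 'item20']
```

The point of spacing the positions out is that the respondent rates a few items from across the whole predicted preference range, rather than only the predicted winners.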

Now, regarding Bandit MaxDiff when there are a large number of items and each respondent only sees a subset: again, you'll only want to ask respondents about a subset of the items.  And you could use the Bandit MaxDiff constructed list command to move only a subset of the items onto the list for the direct anchoring questions.  The trick to remember is that the Bandit MaxDiff constructed list command puts the expected best item in 1st position, the expected second-best item in 2nd position, etc.  If you are using the default Bandit MaxDiff command, then the first 5/6 of the items are drawn from among the most preferred items for the sample.  The remaining 1/6 of the items (at the bottom of the constructed list) are drawn from the items seen fewest times so far, and it's almost always the case (after the first 50 or so respondents have completed the survey) that these items are among the least favored by the sample.  The point of all this explanation is that you should ask the direct anchoring question about a range of items from best to worst.
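To make the shape of that default constructed list concrete, here is a rough sketch of the logic as I've described it (hypothetical Python, not Sawtooth's actual implementation; the function and variable names are mine): the first ~5/6 of the list slots go to the items currently predicted best, ordered best-to-worst, and the final ~1/6 go to the items shown fewest times so far.

```python
def bandit_constructed_list(scores, exposures, n):
    """Sketch of the default Bandit MaxDiff constructed-list ordering.

    scores:    dict of item -> current estimated preference score (higher = better)
    exposures: dict of item -> number of times the item has been shown so far
    n:         number of items to move onto the constructed list
    """
    n_top = round(n * 5 / 6)  # first ~5/6 of slots: predicted-best items
    by_score = sorted(scores, key=scores.get, reverse=True)
    top = by_score[:n_top]
    # Final ~1/6 of slots: least-seen items among those not already chosen.
    remaining = [item for item in by_score if item not in top]
    least_seen = sorted(remaining, key=lambda item: exposures[item])[: n - n_top]
    return top + least_seen

# Example: 12 items, i1 scored best; i12 has been shown least often.
scores = {f"i{k}": 20 - k for k in range(1, 13)}
exposures = {f"i{k}": 13 - k for k in range(1, 13)}
print(bandit_constructed_list(scores, exposures, 6))
# First five slots are the predicted-best items, the last slot the least-seen one.
```

Either way, the resulting list spans predicted winners down to (very likely) predicted losers, which is exactly the spread you want for the direct anchoring question.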
answered Oct 31, 2018 by Bryan Orme Platinum Sawtooth Software, Inc. (164,515 points)
selected Nov 27, 2018 by RafalNeska
Oh, and if it isn't obvious: for Boosted Bandit MaxDiff, where each item is shown to each respondent multiple times, HB or Latent Class estimation is assumed.  For Bandit MaxDiff, where there are a great many items and each respondent doesn't see all of them, aggregate logit (or 1-group Latent Class) is assumed.
Hi Bryan, great to hear from you too :). Many thanks for the detailed explanation.