Threshold for prevention rules in Max Diff evaluation

The Max Diff evaluation will be used to test a list of delivery options, which includes the two options below:

•    Option 1: Free 2-7 Day Shipping
•    Option 3: Free Next Day Shipping


We do not believe that these two options should be shown together in the Max Diff exercise, and we also believe that Option 3 should always win when compared to Option 1. We understand that we may be able to program a rule that prevents these two options from being shown together, but we would like to understand how the program accounts for the missing comparison in this case and how this scenario might play out in the Max Diff results. Any input you can provide to help us fully understand the implications of a prevention rule like this is greatly appreciated.

For clarification, the example above is one of a larger set of scenarios for which we would like to create rules. Therefore, any input you can provide regarding a threshold for the number of prevention rules, as well as how the implications change as more rules are added, is greatly appreciated.
asked Mar 8 by Ethan L

1 Answer

0 votes
Ethan,

We know from conjoint analysis that adding more than a handful of prohibitions like this can really cause problems. But if conjoint analysis is a grumpy old bulldog, MaxDiff is like a friendly puppy: roll it over, ruffle its ears, and it's still happy with you. With MaxDiff you can add a number of these prohibitions (including, in one case I did recently, almost a third of all possible pairs) and it will perform very nicely for you.

The reason this works: say we have items A, B, C, ..., Z. We don't want A and B to appear together in a question, but across our experiment we have plenty of respondents for whom A appeared with M in one question and B appeared with M in another. The same goes for K, which for some respondents appears with both A and B in various questions. With MaxDiff we have so many of these indirect connections between A and B that even if they never show together in a question you'll still get nice utilities for both of them.
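To make the idea of indirect connections concrete, here is a quick Python sketch (not Sawtooth code; the toy design and item labels are made up) that builds the item co-occurrence graph from a design and confirms that every item is still linked to every other item even though A and B never appear together:

# Hypothetical sketch: check that a MaxDiff design with a prohibited pair
# still connects every item through indirect comparisons.
from collections import deque

# Toy design: each question is a tuple of items shown together.
# Items "A" and "B" are prohibited from appearing in the same question.
design = [
    ("A", "C", "M", "K"),
    ("B", "M", "D", "E"),
    ("A", "K", "F", "G"),
    ("B", "K", "H", "C"),
    ("D", "F", "G", "H"),
    ("C", "E", "G", "M"),
]

def is_connected(questions):
    # True if every item can be linked to every other item through
    # chains of within-question co-appearances.
    adjacency = {}
    for question in questions:
        for item in question:
            adjacency.setdefault(item, set()).update(
                other for other in question if other != item)
    items = list(adjacency)
    seen, frontier = {items[0]}, deque([items[0]])
    while frontier:
        for neighbor in adjacency[frontier.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return len(seen) == len(items)

print(is_connected(design))  # True: A and B are linked via M, K, C, ...

As long as that kind of chain exists for every pair you prohibit, the estimation can still place all the items on a common scale.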

Now, if you have a large number of such prohibitions, it's probably a good idea to test your design by putting in some artificial data and making sure the design still lets you estimate utilities. Ideally you'd put in not just random data but data from artificial respondents who have actual utilities for each item, so that you could test how well the model recovers those "known" utilities with designs that have and designs that lack your prohibitions. This kind of simulation study is pretty laborious, however, so I think users rarely do it except for methodological comparisons done for academic purposes.
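If you did want to run a quick check along those lines, here is a rough Python sketch (again, not Sawtooth code; the respondent counts, noise levels, and utilities are all made-up assumptions) that simulates respondents with known utilities, has them answer MaxDiff questions under a prohibition, and then sees how well simple best-minus-worst counts recover the true values:

# Hypothetical sketch of a recovery test: simulate respondents with known
# utilities, generate best/worst answers on a design that prohibits two
# items from appearing together, then compare counting scores to the truth.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_respondents, n_questions, items_per_question = 10, 300, 12, 4
prohibited = (0, 1)  # assume items 0 and 1 may never share a question

true_utilities = np.linspace(-1.5, 1.5, n_items)
scores = np.zeros(n_items)

for _ in range(n_respondents):
    # Each artificial respondent gets noisy individual utilities.
    betas = true_utilities + rng.normal(0, 0.5, n_items)
    for _ in range(n_questions):
        # Redraw the question until the prohibited pair is not shown together.
        while True:
            shown = rng.choice(n_items, items_per_question, replace=False)
            if not (prohibited[0] in shown and prohibited[1] in shown):
                break
        # Pick the item with the highest noisy utility as best, lowest as worst.
        u = betas[shown] + rng.gumbel(size=items_per_question)
        scores[shown[np.argmax(u)]] += 1
        scores[shown[np.argmin(u)]] -= 1

# Correlation between the counting scores and the true utilities.
print(np.corrcoef(true_utilities, scores)[0, 1])

The same idea works with your real design file and your actual prohibitions in place of the random questions above; if the recovered scores track the utilities you put in, the prohibitions aren't hurting you.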
answered Mar 8 by Keith Chrzan Platinum Sawtooth Software, Inc. (72,475 points)
Follow Up Question
...