Best-Worst Case 2 Max Diff Procedure (Step-by-Step)

I am considering conducting my first Best-Worst Case 2 Max Diff, so I would
like to know whether the approach below is correct.

Assume 3 product Attributes below (so 15 items for max diff design purposes):
(1) Brand (5 levels)
(2) Feature A (5 levels)
(3) Price (5 levels)

Step 1: Generate the max diff design in LightHouse Studio as usual, but prohibit levels within the same attribute from being shown together in a max diff task/screen (e.g., Brand 1/item 1 prohibited with Brands 2/3/4/5 [items 2-5]). Test the design; it will indicate uneven one-way/two-way frequencies (which is OK given we're doing Best-Worst Case 2 Max Diff).

Step 2: Export the max diff design and alter it manually in Excel so that each max diff task/screen shows the attributes in the correct/logical Product Profile order (Brand 1st, Feature A 2nd, Price 3rd). Save the altered design as a CSV file.

Step 3: Import altered design CSV file into a COPY of the LightHouse Studio project created in Step 1 above. Do not alter anything else in the LightHouse Studio project - so keep prohibitions entered in Step 1 as well as all other design settings (the only thing altered is the design that gets changed when CSV file is imported).

Step 4: Retest max diff design in LightHouse Studio. Design test will indicate Positional Frequencies that are very unbalanced (which is OK given we're doing Best-Worst Case 2 Max Diff).

Step 5: Export design file from LightHouse Studio so I can send to external programming vendor (note: this study will be programmed/hosted outside of LightHouse Studio).

Step 6: Create random dummy data for ~50 respondents and test run HB utilities in LightHouse Studio to ensure HB will run with no errors/issues.

Step 7: Once all data is collected, run default LightHouse Studio HB analysis on Total sample to obtain respondent-level max diff scores.

NOTE: For simplicity of analysis and client understanding, if possible
I would like to use only the Probability Scale (where scores sum to 100 across the 15 items) for ALL results reported to the client.  

Step 8: "Attribute Importance" can be computed by summing Probability Scale scores for the items contained in each attribute (e.g., for Brand, the Probability Scale scores for items 1-5 would be summed). This would be done within each respondent and then averaged across the respondents being analyzed (e.g., Total sample, Males, etc.). Across all attributes, importance scores would sum to 100 points, and the number of points for each attribute indicates that attribute's importance out of 100 importance points.
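To make the arithmetic in Step 8 concrete, here is a minimal Python sketch. The scores are random dummy data standing in for real HB Probability Scale output (50 respondents x 15 items, grouped 5 levels per attribute as in the question); the grouping and all variable names are illustrative assumptions, not LightHouse Studio output.

```python
import numpy as np

# Dummy probability-scale scores: 50 respondents x 15 items
# (items 1-5 = Brand, items 6-10 = Feature A, items 11-15 = Price).
# Each row is rescaled so a respondent's scores sum to 100.
rng = np.random.default_rng(0)
raw = rng.random((50, 15))
probs = 100 * raw / raw.sum(axis=1, keepdims=True)

# Step 8: sum each attribute's items within respondent, then average.
attributes = {"Brand": slice(0, 5), "Feature A": slice(5, 10), "Price": slice(10, 15)}
importance = {name: probs[:, cols].sum(axis=1).mean()
              for name, cols in attributes.items()}

print(importance)                 # one averaged sum per attribute
print(sum(importance.values()))   # the three sums total 100
```

Because each respondent's 15 scores sum to 100, the three attribute sums total 100 for every respondent, and so do their averages.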

Step 9: "Relative Desirability of levels within each attribute" can be computed by dividing each attribute level's Probability Scale score by the sum of Probability Scale scores across all of that attribute's levels. For example, for Brand you would sum the Probability Scale scores for Brands 1/2/3/4/5 and then divide each brand's Probability Scale score by that sum. This would be done within each respondent and then averaged across the respondents being analyzed (e.g., Total sample, Males, etc.). Within each attribute, the number of points for each level indicates that level's desirability out of 100 desirability points for the attribute.
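The Step 9 rescaling might look like the following Python sketch, again using random dummy scores in place of real HB output (the item grouping is the assumed Brand = items 1-5 layout from the question):

```python
import numpy as np

# Dummy probability-scale scores (each respondent's row sums to 100).
rng = np.random.default_rng(1)
raw = rng.random((50, 15))
probs = 100 * raw / raw.sum(axis=1, keepdims=True)

# Step 9: rescale Brand's five scores to sum to 100 within each respondent,
# then average those shares across respondents.
brand = probs[:, 0:5]                                   # Brand = items 1-5
brand_shares = 100 * brand / brand.sum(axis=1, keepdims=True)
avg_shares = brand_shares.mean(axis=0)

print(avg_shares)         # desirability of Brands 1-5
print(avg_shares.sum())   # sums to 100 within the attribute
```

The same three lines would be repeated with columns 5-10 for Feature A and 10-15 for Price.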

Step 10: "Product Desirability" can be computed by summing the Probability Scale scores for the attribute levels contained in each product. For our example, you would sum the Probability Scale scores for the one level of Brand, one level of Feature A, and one level of Price contained in a product. This would be done within each respondent and then averaged across the respondents being analyzed (e.g., Total sample, Males, etc.). This indicates how desirable an INDIVIDUAL product is (out of 100 total desirability points). Note that this study only needs to look at individual products (there is no need for head-to-head product simulations involving multiple products as in conjoint-style simulations).

For Step 10 - Is there also a procedure/formula to translate the product desirability number computed in Step 10 into the percent of respondents who find the product appealing/have high purchase intent?

Thank you.
asked Mar 6 by anonymous

1 Answer

0 votes
I'm with you through step 7.  

I understand the desire to work with probability-transformed scores, but steps 8-10 have a different meaning than similar concepts in conjoint analysis. This is particularly glaring for step 8, where your sum will be strongly affected by the number of levels an attribute has. Ordinarily with BW-2 we'd compute importances like we do in conjoint analysis, where importance is 100 times an attribute's range divided by the sum of the attributes' ranges.
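For concreteness, that range-based importance computation might be sketched as follows in Python. Random dummy utilities stand in for real HB output, and the 5-levels-per-attribute grouping comes from the question; nothing here is LightHouse Studio's own code.

```python
import numpy as np

# Dummy raw HB utilities (e.g., zero-centered): 50 respondents x 15 items,
# grouped 5 levels per attribute as in the question.
rng = np.random.default_rng(3)
utils = rng.normal(size=(50, 15))

attributes = {"Brand": slice(0, 5), "Feature A": slice(5, 10), "Price": slice(10, 15)}

# Importance = 100 * (attribute's utility range) / (sum of all attributes' ranges),
# computed within each respondent and then averaged across respondents.
ranges = np.column_stack([utils[:, s].max(axis=1) - utils[:, s].min(axis=1)
                          for s in attributes.values()])
importances = 100 * ranges / ranges.sum(axis=1, keepdims=True)

print(dict(zip(attributes, importances.mean(axis=0))))   # averages sum to 100
```

Because each respondent's three importances sum to 100 by construction, the averages do too, and the result is not inflated by an attribute simply having more levels.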

I think steps 9 and 10 make sense as long as you're clear in letting the end user know how they're computed (I think that step 10 will have a very natural interpretation that clients will "get").

For your last point, measuring absolute level of appeal or PI: if I want to do that with BW-2, I usually add a follow-up question to the effect of "would you really buy (choose/acquire/recommend/whatever) or not," either as a yes/no choice or as a 5-point PI scale, say; then I combine that overall evaluation with the BW-2 data to effectively get a "none" utility. Without that step I don't see how you get away from the fact that MaxDiff utilities on their own are relative measures.
answered Mar 9 by Keith Chrzan Platinum Sawtooth Software, Inc. (90,475 points)
Thank you for the very helpful response.  

To compute Attribute Importance as you described, can you still do that using the Probability Scale scores, OR must you use a different set of scores from the default Lighthouse Studio HB run, such as the Zero-Centered Raw Scores?

For your last point about adding the follow-up question "Would you really buy...", do you somehow include that in your Lighthouse Studio HB run (if so, how is that done in the software, and how do you then compute/use the "None" utility), OR do you just use the follow-up question to essentially calibrate or transform the "Product Desirability" value computed in Step 10 to reflect respondent purchase intent/appeal?
Is there a resource, such as a Sawtooth manual or paper, that answers these follow-up questions and provides further step-by-step information on BW-2? Thank you.
...