I am considering conducting my first Best-Worst Case 2 MaxDiff, so I would like to know whether the approach below is correct.
Assume the 3 product attributes below (15 items total for MaxDiff design purposes):
(1) Brand (5 levels)
(2) Feature A (5 levels)
(3) Price (5 levels)
Step 1: Generate the MaxDiff design in Lighthouse Studio as usual, but prohibit levels within each attribute from appearing together in a MaxDiff task/screen (e.g., Brand 1 [item 1] prohibited with Brands 2/3/4/5 [items 2-5]), so each task shows exactly one level per attribute. Test the design; it will report uneven one-way/two-way frequencies, which is expected given we're doing Best-Worst Case 2 MaxDiff. (A sketch for enumerating the prohibited pairs is below.)
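To double-check the prohibition list before entering it, here is a minimal sketch, assuming items are numbered in attribute blocks (1-5 Brand, 6-10 Feature A, 11-15 Price), that enumerates every within-attribute pair that must be prohibited:

```python
from itertools import combinations

# Item numbering assumed: 1-5 = Brand, 6-10 = Feature A, 11-15 = Price.
attributes = {
    "Brand":     range(1, 6),
    "Feature A": range(6, 11),
    "Price":     range(11, 16),
}

# Prohibit every pair of items within the same attribute so each
# MaxDiff task shows exactly one level per attribute.
prohibited_pairs = [
    pair
    for items in attributes.values()
    for pair in combinations(items, 2)
]

print(len(prohibited_pairs))  # 3 attributes x C(5,2) = 30 pairs
for a, b in prohibited_pairs:
    print(f"{a},{b}")
```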
Step 2: Export the MaxDiff design and alter it manually in Excel so that each MaxDiff task/screen shows the correct/logical product-profile attribute order (Brand 1st, Feature A 2nd, Price 3rd). Save the altered design as a CSV file. (A sketch of this reordering is below.)
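Rather than reordering by hand, the Excel step could also be scripted. A minimal sketch, assuming the exported CSV has one row per task with columns Version, Set, Item1, Item2, Item3 (hypothetical names; check the actual export headers). Because items are numbered in attribute blocks, sorting the three item codes ascending within each row yields the Brand, Feature A, Price order:

```python
import numpy as np
import pandas as pd

design = pd.read_csv("maxdiff_design.csv")  # hypothetical filename

# Sort the item codes within each task row; with block numbering
# (1-5 Brand, 6-10 Feature A, 11-15 Price), ascending order is the
# desired Brand -> Feature A -> Price profile order.
item_cols = ["Item1", "Item2", "Item3"]
design[item_cols] = np.sort(design[item_cols].to_numpy(), axis=1)

design.to_csv("maxdiff_design_reordered.csv", index=False)
```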
Step 3: Import the altered design CSV file into a COPY of the Lighthouse Studio project created in Step 1. Do not alter anything else in the project: keep the prohibitions entered in Step 1 and all other design settings, so the only change is the design itself, which is replaced when the CSV file is imported.
Step 4: Retest the MaxDiff design in Lighthouse Studio. The design test will report positional frequencies that are very unbalanced, which is expected given we're doing Best-Worst Case 2 MaxDiff.
Step 5: Export the design file from Lighthouse Studio so I can send it to the external programming vendor (note: this study will be programmed/hosted outside of Lighthouse Studio).
Step 6: Create random dummy data for ~50 respondents and test-run HB utilities in Lighthouse Studio to ensure HB will run with no errors/issues. (A sketch for generating the dummy data is below.)
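A minimal sketch for the dummy data, assuming the reordered design from Step 2 and a simple answer-file layout (RespID, Version, Set, Best, Worst; the actual layout your HB setup expects may differ):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
design = pd.read_csv("maxdiff_design_reordered.csv")
n_versions = design["Version"].nunique()

rows = []
for resp_id in range(1, 51):                     # ~50 dummy respondents
    version = rng.integers(1, n_versions + 1)    # random design version
    for _, task in design[design["Version"] == version].iterrows():
        items = task[["Item1", "Item2", "Item3"]].tolist()
        best, worst = rng.choice(items, size=2, replace=False)
        rows.append({"RespID": resp_id, "Version": version,
                     "Set": task["Set"], "Best": best, "Worst": worst})

pd.DataFrame(rows).to_csv("dummy_maxdiff_data.csv", index=False)
```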
Step 7: Once all data are collected, run the default Lighthouse Studio HB analysis on the Total sample to obtain respondent-level MaxDiff scores.
NOTE: For simplicity of analysis and client understanding, I would like, if possible, to use only the Probability Scale (where scores sum to 100 across the 15 items) for ALL results reported to the client.
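For reference, my understanding of the Probability Scale transformation (as described in Sawtooth's MaxDiff documentation; Lighthouse Studio produces these scores directly, so this sketch is only for sanity-checking vendor output) is: convert each raw HB logit utility to the probability of being picked best from a task of a items (a = 3 here, one level per attribute), then normalize so the 15 scores sum to 100:

```python
import numpy as np

def probability_scale(utilities, items_per_task=3):
    # p_i = exp(u_i) / (exp(u_i) + a - 1): probability that item i is
    # picked best from a task of `items_per_task` items, then rescaled
    # so the scores sum to 100 across all items.
    u = np.asarray(utilities, dtype=float)
    p = np.exp(u) / (np.exp(u) + items_per_task - 1)
    return 100 * p / p.sum()

# Made-up raw utilities for one respondent (15 items):
raw = [1.2, 0.4, -0.3, -1.0, -0.3,    # Brand 1-5
       0.8, 0.1, -0.5, 0.3, -0.7,     # Feature A 1-5
       1.5, 0.6, -0.2, -0.9, -1.0]    # Price 1-5
scores = probability_scale(raw)
print(scores.round(2), scores.sum())  # sums to 100
```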
Step 8: "Attribute Importance" can be computed by summing Probability Scale scores for items contained in each attribute (e.g., for Brand: Probability Scale scores for items 1-5 would be summed). This would be done within each respondent and then averaged across respondents being analyzed (e.g., Total sample, Males, etc.). Across all attributes,
importance scores would sum to 100 points AND number of points for each attribute indicates attribute importance out of 100 importance points.
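A minimal sketch of Step 8, assuming a respondent-by-item DataFrame of Probability Scale scores (columns item1..item15, each row summing to 100; the names are illustrative):

```python
import numpy as np
import pandas as pd

attribute_items = {
    "Brand":     [f"item{i}" for i in range(1, 6)],
    "Feature A": [f"item{i}" for i in range(6, 11)],
    "Price":     [f"item{i}" for i in range(11, 16)],
}

def attribute_importance(scores):
    # Within-respondent sum of Probability Scale scores per attribute;
    # each row still sums to 100, so column means are the importances.
    return pd.DataFrame({attr: scores[cols].sum(axis=1)
                         for attr, cols in attribute_items.items()})

# Fake Probability Scale data for 3 respondents, just to show usage:
rng = np.random.default_rng(0)
fake = rng.random((3, 15))
scores = pd.DataFrame(100 * fake / fake.sum(axis=1, keepdims=True),
                      columns=[f"item{i}" for i in range(1, 16)])
print(attribute_importance(scores).mean().round(1))  # sums to ~100
```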
Step 9: "Relative Desirability of levels within each attribute" can be computed by dividing each attribute level's Probability Scale score by the sum of Probability Scale scores across all of the attribute's levels. For example for Brand, you would sum Probability Scale scores for Brands 1/2/3/4/5 and then divide each brand's Probability Scale score by that sum. This would be done within each respondent and then averaged across respondents being analyzed (e.g., Total sample, Males, etc.). Within each attribute, number of points for each attribute level indicates attribute level desirability out of 100 desirability points for that attribute.
Step 10: "Product Desirability" can be computed by summing up Probability Scale scores for attribute levels contained in each product. For our example, you would sum Probability scores for the one level of Brand, one level of Feature A, and one level of Price that is contained in a product. This would be done within each respondent and then averaged across respondents being analyzed (e.g., Total sample, Males, etc.). This indicates how desirable an INDIVIDUAL product is (out of 100 total desirability points). Note that this study only needs to look at individual products (there is not need for head-to-head product simulations involving multiple products as in conjoint-style simulations).
For Step 10: is there also a procedure/formula to translate the product desirability number computed in Step 10 into the percent of respondents who find the product appealing / have high purchase intent?