
CBC and MaxDiff in combination

Hi,

We would like to combine CBC and MaxDiff (BWS Case 2) in one survey.
In our study, we have 12 choice tasks with 2 alternatives each. After every choice task, we want to ask respondents for the best and worst attribute levels of the chosen alternative.

1) Which design would be most suitable for analyzing both exercises (on an aggregated basis)? Is there one design we can use for both exercises simultaneously, e.g. an OMEP, or would we have to analyze the BWS results separately?

2) Regardless of 1), when we ask respondents for the best and worst attribute levels of the chosen alternative in the CBC, how do we technically implement this second question in Lighthouse so that respondents only see the levels of the alternative they have just chosen? How do we address these levels in the software?

Kind regards,
Andrew
asked Jul 24 by Andrew Bronze (780 points)
retagged Jul 24 by Walter Williams

1 Answer

Andrew,

Some replies below in CAPS:

We would like to combine CBC and MaxDiff (BWS Case 2) in one survey.
In our study, we have 12 choice tasks with 2 alternatives each. After every choice task, we want to ask respondents for the best and worst attribute levels of the chosen alternative.  IF YOU DO THIS YOU'LL TEND TO OVER-REPRESENT THE HIGHEST-UTILITY ATTRIBUTE LEVELS AND UNDER-REPRESENT THE LOWEST-UTILITY ATTRIBUTE LEVELS.  THIS MIGHT BE SOMETHING YOU CAN LIVE WITH, DEPENDING ON YOUR RESEARCH OBJECTIVES, BUT IT MIGHT NOT.  YOU MAY WELL WANT TO MAKE A SEPARATE SET OF BWS CASE 2 TASKS, ONE THAT DOESN'T DEPEND ON THE ALTERNATIVES CHOSEN IN THE CBC.
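A quick way to see this selection effect is to simulate logit choices between two alternatives and compare how often each level appears among the chosen alternatives versus among everything shown. The Python sketch below uses made-up utilities for a single 4-level attribute and is purely illustrative:

    # Illustrative simulation (made-up utilities): levels of chosen alternatives
    # over-represent high-utility levels even though levels are shown uniformly.
    import numpy as np

    rng = np.random.default_rng(1)
    true_utils = np.array([-1.0, -0.3, 0.3, 1.0])   # hypothetical utilities for one 4-level attribute
    n_tasks = 100_000

    levels = rng.integers(0, 4, size=(n_tasks, 2))  # two alternatives per task, one random level each
    v = true_utils[levels] + rng.gumbel(size=(n_tasks, 2))  # utility plus Gumbel error -> logit choices
    chosen = levels[np.arange(n_tasks), v.argmax(axis=1)]   # level of the chosen alternative

    shown = np.bincount(levels.ravel(), minlength=4) / levels.size
    picked = np.bincount(chosen, minlength=4) / n_tasks
    print("share shown: ", np.round(shown, 3))   # roughly uniform across levels
    print("share chosen:", np.round(picked, 3))  # skewed toward the high-utility levels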

1) Which design would be most suitable for analyzing both exercises (on an aggregated basis), e.g. an OMEP? Or would we have to analyze the BWS results separately?  I THINK YOU'RE BETTER OFF WITH AN EFFICIENT DESIGN THAN AN OMEP; WHETHER YOU MAKE IT IN OUR SOFTWARE, IN SAS, OR IN SOMETHING LIKE NGENE, EFFICIENCY IS WHAT WE USUALLY WANT, BUT IF OUR EXPERIMENT IS ASYMMETRIC WE CAN'T GET THERE WITH AN OMEP.  WHICHEVER DESIGN STRATEGY YOU USE, YOU SHOULD BE ABLE TO CODE THE BWS CASE 2 EXPERIMENT IN A WAY THAT IS CONSISTENT WITH THE CBC CODING.  YOU MAY WANT TO TAKE INTO ACCOUNT THE DIFFERENCE IN SCALE PARAMETERS (CBC DATA CONTAINS MORE RESPONSE ERROR) WHEN YOU RUN YOUR COMBINED MODEL.
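One simple way to pool the two exercises, assuming both are dummy- or effects-coded on the same attribute levels so a single set of utilities applies, is a combined MNL in which the BWS observations get their own relative scale factor. The Python sketch below uses tiny made-up data and scipy just to show the mechanics of that scale adjustment; it is only an illustration, not Sawtooth's estimation routine:

    # Pooled MNL sketch with a relative scale factor for the BWS data (toy data only).
    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, tasks):
        """tasks: list of (X, chosen_idx, is_bws); X has one row per alternative."""
        beta, log_mu = params[:-1], params[-1]
        mu = np.exp(log_mu)                  # relative scale of the BWS data, kept positive
        ll = 0.0
        for X, chosen, is_bws in tasks:
            v = X @ beta
            if is_bws:
                v = mu * v                   # BWS utilities rescaled relative to CBC
            v = v - v.max()                  # numerical stability
            ll += v[chosen] - np.log(np.exp(v).sum())
        return -ll

    # Toy data: two CBC tasks (2 alternatives each) and two BWS "best" tasks
    # (4 level-alternatives each), all coded with the same 3 hypothetical parameters.
    rng = np.random.default_rng(0)
    tasks = [(rng.normal(size=(2, 3)), 0, False),
             (rng.normal(size=(2, 3)), 1, False),
             (rng.normal(size=(4, 3)), 2, True),
             (rng.normal(size=(4, 3)), 0, True)]

    res = minimize(neg_loglik, x0=np.zeros(4), args=(tasks,), method="BFGS")
    print("pooled beta:", res.x[:-1])
    print("relative BWS scale:", np.exp(res.x[-1]))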

2) Regardless of 1), when we ask respondents for the best and worst attribute levels of the chosen alternative in the CBC, how do we technically implement this second question in Lighthouse so that respondents only see the levels of the alternative they have just chosen? How do we address these levels in the software?  THIS WOULD INVOLVE A FAIR AMOUNT OF CUSTOM SCRIPTING TO ACCOMPLISH IN LIGHTHOUSE.  ADDING THIS TO THE FACT NOTED ABOVE THAT IT MAY NOT EVEN BE A DESIRABLE THING TO DO, YOU MAY WELL BE BETTER OFF MAKING A SEPARATE BWS EXPERIMENT.
answered Jul 24 by Keith Chrzan Gold Sawtooth Software, Inc. (47,900 points)
Keith,

Thank you very much for your valuable comments.
We see the issues with this approach and the problems with the technical implementation. We have reconsidered our plan and adapted it as follows: we run a standard CBC with 2 alternatives using an orthogonal design, and then reuse the columns for the first alternative in the experimental design for a subsequent BWS task. Regardless of which alternative (A or B) a respondent chose in the CBC, the respondent would pick the best and worst levels of alternative A in every task (e.g. task 1 CBC: choose A or B; task 1 BWS: pick best/worst of A; task 2 CBC: choose A or B; task 2 BWS: pick best/worst of A; and so on). For this reason, an orthogonal design might provide adequate level balance for the BWS exercise (a quick balance check is sketched below).

Kind regards,
Andrew
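As a quick sanity check on that plan, once the design is exported one can count how often each level of each attribute appears in the reused first-alternative columns. The Python snippet below uses a random placeholder matrix in place of the real exported design, just to show the kind of tally to look at:

    # Level-balance check for the reused first-alternative columns.
    # design_alt_A is a placeholder; substitute the exported design (one row per task,
    # one column per attribute, values = level indices).
    import numpy as np

    rng = np.random.default_rng(42)
    n_tasks, n_attrs, n_levels = 12, 4, 3          # hypothetical sizes
    design_alt_A = rng.integers(1, n_levels + 1, size=(n_tasks, n_attrs))

    for attr in range(n_attrs):
        counts = np.bincount(design_alt_A[:, attr], minlength=n_levels + 1)[1:]
        print(f"Attribute {attr + 1}: level counts {counts.tolist()}")
    # With a balanced orthogonal design, each level of a 3-level attribute
    # should appear 12 / 3 = 4 times across the 12 reused tasks.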
...