Some replies below in CAPS:
We would like to combine CBC and MaxDiff (BWS Case 2) in one survey.
In our study, we have 12 choice tasks with 2 alternatives each. After every choice task, we want to ask the respondents for the best and worst attribute level of the chosen alternative. IF YOU DO THIS YOU'LL TEND TO OVER-REPRESENT THE HIGHEST UTILITY ATTRIBUTE LEVELS AND UNDER-REPRESENT THE LOWEST UTILITY ATTRIBUTE LEVELS. THIS MIGHT BE SOMETHING YOU CAN LIVE WITH, DEPENDING ON YOUR RESEARCH OBJECTIVES, BUT IT MIGHT NOT BE. YOU MAY WELL WANT TO MAKE A SEPARATE SET OF BWS CASE 2 TASKS, A SET THAT DOESN'T DEPEND ON THE ALTERNATIVES CHOSEN IN THE CBC.
1) Which design would be most suitable to analyze both exercises (on an aggregated basis); e.g. with an OMEP? Or would we have to analyze the BWS results individually? I THINK YOU'RE BETTER OFF WITH AN EFFICIENT DESIGN THAN AN OMEP; WHETHER YOU MAKE IT IN OUR SOFTWARE OR IN SAS OR IN SOMETHING LIKE NGENE, EFFICIENCY IS WHAT WE USUALLY WANT, BUT IF OUR EXPERIMENT IS ASYMMETRIC WE CAN'T GET THERE WITH AN OMEP. WHICHEVER DESIGN STRATEGY YOU USE, YOU SHOULD BE ABLE TO CODE THE BWS CASE 2 EXPERIMENT IN A WAY THAT IS CONSISTENT WITH THE CBC CODING. YOU MAY WANT TO TAKE INTO ACCOUNT THE DIFFERENCE IN SCALE PARAMETERS (CBC DATA CONTAINS MORE RESPONSE ERROR) WHEN YOU RUN YOUR COMBINED MODEL.
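To make the "consistent coding" idea concrete, here is a minimal sketch (not from the post) of effects-coding a CBC profile and a BWS Case 2 item onto the same design-matrix columns, with an illustrative scale multiplier applied to the noisier CBC rows before pooling. The attributes (a 3-level brand and a 3-level price) and the scale value 0.7 are assumptions for illustration only:

```python
import numpy as np

def effects_code(level, n_levels):
    """Effects-code one attribute level into n_levels - 1 columns."""
    x = np.zeros(n_levels - 1)
    if level < n_levels - 1:
        x[level] = 1.0
    else:
        x[:] = -1.0  # reference (last) level gets -1 in every column
    return x

def code_cbc_alternative(brand, price):
    """CBC row: a full profile, i.e. the concatenated codes of all its levels."""
    return np.concatenate([effects_code(brand, 3), effects_code(price, 3)])

def code_bws_item(attr, level):
    """BWS Case 2 row: one attribute level in isolation, placed on the
    same columns as the CBC coding (zeros elsewhere), so the pooled
    model estimates a single set of level utilities."""
    x = np.zeros(4)
    if attr == 0:            # brand occupies columns 0-1
        x[0:2] = effects_code(level, 3)
    else:                    # price occupies columns 2-3
        x[2:4] = effects_code(level, 3)
    return x

# Illustrative relative scale factor < 1: multiplying the CBC rows by it
# down-weights the higher-error CBC data in a pooled logit estimation.
cbc_scale = 0.7
cbc_row = cbc_scale * code_cbc_alternative(brand=0, price=2)
bws_row = code_bws_item(attr=1, level=0)
print(cbc_row)   # scaled full-profile coding
print(bws_row)   # single-level coding on the same columns
```

Stacking rows like these into one design matrix and one choice vector is what lets a single MNL/HB run fit both exercises; the scale factor itself would normally be estimated rather than fixed, as the reply above notes.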
2) Regardless of 1), when we ask the respondents for the best and worst attribute level of the chosen alternative in the CBC, how do we technically implement this second question in Lighthouse so that the respondents only see the levels of the alternative they chose before? How do we address these levels in the software? THIS WOULD INVOLVE A FAIR AMOUNT OF CUSTOM SCRIPTING TO ACCOMPLISH IN LIGHTHOUSE. ADDING THIS TO THE FACT NOTED ABOVE THAT IT MAY NOT EVEN BE A DESIRABLE THING TO DO, YOU MAY WELL BE BETTER OFF MAKING A SEPARATE BWS EXPERIMENT.