
Holdout tasks to include in estimation

Hi everyone,
I am at the moment conducting a CBC study. I have now received 340 respondents for a CBC including 10 random tasks (with 4 concepts + none-option) and 3 fixed holdout tasks, where 2 tasks are the same concepts after random task 3 and 9 to see whether respondents answered the same way when having the same offering. The third holdout is to test validity (after random task 6).

The following questions came up before estimating the data with HB:

1. Does it make sense to include one of the holdout tasks used for reliability in the estimation? If so, why?

2. I checked reliability by scoring a 1 if a respondent selected the same concept in both holdout tasks and 0 if not. I only reached a reliability of about 60%. Is this a very low figure? How should I interpret it?

3. In Assessing the Validity of Conjoint Analysis – Continued (1997) (https://www.sawtoothsoftware.com/download/techpap/assess2.pdf), the formula of Wittink and Johnson (1992) on page 5 is used to calculate the highest possible hit rate given a certain test-retest reliability. Is this also applicable to CBC and to validation based on HB estimation? (I would calculate validity as explained in https://sawtoothsoftware.com/forum/12998/computing-holdout-validity-for-the-overall-sample?show=12998#q12998 )

4. I want to use the holdout tasks from the test-retest reliability check to also calculate validity via hit rate. Is that advisable? Can I include both tasks used for the reliability assessment, or only one (since they contain the same concepts, only in a different order)?

Thank you in advance for helping me out.

Best regards,

Marie Belen
asked May 15, 2018 by Marie Belen

1 Answer

0 votes
Dear Marie Belen,

I'll try to answer your questions as well as I can.

1. A single holdout task provides only limited information about how valid your model is, also when you compare the hit rates of different models. Including the other holdout task adds information, but the evidence remains weak.

2. With just one pair of identical tasks, you may not have enough information to obtain an accurate measure of test-retest reliability. In a similar question, Brian Orme gave hints as to why it might be lower than expected and noted that with 4 concepts per task a very reliable group would reach about 75% (https://sawtoothsoftware.com/forum/8599/retest-reliabilty?show=8599#q8599), so you could still argue that yours is a reliable, though not a very reliable, group.

4. As stated in 1., more holdout questions provide more information when assessing validity than a single question. I would therefore suggest including both tasks, given the different order of the concepts you mentioned.
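The hit-rate computation against the holdouts, along the lines of the forum post you linked, can be sketched as follows: sum each respondent's HB part-worths over the levels in each concept, predict the highest-utility alternative, and compare with the actual holdout choice. The function name and array layout are assumptions for illustration; they are not part of the Sawtooth software:

```python
import numpy as np

def hit_rate(utilities, holdout_designs, holdout_choices, none_utility=None):
    """Share of respondents whose observed holdout choice matches the
    choice predicted from their individual-level HB utilities.

    utilities: (n_respondents, n_levels) part-worth utilities.
    holdout_designs: (n_respondents, n_concepts, n_levels) dummy-coded
        design of the holdout task each respondent saw.
    holdout_choices: chosen alternative index per respondent; if a none
        option is present, it is the last index.
    none_utility: optional (n_respondents,) utility of the none option.
    """
    # Total utility of each concept = sum of its level part-worths.
    totals = np.einsum('rl,rcl->rc', utilities, holdout_designs)
    if none_utility is not None:
        totals = np.hstack([totals, none_utility[:, None]])
    predicted = totals.argmax(axis=1)
    return float((predicted == np.asarray(holdout_choices)).mean())
```

Run once per holdout task and average, or pool tasks, depending on how you want to report it. Note that if you include a holdout in the HB estimation (your question 1), you should not also count it toward the hit rate, since the model has already seen those choices.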
answered May 28, 2018 by botmar (395 points)