ACBC Test Design: Rule of Thumb for D-Efficiency

Hi,

I am conducting an ACBC analysis and would like some indication of whether my conjoint design is reasonable, so I made use of the design testing module.

While trying to interpret the results, I came across other contributions in this forum that offer guidelines, for instance regarding the reported standard errors:

 "As a basic rule of thumb, we recommend trying to strive for a questionnaire and sample size so that the largest standard error across any of the attribute levels is somewhere around 0.05 for main effects (preferably 0.03)"

I was delighted to see that my values remained below that threshold.

However, for D-efficiency I couldn't find a similar rule of thumb. Does anyone know whether there is an acceptable range here as well? FYI: my D-value is 0.536 (I summed the individual respondent values and divided by my sample size). Or does it not make sense to interpret this value at all? If not, are there any other options that would allow me to make statements about the efficiency of my conjoint design?

Many thanks in advance!
asked Jan 15 by Daniel

1 Answer

+1 vote
Good questions.  The D-efficiencies reported in the ACBC test design area are individual-level D-efficiencies (computed for each respondent, considering just the choice tasks that respondent received).  They guide the level swapping and relabeling steps in the design algorithm.  We haven't developed rules of thumb for how large those should be, and since we don't perform purely individual-level estimation, such a rule doesn't seem especially relevant to me.
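
If it helps to see what a D-efficiency is roughly measuring, here is a simplified sketch (my own illustration with a made-up coded design matrix, not necessarily the exact computation behind the ACBC test design report).  It uses the common linear-model definition |X'X|^(1/p) / N, where N is the number of rows and p the number of parameters; that ratio reaches 1.0 for a perfectly balanced, orthogonal two-level design.

import numpy as np

def d_efficiency(X):
    # X: coded design matrix for one respondent's tasks
    # (rows = profiles shown, columns = coded parameters)
    n, p = X.shape
    info = X.T @ X                      # information matrix (up to scaling)
    det = np.linalg.det(info)
    if det <= 0:
        return 0.0                      # singular design: no efficiency
    return det ** (1.0 / p) / n         # ~1.0 = balanced and orthogonal

# Hypothetical example: 12 profiles, 4 effects-coded parameters
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 0.0, 1.0], size=(12, 4))
print(round(d_efficiency(X), 3))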

Both CBC and ACBC designs are rather sparse at the individual level (we expect individual-level D-efficiencies to become relatively low as the attribute list grows fairly large, as is often the case in ACBC), so usually the researcher is more concerned with essentials such as how often each attribute level appears per respondent, plus how well the overall estimation will work when HB is applied (i.e. the overall quality of the design considering all respondents).  HB leverages all the data (across all respondents) to estimate individual-level utilities, overall population mean utilities, and overall population covariances.  So, it is working with much more information than just a single individual's choice tasks.

So, other rules of thumb seem to apply better for ACBC: each attribute level should appear at least 2x and preferably 3x per respondent.  And, the overall standard errors (as reported in the software using pooled logit estimation) should be 0.05 or less.  Those are things the software automatically reports.  But, they are based on random-responding robotic respondents.  These tend to be fairly good for representing what might happen when you interview real humans, but they are never perfect.
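
If you want to double-check the 2x/3x rule yourself on an exported design, a count like the one below is easy to run.  The file name and column layout (one row per concept, a respondent column plus one column per attribute) are just assumptions for the sketch; adapt them to your actual export.

import csv
from collections import Counter

def level_counts_per_respondent(rows):
    # rows: dicts like {'respondent': '1001', 'att1': '2', 'att2': '3', ...}
    # Returns {respondent: Counter({(attribute, level): count, ...})}
    counts = {}
    for row in rows:
        per_resp = counts.setdefault(row['respondent'], Counter())
        for att, level in row.items():
            if att != 'respondent':
                per_resp[(att, level)] += 1
    return counts

# Flag any level a respondent sees fewer than 2 times
with open('acbc_design_export.csv', newline='') as f:   # hypothetical file name
    counts = level_counts_per_respondent(csv.DictReader(f))
for resp, per_resp in counts.items():
    rare = {key: n for key, n in per_resp.items() if n < 2}
    if rare:
        print(resp, rare)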

More sophisticated things could be done with a great deal of extra programming: programming robotic respondents to answer similarly to how real respondents do, and then comparing known true utilities to the utilities estimated from those robotic respondents.
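
To make that concrete, here is a bare-bones sketch of the general idea (my own illustration, not our simulation tooling, and it uses a plain pooled logit fit by gradient ascent as a stand-in for HB): give robotic respondents known "true" part-worths, let them answer randomly generated choice tasks with logit probabilities, estimate part-worths from those answers, and see how well the estimates recover the truth.

import numpy as np

rng = np.random.default_rng(1)
n_resp, n_tasks, n_alts, n_par = 300, 10, 3, 6

# "True" part-worths: population means plus respondent-level noise
true_mean = rng.normal(0, 1, n_par)
betas = true_mean + rng.normal(0, 0.5, (n_resp, n_par))

# Random coded tasks; robots choose with logit probabilities
X = rng.choice([-1.0, 1.0], size=(n_resp, n_tasks, n_alts, n_par))
util = np.einsum('rtap,rp->rta', X, betas)
prob = np.exp(util) / np.exp(util).sum(axis=2, keepdims=True)
choices = np.array([[rng.choice(n_alts, p=prob[r, t]) for t in range(n_tasks)]
                    for r in range(n_resp)])

# Pooled logit fit by gradient ascent (a simple stand-in for HB)
r_idx = np.arange(n_resp)[:, None]
t_idx = np.arange(n_tasks)[None, :]
chosen_X = X[r_idx, t_idx, choices]          # design rows of the chosen concepts
b = np.zeros(n_par)
for _ in range(500):
    u = np.einsum('rtap,p->rta', X, b)
    p_hat = np.exp(u) / np.exp(u).sum(axis=2, keepdims=True)
    grad = (chosen_X - np.einsum('rtap,rta->rtp', X, p_hat)).mean(axis=(0, 1))
    b += 0.1 * grad

# Recovery check: correlation between estimated and true mean part-worths
print(np.corrcoef(b, true_mean)[0, 1])

With a reasonable design and enough robotic respondents, that correlation should come out close to 1; weak recovery would be a warning sign about the design.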
answered Jan 15 by Bryan Orme Platinum Sawtooth Software, Inc. (148,340 points)
Great! Thank you very much for the detailed information!