Some CBC software packages estimate the efficiency of a design on a 100-point scale relative to a theoretically ideal design with the same attribute structure and choice tasks. But our CBC software does not.
Rather, our CBC software's default "balanced overlap" designs seek to maximize one-way and two-way level balance while allowing a target degree of level overlap within each choice task (the same level repeating across concepts in a task). It is well established that a design with perfect one-way and two-way level balance is extremely efficient, with extremely low D-error.
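For readers who want to see the mechanics, D-error for CBC is usually defined on the information matrix of the logit model at assumed parameter values; the simpler linear-design version below (a minimal sketch, with hypothetical effects-coded design matrices for two 2-level attributes) shows the basic idea that better-balanced, more orthogonal designs score lower:

```python
import numpy as np

def d_error(X):
    # D-error of a stacked, coded design matrix X: det((X'X)^-1)^(1/k),
    # where k is the number of parameters. Lower is better.
    k = X.shape[1]
    return np.linalg.det(np.linalg.inv(X.T @ X)) ** (1.0 / k)

# Hypothetical effects-coded toy designs (+1/-1) for two 2-level attributes.
balanced = np.array([[ 1,  1],
                     [ 1, -1],
                     [-1,  1],
                     [-1, -1]], dtype=float)   # orthogonal and level-balanced
unbalanced = np.array([[ 1,  1],
                       [ 1,  1],
                       [ 1, -1],
                       [-1, -1]], dtype=float) # level 1 of attribute 1 over-used

print(d_error(balanced), d_error(unbalanced))  # the balanced design scores lower
```

Relative D-efficiency of design A versus design B is then just the ratio of their D-errors.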
If you are only examining a single design (rather than comparing the relative D-efficiency of two designs), then to assess whether it will work well when the rubber hits the road, we recommend creating robotic respondents who answer randomly at the target sample size you plan to collect, estimating an aggregate logit model, and examining the standard errors of the estimates. Our software does this automatically when you run the Test Design step. Our rule of thumb (based on many years of experience) is to look for standard errors of 0.05 or less for the main-effect estimates. This approach jointly assesses the quality of the design and the adequacy of your sample size.
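The robotic-respondent check described above can be sketched in a few lines. This is a hedged illustration, not our software's actual procedure: the design sizes, attribute levels, and effects coding are all assumptions, and a real test would read the study's exported design file rather than generating concepts at random.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate random answers at the planned sample size, fit an aggregate
# logit model, and inspect the standard errors of the estimates.
rng = np.random.default_rng(7)
n_resp, n_tasks, n_alts = 300, 10, 3   # assumed: 300 respondents x 10 tasks x 3 concepts
levels = [3, 3, 2]                     # assumed: three attributes with 3/3/2 levels
n_params = sum(l - 1 for l in levels)  # effects coding uses (levels - 1) columns each

def effects_code(level, n_levels):
    # Effects-code one attribute level into n_levels - 1 columns.
    row = np.zeros(n_levels - 1)
    if level == n_levels - 1:
        row[:] = -1.0                  # reference level coded -1 throughout
    else:
        row[level] = 1.0
    return row

# Randomly generated concepts stand in for the real design matrix.
n_obs = n_resp * n_tasks
X = np.array([[np.concatenate([effects_code(rng.integers(l), l) for l in levels])
               for _ in range(n_alts)] for _ in range(n_obs)])
y = rng.integers(n_alts, size=n_obs)   # robots choose uniformly at random

def neg_log_lik(beta):
    u = X @ beta
    u -= u.max(axis=1, keepdims=True)  # numerical stability
    return -(u[np.arange(n_obs), y] - np.log(np.exp(u).sum(axis=1))).sum()

res = minimize(neg_log_lik, np.zeros(n_params), method="BFGS")

# Standard errors from the observed information matrix of the MNL model.
u = X @ res.x
p = np.exp(u - u.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
H = sum(Xt.T @ (np.diag(pt) - np.outer(pt, pt)) @ Xt for Xt, pt in zip(X, p))
se = np.sqrt(np.diag(np.linalg.inv(H)))
print(np.round(se, 3))                 # rule of thumb: main-effect SEs <= 0.05
```

Shrinking n_resp inflates the standard errors, which is exactly the design-quality/sample-size interaction the Test Design step is meant to flag.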
What I'm trying to say is that you could have a CBC design that is 100% efficient, but this wouldn't matter much if your sample size were inadequate. For example, you'd still get very bad results using an optimally efficient design with only 10 respondents when you really should be interviewing a much larger sample.
Another way to think about it is that if you were using a sub-optimal design with 80% relative efficiency, you could make up for that lack of efficiency simply by interviewing 1/0.8 = 1.25 times as many respondents.
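That arithmetic generalizes into a one-liner; the function name and the baseline of 400 respondents below are illustrative assumptions:

```python
import math

def respondents_needed(n_for_optimal: int, relative_efficiency: float) -> int:
    # Compensate for a sub-optimal design by interviewing
    # n / efficiency respondents (rounded up).
    return math.ceil(n_for_optimal / relative_efficiency)

print(respondents_needed(400, 0.8))  # 1/0.8 = 1.25x as many: 500
```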