These are excellent questions. Thanks for posting.

If you believed that specific 4-way combinations of attribute levels (from your 3x3x3x3 design) created special circumstances that could not be predicted accurately by summing the "main effect" preferences just from each attribute considered independently, then indeed you'd need to estimate more than just the 9 "essential" (main effect) parameters. For your 3x3x3x3 design, if 4-way interaction effects were significant, you'd want to be estimating all 81 parameters. (But respondents wouldn't be able to give you so much information to support such estimation at the individual level using the CVA approach!)

However, if you believe that preference for any of the 81 combinations could be pretty accurately predicted by just adding the independent preference scores (main effect utilities) across the four separate attributes, then you'd have just the 9 parameters to estimate (#Total_Levels - #Total_Attributes + 1). There are middling positions, such as considering some first-order interaction effects (interactions between attributes taken two at a time), if you generate an experimental design that can support 2-way interaction effects (not automatically supported by CVA software--but possible if you take two of your attributes and collapse them into one 9-level attribute).
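The parameter counts described above can be sketched in a few lines. This is just illustrative arithmetic for the 3x3x3x3 design discussed here (the variable names are mine, not anything from the CVA software):

```python
# Parameter counting for a 3x3x3x3 conjoint design (illustrative only).
levels = [3, 3, 3, 3]

# Main effects only: #Total_Levels - #Total_Attributes + 1
main_effects = sum(levels) - len(levels) + 1      # 12 - 4 + 1 = 9

# All combinations (up to the 4-way interaction): one parameter per cell
full_factorial = 1
for k in levels:
    full_factorial *= k                           # 3*3*3*3 = 81

# Middling position: collapse two 3-level attributes into one 9-level
# attribute to capture their 2-way interaction, giving levels [9, 3, 3]
collapsed = [9, 3, 3]
with_one_interaction = sum(collapsed) - len(collapsed) + 1   # 15 - 3 + 1 = 13

print(main_effects, full_factorial, with_one_interaction)    # 9 81 13
```

So picking up even one 2-way interaction (via the collapsed 9-level attribute) raises the count from 9 to 13 parameters, which is why interactions quickly strain what an individual respondent can support.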

If you are using standard ratings-based traditional full-profile conjoint (as embodied in our CVA software, and as first introduced in the 1970s literature by Paul Green)...then you will typically consider estimating just the main effects, using a fractional factorial plan that supports it.

Our founder, Rich Johnson, wanted to give very conservative recommendations in our CVA software, so he wanted the documentation to recommend at least 2x as many conjoint questions as parameters to be estimated, or preferably even 3x as many. Most researchers (particularly if they have access to HB estimation for CVA) are willing to do 1.5x as many questions as parameters to estimate (to reduce the burden on respondents without sacrificing much precision due to the superiority of HB estimation over OLS).

Assuming you have substantial sample size, I would recommend using about 15 CVA conjoint tasks (easily divisible by 3, since each of your attributes has 3 levels), leading to 15/9 ≈ 1.67x as many conjoint questions as parameters to estimate.
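To make the questions-to-parameters arithmetic concrete, here is a small sketch of the multipliers mentioned above (2x and 3x from Rich Johnson's conservative recommendation, 1.5x for HB users) applied to the 9 main-effect parameters:

```python
# Questions-to-parameters ratios for the 3x3x3x3 design (9 parameters).
params = 9

# Multipliers discussed above: 1.5x (HB users), 2x and 3x (conservative)
for multiplier in (1.5, 2.0, 3.0):
    print(f"{multiplier}x -> {params * multiplier:g} tasks")

# The recommended 15 tasks give this ratio:
tasks = 15
ratio = tasks / params
print(round(ratio, 2))   # 1.67
```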

CBC (Choice-Based Conjoint) is a different animal. Rather than asking for a rating of strength of preference for each conjoint card, it typically asks for the respondent's choice of one conjoint card among a set of conjoint cards (such as 3 to 5 concepts within the same CBC task). From a statistical standpoint, you obtain less information per unit of respondent effort compared to ratings-based conjoint methods. You only learn which concept was preferred within the task, not by how much it was preferred. But, asking for choices is much more realistic than asking respondents to use a rating scale. Over the last 20 years, CBC has emerged as the gold standard methodology (though it typically requires a bit larger sample sizes than CVA conjoint, due to the lower amount of information collected per unit of respondent effort).
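One way to see why a choice carries less information than a rating is through the multinomial logit model commonly used to analyze CBC data (the utility numbers below are made-up values for illustration, not anything estimated here):

```python
import math

# Multinomial logit: summed part-worth utilities of each concept in a task
# are converted into choice probabilities. The analyst only observes WHICH
# concept was chosen, not by how much it was preferred.
def mnl_probabilities(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# A CBC task with 3 concepts and assumed total utilities:
probs = mnl_probabilities([1.2, 0.4, -0.3])
print([round(p, 3) for p in probs])   # [0.598, 0.269, 0.133]
```

A single choice among 3 concepts reveals only an ordering for one concept versus the others, whereas a rating on each of 3 cards reveals both order and strength of preference, which is the "less information per unit of respondent effort" point above.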

So, asking 18 CVA conjoint questions (18 ratings for conjoint cards) isn't the same as asking 6 CBC tasks each with 3 concepts per task. That's an apples-to-oranges comparison.

I'm now clearer about the number of tasks that should be asked.

However, I'm still confused about which parameters are estimated for CVA and CBC (having 4 attributes with 3 levels each, leading to 12 part-worth utilities).

CVA: using the regression model and dummy coding, the 9 parameters to be estimated are the 8 level beta weights (3 levels per attribute, with one reference level dropped from each) plus the beta for the intercept.
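That dummy coding can be sketched as follows (a toy encoder I wrote for illustration; level 0 of each attribute is taken as the dropped reference level):

```python
# Dummy coding for one profile in a 3x3x3x3 design: each 3-level attribute
# contributes two 0/1 columns (the reference level is dropped), plus a
# column of 1s for the intercept -> 1 + 4*2 = 9 regression parameters.
def dummy_code(profile, n_levels=3):
    row = [1]                          # intercept column
    for level in profile:              # level in {0, 1, 2}; 0 = reference
        for l in range(1, n_levels):
            row.append(1 if level == l else 0)
    return row

# A profile showing levels 2, 0, 1, 1 on the four attributes:
x = dummy_code([2, 0, 1, 1])
print(len(x), x)   # 9 [1, 0, 1, 0, 0, 1, 0, 1, 0]
```

Regressing the card ratings on rows like this (via OLS, or HB as discussed above) yields the 8 level betas plus the intercept.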

CBC: Are the same 9 parameters estimated in CBC? What are the equivalents of these 9 parameters?

Thanks.

edited Oct 30, 2014