
Standard errors of the part-worth utilities higher than 0.05

Dear Bryan,

I am currently running an ACBC study as well, and since the RLH/LL is difficult to interpret, I used the Test Design to calculate the standard errors of the part-worth utilities, as suggested.

Basically, those results are very good, except for the levels of one attribute. The highest value here is 0.0892. Could it be that this is because the attribute has a lot of levels (21) and I was using a constructed list in the design?

Thank you for your help!
related to an answer for: ACBC: Goodness of fit
asked Feb 25 by Julia

1 Answer

Julia, you are correct: anything that reduces the frequency with which respondents see a level (e.g., the attribute has many levels, the attributes come in on a constructed list, or the attributes are alternative-specific) will tend to make the standard errors larger.
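As a rough illustration of why this happens (a minimal sketch only, not the actual Test Design computation; the design, exposure shares, and task counts below are made up), the design-based standard error of a level's part-worth grows as that level is shown less often:

```python
import numpy as np

rng = np.random.default_rng(7)

def design_based_se(level_shares, n_tasks=3000):
    """Design-only standard errors for a paired-comparison logit,
    evaluated at beta = 0 (i.e., as if respondents answered at random).
    `level_shares` gives the relative frequency with which each level
    of a single attribute is shown in the concepts."""
    k = len(level_shares)
    p = np.asarray(level_shares, dtype=float)
    p /= p.sum()

    # Draw the level shown in concept A and concept B of each task.
    lev_a = rng.choice(k, size=n_tasks, p=p)
    lev_b = rng.choice(k, size=n_tasks, p=p)

    # Effects coding: levels 0..k-2 get a dummy column, level k-1 is coded -1.
    def code(levels):
        x = np.zeros((levels.size, k - 1))
        for j in range(k - 1):
            x[:, j] = (levels == j).astype(float)
        x[levels == k - 1, :] = -1.0
        return x

    # Difference between the two concepts' coded attributes in each task.
    d = code(lev_a) - code(lev_b)

    # Fisher information of a binary logit at beta = 0 is X'X / 4,
    # so the standard errors are sqrt(diag(inv(X'X / 4))).
    info = d.T @ d / 4.0
    return np.sqrt(np.diag(np.linalg.inv(info)))

# Standard errors of the first two levels' part-worths (the third level is
# the effects-coded reference) under balanced vs. unbalanced exposure.
print("balanced exposure:", design_based_se([1.0, 1.0, 1.0]).round(3))
print("first level rare: ", design_based_se([0.15, 1.0, 1.0]).round(3))
```

With the first level shown far less often, its standard error comes out noticeably larger even though nothing about the respondents changed, which is the same effect a 21-level attribute on a constructed list produces.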
answered Feb 25 by Keith Chrzan, Sawtooth Software, Inc.
Dear Keith,

Thank you for your answer. I have one more question regarding my research.

My thesis is not about products but about employees, and about which attributes/attribute levels are the most important to applicants. Therefore I am not only researching which "product", or in my case which employee, is the best, but also which attributes have the highest utility scores and are hence the most important. The same goes for the levels: in another research paper, the authors "calibrated" the utility scores of each attribute level to make it possible to create a ranking of all attribute levels based on their utilities. Is there any possibility to do that?
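Just to clarify what I mean by the attribute part: by "most important" I mean the usual range-based importance calculation over each attribute's part-worths, roughly like this minimal sketch (the attribute names and utilities are made up for illustration, not my actual data):

```python
# Minimal sketch of the usual range-based attribute importance calculation.
# Attribute names and part-worths are made up for illustration only.
partworths = {
    "salary":      {"low": -0.9, "mid": 0.1, "high": 0.8},
    "home office": {"none": -0.3, "2 days": 0.0, "full": 0.3},
}

# Importance of an attribute = its within-attribute utility range,
# expressed as a share of the sum of all attributes' ranges.
ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in partworths.items()}
total_range = sum(ranges.values())

for attr, r in ranges.items():
    print(f"{attr}: importance = {100 * r / total_range:.1f}%")
```

What I am missing is the equivalent step for single levels across different attributes.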

As I understand it from the paper "Introduction to Market Simulators for Conjoint Analysis", it only helps to compare products and their share of choice, but not single attribute levels.

Best regards,
Dear Keith,

I already did that; my question is rather about creating a rank order of all attribute levels. Is there a possibility to compare the utilities of all attribute levels and build a ranking order?
Not really.  Each attribute's levels are separately centered at arbitrary zeros.  Because there is no guarantee that the zero point on Attribute A is the same as the zero point for Attribute B, you typically cannot make cross-attribute level comparisons.  There is one kind of conjoint model, called best-worst conjoint, that does allow this, but it requires a special set-up and analysis.
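A small numeric sketch of why that is (the part-worths below are made up): every product profile contains exactly one level of each attribute, so adding a constant to all of one attribute's part-worths changes nothing a simulator can observe, and a cross-attribute ranking of levels is therefore not identified from the utilities alone.

```python
import numpy as np

# Hypothetical part-worths, each attribute zero-centered on its own
# arbitrary scale (made-up numbers, not from any real study).
brand = {"A": 0.6, "B": -0.1, "C": -0.5}
price = {"low": 0.8, "mid": 0.0, "high": -0.8}

def logit_shares(total_utilities):
    """Share-of-preference (logit) rule over a set of product utilities."""
    e = np.exp(np.array(total_utilities, dtype=float))
    return e / e.sum()

# Two hypothetical products, each made of one brand level and one price level.
products = [("A", "high"), ("C", "low")]
totals = [brand[b] + price[p] for b, p in products]
print("shares:", logit_shares(totals).round(3))

# Shift every brand part-worth by the same constant.  Because each product
# contains exactly one brand level, every total utility shifts equally and
# the predicted shares do not change at all.
shifted_brand = {k: v + 5.0 for k, v in brand.items()}
totals_shifted = [shifted_brand[b] + price[p] for b, p in products]
print("shares after shifting all brand utilities:",
      logit_shares(totals_shifted).round(3))
```

Since such shifts are invisible in the choices, the data cannot tell you whether, say, a given brand level sits "higher" than a given price level; that is the identification problem best-worst conjoint's special set-up is designed to resolve.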
Alright, thank you for the clarification!