We are currently analyzing ACBC survey data from a pool of 90 respondents, covering 7 attributes with 5 levels each. The initial, unconstrained analysis revealed some irregularities in the part-worth utilities, so we applied constraints to correct them.
However, while the constraints do fix the part-worth utility ordering issues, they produce attribute importance scores that no longer track the raw attribute importances - see below.
Attribute   Raw importance   Constrained importance
Att #1      13.48205         16.90411
Att #2      14.07452         14.95299
Att #3      14.34129         11.6028
Att #4      14.34448         11.22941
Att #5      15.43688         12.31093
Att #6      15.15877         11.85296
Att #7      13.162           21.1468
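For context on why constrained importances can diverge from raw ones: importances are typically computed as each attribute's part-worth range, normalized across attributes, and order constraints enforced by tying flatten reversed levels, which shrinks that attribute's range and mechanically redistributes importance. Below is a minimal sketch of that mechanism, using made-up utilities and a simple pool-adjacent-violators tying scheme - this is an illustration of the arithmetic, not Sawtooth's actual HB estimation.

```python
import numpy as np

def tie_constrain(u):
    # Pool-adjacent-violators style tying: merge neighbouring levels that
    # violate an assumed best-to-worst (non-increasing) order and replace
    # each offending block with its mean.
    blocks = [[float(v)] for v in u]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks) - 1):
            if np.mean(blocks[i]) < np.mean(blocks[i + 1]):
                blocks[i] += blocks.pop(i + 1)
                merged = True
                break
    return np.array([np.mean(b) for b in blocks for _ in b])

def importances(partworths):
    # Range-based importances: each attribute's utility range,
    # normalised to sum to 100 across attributes.
    ranges = {a: u.max() - u.min() for a, u in partworths.items()}
    total = sum(ranges.values())
    return {a: 100 * r / total for a, r in ranges.items()}

# Hypothetical two-attribute example: AttA reverses the assumed order.
raw = {
    "AttA": np.array([0.5, 1.0, 0.8, 0.3, 0.0]),   # levels out of order
    "AttB": np.array([1.0, 0.75, 0.5, 0.25, 0.0]),  # already monotone
}
constrained = {a: tie_constrain(u) for a, u in raw.items()}

print(importances(raw))          # both attributes ~50%
print(importances(constrained))  # AttA's importance drops, AttB's rises
```

The point of the sketch: even though only AttA is altered, the normalization means every attribute's importance moves, so constrained importances need not preserve the raw ordering.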
We've reviewed the design, programming, and analysis processes, and we can't find a mistake that would explain this. We apply constraints to all attributes. Comparing the preference order in the raw data with the order used for the constraints, there are two notable directional irregularities (the attributes we assume run best to worst largely do so in the raw data too): for one attribute the part-worth utilities follow no consistent direction, and for another they follow a bell-shaped curve, although we assume a best-to-worst relationship for both.
Do you have any idea what the underlying cause might be? Perhaps too much variability in the sample, meaning that too much of the real data is "ignored" when we apply constraints? Or could it be due to the irregularities noted above - i.e., our best-to-worst assumptions are wrong (even if we can't explain why respondents would answer that way)?
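One way to probe whether those reversals are real preference rather than noise is to count how many respondents' individual raw utilities actually violate the assumed order. A quick hedged sketch, assuming individual-level raw part-worths are available as a respondents-by-levels array with columns sorted by the assumed best-to-worst order (the array below is made-up data):

```python
import numpy as np

def share_violating(ind_utils):
    # ind_utils: (n_respondents, n_levels) raw individual part-worths for
    # one attribute, columns ordered best-to-worst by assumption.
    # Returns the share of respondents with at least one reversal.
    diffs = np.diff(ind_utils, axis=1)
    return float(np.mean((diffs > 0).any(axis=1)))

# Hypothetical data: 3 of 4 respondents follow the assumed order.
utils = np.array([
    [2.0, 1.5, 1.0, 0.5, 0.0],
    [1.8, 1.2, 0.9, 0.4, 0.1],
    [0.2, 1.9, 1.0, 0.5, 0.0],   # reversal at the first level
    [2.2, 1.4, 1.1, 0.6, 0.2],
])
print(share_violating(utils))  # → 0.25
```

If a large share of the 90 respondents reverse the same levels, that would suggest the best-to-worst assumption is wrong for that attribute rather than the sample being noisy.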
Thank you for your help in this.