I'm running a MaxDiff experiment where the anchor is derived from direct binary comparisons for a few items based on on-the-fly scores.
The built-in analysis in Lighthouse doesn't seem to handle this, so I decided to export the data together with the design and code the MaxDiff and anchor tasks myself for HB estimation.
My question is how to code the tasks and specify the prior covariance matrix properly, so that the anchor utility is fixed at zero.
As I understand it, I can dummy-code the items, picking one (e.g. the last) item as the reference. The binary anchor tasks can then include the anchor as an alternative (like the None option in dual-response-None CBC), with the anchor coded as an additional alternative-specific dummy attribute: 1 for the anchor and 0 for any item.
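To make the coding concrete, here is a minimal sketch of how I imagine the design matrices would look, assuming a hypothetical setup with k = 4 items where item 4 is the reference (the item numbers, task compositions, and helper names are all made up for illustration):

```python
import numpy as np

k = 4  # hypothetical number of items; item k is the reference

def item_row(i):
    """Dummy-code item i (1-based): k-1 item dummies + 1 anchor dummy.
    The reference (last) item gets an all-zero row."""
    row = np.zeros(k)  # columns: [item1, item2, item3, anchor]
    if i < k:
        row[i - 1] = 1.0
    return row

# Alternative-specific anchor dummy: 0 for every item, 1 for the anchor.
anchor_row = np.array([0.0, 0.0, 0.0, 1.0])

# A MaxDiff task showing items 1, 2, and 4: one row per alternative.
best_task = np.vstack([item_row(1), item_row(2), item_row(4)])

# A binary anchor task: item 2 versus the anchor,
# coded like a dual-response-None CBC pair.
anchor_task = np.vstack([item_row(2), anchor_row])

print(best_task)
print(anchor_task)
```

Under this coding the reference item's row is all zeros, so its utility is implicitly fixed at zero, while the anchor column picks up its own (random) utility.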
But then the utility will be fixed at zero for that last (reference) item, and the anchor utility will become a random parameter in the HB estimation.
So if I want the anchor utility fixed at zero instead, can I achieve that by coding the data differently, or can I shift all the estimates post hoc by subtracting the anchor utility from all the item utilities (including the reference item) for each respondent?
The latter should work, since exp(item1)/(exp(item1)+...+exp(itemk)+exp(None)) = exp(item1+C)/(exp(item1+C)+...+exp(itemk+C)+exp(None+C)) for any constant C, given that we apply the logit rule at the individual level.