Thanks for asking about this. Anchored MaxDiff is something we plan to incorporate into a future version of our software.
In the meantime, we have written a data manipulation program that can take the .cho file exported by our software together with a matching file that you create containing the dual-response answers, and automatically build a new file combining both sets of information for LC or HB analysis. This program is available for a modest fee. Email email@example.com if interested.
Regarding the methodology, recent papers at the Sawtooth Software Conference (2012) pointed out the pros and cons of different ways to do anchored MaxDiff. In the end, it seems like a promising way to obtain some sort of absolute scaling for the data (rather than purely relative). BUT, a big caveat is that the issue of scale use bias (yea-saying or nay-saying especially) comes back in a strong way. This includes cultural differences in the propensity to be positive or negative when declaring purchase intent or importance. So, using anchored MaxDiff can be counterproductive if the main reasons MaxDiff was being employed were to reduce cross-cultural effects and to reduce scale use bias. Related to that, developing segments based on anchored MaxDiff scores may be problematic, as the position of the anchor may be an especially strong driving force in forming the segments...rather than the substantive differences among the items of interest.
Regarding how to rescale the results, one idea that was proposed by a user of ours and that seems useful is as follows:
1. Estimate the scores so that the "importance threshold" is scaled to zero for each respondent (usually by dummy-coding where the reference item is the threshold).
2. Perform the following transformation for each item score for each respondent:
e^Ui / (e^Ui + a - 1)
where:
Ui = raw logit weight for item i
e^Ui is the antilog of Ui (in Excel, use the formula =EXP(Ui))
a = number of items shown per set
After this transformation, the threshold's transformed (probability) value is constant across respondents and equal to 1/a.
3. We can further transform so that the threshold value is equal to 100 by multiplying all scores by 100/(1/a), which is 100*a.
Notes: After employing step 3, the final scale runs from 0 to a maximum possible score of 100*a, with 100 indicating the threshold of importance. Respondents do not all have equal sums of scores, so some respondents have more influence on population means than others.
We're still pondering approaches to the rescaling transformation for anchored MaxDiff, so if you have a good idea, please share it.