Maximum Difference Scaling (MaxDiff) is widely used to measure the relative values of items or attributes. Despite its strengths, some analysts would prefer data that represent more than just relative scores; they would prefer absolute scores scaled with respect to each respondent's importance threshold. In this article, Kevin Lattery (Maritz Research) tests two methods for anchoring MaxDiff scores to such a threshold: the Dual-Response MaxDiff approach suggested by Louviere, and a more direct method that asks respondents to indicate which attributes are above a threshold (using 2-point scale grid questions).
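The direct method's logic can be illustrated with a simplified sketch. Assume we already have zero-centered relative utilities for one respondent, plus that respondent's binary answers from the 2-point grid questions; the midpoint rule below is a hypothetical simplification, not the joint estimation procedure described in the paper:

```python
def anchor_utilities(utilities, above_threshold):
    """Shift utilities so 0 falls between items rated above vs. below threshold.

    utilities       -- dict of item -> relative (zero-centered) utility
    above_threshold -- dict of item -> True if the respondent said the item
                       clears their importance threshold
    """
    above = [u for item, u in utilities.items() if above_threshold[item]]
    below = [u for item, u in utilities.items() if not above_threshold[item]]
    if not above or not below:  # all items on one side: no interior cut point
        raise ValueError("need items on both sides of the threshold")
    cut = (min(above) + max(below)) / 2  # midpoint between the two groups
    return {item: u - cut for item, u in utilities.items()}

# Toy respondent: four attributes with zero-centered relative utilities.
utils = {"price": 1.2, "quality": 0.4, "brand": -0.5, "color": -1.1}
answers = {"price": True, "quality": True, "brand": False, "color": False}
anchored = anchor_utilities(utils, answers)
# Items the respondent called important now carry positive anchored scores.
```

This sketch assumes the binary answers are consistent with the utility order; with inconsistent answers, a likelihood-based estimate of the threshold would be needed instead.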
Using synthetic respondent data, he determined that the direct method should in theory be superior, especially as the number of attributes shown per MaxDiff task increases. With six or more attributes per screen, the indirect dual-response method should not be used, and even five attributes per screen may not capture individual anchoring well. When the two methods were compared with human respondents shown only four attributes per screen, results were very similar: the rank order of utilities at the respondent level was nearly identical. However, the anchoring in the direct method was more biased by the context of the total set of attributes. So if a more neutral anchor for the utilities is important, the indirect dual-response method may be slightly better, provided four (and certainly no more than five) attributes are shown per screen.
(Originally published in the 2010 Sawtooth Software Proceedings.)