Utilities are interval-level measures, separately centered on zero for each attribute.

If you just want non-negative utilities, you can, for each attribute separately, subtract the lowest level utility from the utilities for all the levels of that attribute. For example, if one attribute had utilities of -.5, -.25 and .75, you could subtract -.5 from all of these to get 0, .25 and 1.25, respectively. Call this transformation zero-basing.
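The zero-basing step above can be sketched as a couple of lines of Python; the dict layout (attribute name mapping to a list of level utilities) and the attribute name "brand" are just assumptions for illustration:

```python
def zero_base(utilities):
    """Subtract each attribute's lowest level utility from all of
    that attribute's level utilities, attribute by attribute."""
    return {attr: [u - min(levels) for u in levels]
            for attr, levels in utilities.items()}

# The example from the text: -.5, -.25, .75 becomes 0, .25, 1.25
raw = {"brand": [-0.5, -0.25, 0.75]}
print(zero_base(raw))  # {'brand': [0.0, 0.25, 1.25]}
```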

If you further wanted to stretch them to range from 0 to 100, you'd find the attribute with the largest range. Say it's an attribute with utilities (after zero-basing) of 0, .29, .76 and 1.25. Divide 100 by the largest of these, 1.25, and you get 80. If you multiply that attribute's utilities by 80 you get 0, 23.2, 60.8 and 100, respectively. Now multiply the zero-based utilities of all the attributes by 80 and you're done.
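Here is that rescaling step as a short sketch, assuming utilities have already been zero-based and are stored in the same dict-of-lists layout as before ("price" is a made-up attribute name):

```python
def rescale_to_100(zero_based):
    """Scale zero-based utilities so the single largest level
    utility across all attributes becomes exactly 100."""
    largest = max(u for levels in zero_based.values() for u in levels)
    factor = 100.0 / largest
    return {attr: [u * factor for u in levels]
            for attr, levels in zero_based.items()}

# The worked example from the text: factor = 100 / 1.25 = 80
zb = {"price": [0.0, 0.29, 0.76, 1.25]}
scaled = rescale_to_100(zb)
print([round(u, 1) for u in scaled["price"]])  # [0.0, 23.2, 60.8, 100.0]
```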

Of course you DON'T want to use these transformed utilities in your simulator, just as we display zero-centered diffs but use the raw utilities in our simulator.

One more thing to watch: if you have a None utility that you want to report, how you transform your other utilities for reporting will affect how you need to transform it.

So just to make sure I understand...

If I've now zero-based all of my attributes correctly, the lowest level in each attribute now equals 0.

If I wanted to rescale to 100, I would basically take the max utility across all levels of all attributes, and divide 100 by that to get my multiplication factor. So if my max zero-based utility across all levels of all attributes is 77, then 100/77 = 1.3-ish, so I'd go ahead and multiply every level of every attribute by that factor?
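A quick check of that arithmetic (77 is the made-up maximum zero-based utility from the question, not a real value):

```python
# Multiplication factor = 100 divided by the overall max utility.
max_util = 77.0
factor = 100.0 / max_util
print(round(factor, 2))  # 1.3

# Applying the factor sends the overall max exactly to 100.
print(round(max_util * factor, 6))  # 100.0
```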

Follow-up question #1) What would be the next steps for transforming the "None" utility, both in the case where I only do the zero-basing and in the case where I do the zero-basing and also rescale to 100?

Follow-up question #2) Do I zero-base my utilities starting from the raw scores or from the zero-centered ones?