All the methods you mention involve an exponential (logit) transformation, so the results land on something like a probability scale, rather than the original scaling of the utilities, which are interval-scaled and can take positive and negative values.

The "Purchase Likelihood" transformation for each respondent would just be e^u1/(e^u1+e^0), where “u1” is the total utility of the product concept. Since we’re often working with zero-centered utilities, this means that the purchase likelihood simulation method reflects the likelihood of picking this product concept from a set including this product concept plus one other product of average utility. (Back in the days of ACA and CVA, the utilities could be given scaling based on respondents’ stated purchase intent for specific product concepts on a 100-point scale, so the resulting exponential transformation indeed was a least-squares fit to respondents’ stated purchase intent for product concepts.)

Simulating a single product vs. the None would be e^u1/(e^u1+e^UNone), where UNone is the respondent's utility for the None concept. Since the None utility differs per respondent, respondents who pick the None a lot have a high UNone, which drives their shares of preference for product 1 mostly toward 0%. Conversely, respondents who never picked the None would have their shares weighted consistently toward 100%. Either way, the sensitivity such respondents contribute for discriminating between products of different quality is diminished.
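The per-respondent effect described above can be sketched as follows (the function name and the example UNone values are hypothetical, just to illustrate the two extremes):

```python
import math

def share_vs_none(u1: float, u_none: float) -> float:
    """Share of preference for a product against the None alternative,
    e^u1 / (e^u1 + e^UNone), computed per respondent."""
    return math.exp(u1) / (math.exp(u1) + math.exp(u_none))

# Same product utility, different respondents' None utilities:
heavy_none_picker = share_vs_none(1.0, 5.0)    # high UNone -> share near 0%
never_picks_none = share_vs_none(1.0, -5.0)    # low UNone  -> share near 100%
```

In both extreme cases the share is pinned near 0% or 100%, so varying the product's quality moves it very little, which is the loss of discrimination noted above.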

Even better, as you mention, would be to simulate a product's share of preference relative to a realistic set of competitors, as seen in the marketplace.
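Extending the same logit rule to a full competitive set is just a softmax over the total utilities of all products in the scenario. A sketch, assuming one respondent's utilities for each product in the set:

```python
import math

def shares_of_preference(utilities):
    """Logit shares of preference across a competitive set: each product's
    share is e^u divided by the sum of e^u over all products (a softmax)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scenario: our concept plus three competitors (and a None
# alternative could simply be included as one more "product").
shares = shares_of_preference([1.2, 0.4, -0.3, 0.0])
```

Shares always sum to 100% within the scenario, and averaging these per-respondent shares across the sample gives the simulated market shares.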

In all these cases, we're making the results more intuitive for managers and other audiences: the values are all positive, with quasi-probability scaling.