Can anybody help me with this issue, please? I would really appreciate it!

Thank you in advance,

Frank

For each respondent...

1. Within each attribute, compute the mean utility, then subtract that mean from each of the attribute's utilities. This zero-centers the utilities within each attribute; the step is often unnecessary because raw utilities are typically already zero-centered within attributes.

2. Then, for each attribute compute the difference between best and worst utilities. Sum those across attributes.

3. Take 100 x #attributes and divide it by the sum achieved in step 2. This is a single multiplier that you use in step 4.

4. Multiply all utilities from step 1 by the multiplier. Now, the average difference between best and worst utilities per attribute is 100 utility points.
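As a rough illustration, the four steps above can be sketched in Python. The function name and data structure here are my own for the example; this is not Sawtooth's implementation:

```python
def zero_centered_diffs(utilities):
    """utilities: dict mapping attribute name -> list of one respondent's
    raw level utilities. Returns the zero-centered diffs."""
    # Step 1: zero-center each attribute's utilities around its mean.
    centered = {}
    for attr, levels in utilities.items():
        mean = sum(levels) / len(levels)
        centered[attr] = [u - mean for u in levels]

    # Step 2: sum the best-minus-worst range across attributes.
    total_range = sum(max(lv) - min(lv) for lv in centered.values())

    # Step 3: single multiplier so the average range equals 100.
    multiplier = 100 * len(utilities) / total_range

    # Step 4: rescale every utility by that multiplier.
    return {attr: [u * multiplier for u in lv]
            for attr, lv in centered.items()}
```

For example, `zero_centered_diffs({"A": [-2.5, 2.5], "B": [-7.5, 7.5]})` yields ranges of 50 and 150, which average to 100.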

This doesn't work for me; I don't get the same values, and the average differences in the Sawtooth output do not sum to 100.

Dear Monica,

Please call our tech support line at 801 477 4700 or Kenneth Fairchild, and we will step you through it. Converting raw utilities to zero-centered diffs is usually quite straightforward if you have main effects estimation only. If you have interaction effects or have fit a linear term (a linear slope to an attribute), there is something additional to consider. But, we will be able to show you where it is going wrong for you.

This works when I calculate by the formula above, but again: why doesn't the Sawtooth output for Individual Utilities (ZC Diffs) match, and why don't the averages sum to 100?

I want to know exactly how Sawtooth goes from the Individual (Raw) Utilities to the Zero-Centered ones.

Monica,

Within each respondent, zero-centered diffs finds the multiplier (and multiplies the raw utilities by this) such that the average difference between best & worst levels is 100 across attributes. For example:

Imagine the raw utilities from HB are as follows for a given respondent:

Raw Utilities:

Attribute 1:

Level 1: -2.5

Level 2: +2.5

Attribute 2:

Level 1: -7.5

Level 2: +7.5

From the raw utilities, the range is 5 for the first attribute and 15 for the second, for an average of 10 across the attributes. We want that average to be 100, so we multiply every utility by 100/10 = 10, putting them on a scale where the average best-worst difference across attributes is 100. Resulting in:

Zero-Centered Diffs:

Attribute 1:

Level 1: -25

Level 2: +25

Attribute 2:

Level 1: -75

Level 2: +75

Notice now for the zero-centered diffs, the difference for attribute 1 is 50 and for attribute 2 is 150. Their average is 100.

But, when we average such utilities across people, the differences in utilities for the mean population utilities may not necessarily reflect the same 100 average difference. That's because not all respondents have utility preferences running the same way, and differences in opinion for some attributes (like brand or color) can tend to cancel each other out when taking averages across people; but an attribute like price where most people agree regarding the direction of preference will not tend to cancel out.

So, the property that the average range across attributes is exactly 100 holds at the individual level, but not necessarily when examining a table of average "zero-centered diffs" across people.
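A small sketch (with hypothetical data of my own, not from Sawtooth) of why the averaged zero-centered diffs need not preserve the 100-point property:

```python
# Two respondents' zero-centered diffs for a two-level "Brand" attribute
# and a two-level "Price" attribute; each respondent's ranges average 100.
resp_a = {"Brand": [-75, 75], "Price": [-25, 25]}
resp_b = {"Brand": [75, -75], "Price": [-25, 25]}  # opposite brand preference

# Average the two respondents' utilities level by level.
avg = {attr: [(a + b) / 2 for a, b in zip(resp_a[attr], resp_b[attr])]
       for attr in resp_a}

# Brand averages to [0, 0]: its 150-point range collapses to 0,
# so the mean best-worst range falls well below 100.
ranges = {attr: max(lv) - min(lv) for attr, lv in avg.items()}
mean_range = sum(ranges.values()) / len(ranges)
```

Here the opposing brand preferences cancel, and the average range drops to 25 even though it is exactly 100 for each individual.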

Traditionally, one would take a utility and calculate the score as 100*exp(utility)/(1+exp(utility)). Does Sawtooth/Lighthouse Studio provide that? All of these are closely correlated, but is there a way to validate which approach is best: 100*exp(utility)/(1+exp(utility)), the Zero-Anchored Interval Scale, the Probability Scale, or raw utilities?
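For reference, the logistic transform mentioned here can be sketched generically (this is just the stated formula, not Sawtooth's code):

```python
import math

def logistic_score(utility):
    # Maps any raw utility onto a 0-100 "probability-like" scale:
    # 100 * exp(u) / (1 + exp(u)). A utility of 0 maps to 50.
    return 100 * math.exp(utility) / (1 + math.exp(utility))
```

Large positive utilities approach 100 and large negative utilities approach 0, which is why this scale compresses extreme preferences.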

...