HB in WTP-Space or using the beta-draws for individual WTP-distributions?

Hello, everybody,

I am currently evaluating two choice experiments and have read a lot about different estimation models. During my research I came across the possibility of estimating, e.g., mixed logit or G-MNL models directly in WTP space instead of in preference space. Now I am wondering: is this also possible for HB models? And if so, how?

My second question refers to the beta draws in the HB estimation process. The individual beta parameters are the averages of the draws after the burn-in phase, e.g. the average of the last 2,000 draws. Now I wonder whether it makes sense to use the distribution of these draws instead of the mean in the WTP calculations. I could repeatedly draw from the 2,000 saved draws, for both the price coefficient and an attribute coefficient; the ratio of the attribute coefficient to the price coefficient would then give the WTP estimate for that one draw. This could be repeated many times, for example 20,000 times. As a result, I would not have one WTP value per individual but a distribution of individual willingness to pay. In this way, the uncertainty in the draws would be carried into the WTP estimates.
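To make the idea concrete, here is a minimal sketch in R of what I mean (the draw values and vector names are made up; in practice they would be the saved post-burn-in draws for one respondent):

    # Made-up draws standing in for one respondent's saved post-burn-in draws
    set.seed(1)
    attr_draws  <- rnorm(2000, mean = 1.3, sd = 0.2)   # attribute coefficient draws
    price_draws <- rnorm(2000, mean = -2.2, sd = 0.2)  # price coefficient draws

    # Resample 20,000 times and form the WTP ratio for each resampled draw
    idx <- sample(length(attr_draws), 20000, replace = TRUE)
    wtp <- attr_draws[idx] / price_draws[idx]

    quantile(wtp, c(0.025, 0.5, 0.975))   # a distribution of individual WTP rather than one value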

What do you think?

Many greetings
Nico
asked Jan 31 by Nico (400 points)

1 Answer

+2 votes
Nico,

Keith will likely jump in later today to respond to your first point about estimating models in WTP space rather than preference space...but let me give you some thoughts about using beta draws to better account for uncertainty.

A few years ago at our conference there was discussion about proper estimation of confidence bands from HB results. If my memory serves, Greg Allenby and Tom Eagle argued for using the variance in the upper-level parameters (what we call the "alpha draws") rather than leveraging the lower-level betas. If I remember my conversations with them correctly, they opined that using the lower-level beta draws would overstate the uncertainty at the population level. We know that computing frequentist confidence intervals on the point estimates (the typical practitioner's approach) is not statistically proper (not true to the Bayesian approach), but rather an approximation of confidence bands for the population.
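Just to illustrate the distinction, here is a rough sketch (not code from our system), assuming alpha_draws is a matrix of saved upper-level draws (rows = saved iterations, columns = parameters) and betas is the matrix of respondents' point estimates:

    # Credible interval from the upper-level ("alpha") draws
    ci_alpha <- apply(alpha_draws, 2, quantile, probs = c(0.025, 0.975))

    # The common (but not strictly proper) frequentist interval on the point estimates
    m  <- colMeans(betas)
    se <- apply(betas, 2, sd) / sqrt(nrow(betas))
    ci_points <- rbind(lower = m - 1.96 * se, upper = m + 1.96 * se)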

Of course, you can try different approaches across a few of your data sets and see what kinds of differences in confidence bands you are getting.

And I cannot help but point out that estimating WTP from the utility coefficients will tend to overstate WTP compared to the approach we believe is more realistic and reasonable, which involves simulating changes to a feature in the test product (for which WTP is to be gauged) against relevant competition and the None alternative. This is described at: https://www.sawtoothsoftware.com/download/techpap/monetary.pdf
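In rough sketch form, the idea is to find the price premium on the enhanced test product that brings its share back to its original level (simulate_share here is just a stand-in for whatever market simulator you use, returning the test product's share at a given premium with the competition and the None held fixed):

    # Hypothetical sketch of the simulation-based WTP search described in the paper
    wtp_by_simulation <- function(simulate_share, base_share) {
      gap <- function(premium) simulate_share(premium) - base_share
      # the search range is a placeholder; it must bracket the answer
      uniroot(gap, interval = c(0, 100))$root
    }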
answered Jan 31 by Bryan Orme Platinum Sawtooth Software, Inc. (172,790 points)
Nico,

I agree with Bryan that the better way to estimate WTP is the simulation-based method described in the paper he linked you to.  

The ratio method you describe would, I believe, resemble what folks do to estimate their mixed logit models in WTP space.

You might also want to check out a really useful paper about WTP (I think it refers to the ratio method as "pseudo-WTP"):  

Allenby, Greg, Jeff Brazell, John Howell & Peter Rossi (2014). "Economic valuation of product features." Quantitative Marketing and Economics, 12, 421-456. doi:10.1007/s11129-014-9150-x
Hello Bryan, hello Keith,

As always, thank you for your advice. The paper also provides interesting insights into different methods of estimating willingness to pay and their use in marketing decisions.

@Bryan:
Do you know of any literature on how to determine confidence intervals in HB estimation, i.e. something like a written-up version of your conversation about the alpha draws?

I'm also wondering whether, as an alternative to bootstrapping (i.e. repeatedly drawing from, say, the last 2,000 iterations of the HB algorithm), you could simply run more iterations. What exactly do I mean by that?
Variant 1 (already described): I run 20,000 iterations and take the last 2,000 draws to form the average beta coefficients. I assume that these last 2,000 draws stand in for the true population of beta parameters, repeatedly resample from them, and recalculate the average beta coefficients each time. This yields a confidence interval for the mean of the parameter estimates and thus also for the WTP estimates, as in the sketch below.
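As a sketch in R (with draws standing for the 2,000 saved draws of one coefficient):

    # Variant 1: resample from the 2,000 saved draws and average each resample
    boot_means <- replicate(10000, mean(sample(draws, replace = TRUE)))
    quantile(boot_means, c(0.025, 0.975))   # interval for the mean coefficient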

Variant 2: I run e.g. 110,000 iterations, where the first 10,000 are the burn-in phase. After that, I calculate the average beta parameters for every 2,000 draws, i.e. a total of 50 (= 100,000/2,000) means. This also results in a range of values which, at least in theory, can be used as a confidence interval (see the sketch below).
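Again as a sketch (draws now being the 100,000 post-burn-in draws of one coefficient):

    # Variant 2: split the 100,000 draws into 50 batches of 2,000 and average each batch
    batch_means <- tapply(draws, rep(1:50, each = 2000), mean)
    quantile(batch_means, c(0.025, 0.975))  # spread of the batch means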

Whether either of these two variants makes sense (more than the other, or at all) is of course another matter.
check out...

"A META-ANALYSIS ON THREE DISTINCT METHODS USED IN MEASURING VARIABILITY OF UTILITIES AND PREFERENCE SHARES WITHIN THE HIERARCHICAL BAYESIAN MODEL" which made be found at:

https://www.sawtoothsoftware.com/download/techpap/2018Proceedings.pdf

My understanding is that the lower-level beta draws are appropriate for finding confidence bands for individuals' parameters, not the population parameters.
Hello Bryan, hello Keith,

Thanks again for your support. I had a few more thoughts and wanted to hear your opinion on them. In order to determine confidence intervals for willingness to pay at the individual level, I thought about using the beta draws and calculating the WTP ratio for each draw.

Suppose I had the following beta-draws (in practice, of course, there are thousands):

Attribute: 1.5, 1, 1.2, 1.4, 1.6
Price: -2.5, -2.0, -2.2, -2.1, -2.4

Now I draw for example 3 times and form the WTP coefficient for each draw:

Draw 1: 1.5/-2.4 = -0.625
Draw 2: 1.6/-2.4 = -0.667
Draw 3: 1.4/-2.0 = -0.7

The values drawn are purely random and could of course look different. There could also be many more draws. Now I take the draws and calculate the mean and standard deviation.

Mean = (-0.625 - 0.667 - 0.7) / 3 = -0.664

Under this approach, the uncertainty in the beta draws would be incorporated directly into the WTP calculations, instead of first averaging the beta draws  and then calculating the WTP coefficient. The confidence intervals are of course relatively wide, but for me they are also more plausible.
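Put into R, the toy example would look something like this (with only five made-up draws; in practice there would be thousands, and one could also pair the attribute and price draws from the same MCMC iteration to preserve their posterior correlation instead of drawing them independently):

    attr_draws  <- c(1.5, 1.0, 1.2, 1.4, 1.6)
    price_draws <- c(-2.5, -2.0, -2.2, -2.1, -2.4)

    # Draw attribute and price coefficients at random and form the WTP ratio each time
    wtp <- sample(attr_draws, 20000, replace = TRUE) / sample(price_draws, 20000, replace = TRUE)

    c(mean = mean(wtp), sd = sd(wtp))
    quantile(wtp, c(0.025, 0.975))   # individual-level WTP interval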

@Keith:
When using the ratio method, the models are estimated in preference space. This leads to implausible distributions of willingness to pay because, for example, the ratio of two normally distributed random variables is Cauchy distributed. To avoid this, models (e.g. mixed logit, but also HB) can be formulated directly in the so-called WTP space.
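The reparameterisation itself is simple: in preference space the utility is U = b_x*x + b_p*p, while in WTP space it is written as U = -lambda*(p - w*x), with lambda = -b_p and w = b_x / (-b_p). Estimating lambda and w directly (with suitable priors) avoids forming the ratio of two normal draws after the fact.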

See, for example: https://link.springer.com/chapter/10.1007/1-4020-3684-1_1

The R package RSGHB offers the possibility of writing the likelihood function yourself and thus estimating the HB model directly in WTP space, without using the ratio method or a market simulator. However, this requires quite a lot of know-how from the user. I was wondering whether Lighthouse Studio can do that too.
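For illustration, here is a generic sketch of such a likelihood in R (not tied to RSGHB's actual interface): for one choice task with attribute design matrix X (alternatives x attributes, price excluded), price vector p and chosen alternative y, the log-probability of the chosen alternative in WTP space would be something like:

    wtp_space_loglik <- function(lambda, w, X, p, y) {
      v <- -lambda * (p - as.vector(X %*% w))  # systematic utilities in WTP space
      v[y] - log(sum(exp(v)))                  # MNL log-probability of the chosen alternative
    }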
Nico,

You're right about the implausible distributions, which is why folks estimating WTP with mixed logit via maximum simulated likelihood use constrained distributions.

You cannot specify your own utility function in Lighthouse Studio. You're right that RSGHB allows you to do so, but I know you can also write your own utility function in Biogeme (though the documentation there is a little scant).

NLogit will estimate directly in WTP space, and that may be a pretty good and well-documented solution for you. Finally, you might want to check out the new Apollo package in R, which seems to allow all manner of models.
Hello, Keith,

Wow, it's amazing how I missed Stephane Hess's 'apollo' package. It seems to contain so many functions that it is daunting at first sight, but it is obviously what I was looking for. Thanks a lot for the tip!

Now only the question about the beta draws remains open. Does the procedure make sense, or is there a flaw in my reasoning?
Nico,

I know they teach a week-long course on Apollo at Leeds University at some point over the summer, if you're interested.

Your beta draws idea makes sense to me, but @Bryan may want to comment on that.
...