# How to interpret the different HB estimation results?

Hi there,

After completing my ACBC survey, I now want to get the attribute importances. I ran the HB estimation three times: with price as linear, log linear, and piecewise. For linear and log linear, I only entered the min and max price, while for piecewise, I entered these two plus three additional prices in between the min and max.

The strange thing is that the results for the attribute importances vary with each method.

Example:
Linear: attribute a is ranked 5th (4.73%)
Log linear: attribute a is ranked 3rd (15.45%)
Piecewise: attribute a is ranked 7th (3.91%)
(Including price, I have 7 attributes.)

I'm very confused by my results and have trouble interpreting them. Can you help me? When I do my analysis, which results should I use: the ones from linear, log linear, or piecewise?

I googled university theses to check how they interpret their data, but that didn't help either. I can't find a document that shows, step by step, the standard approach to interpreting the data. I understand the general difference between linear, log linear, and piecewise price functions; I just don't know what to do with my results.

Thanks a lot for helping me.
retagged Feb 23, 2016

+1 vote
This does not sound right.  I'm worried that you have not kept your range of prices consistent across the runs.

For example, let's imagine the range from lowest possible to highest possible prices is \$1,000 to \$5,000.

For the Linear Price model, you should go to the Attribute Coding tab and then click the Price attribute to reveal the panel to the right (Price Values), which has 30 cells in it.  If \$1,000 to \$5,000 is the total theoretical range of prices, then you should have \$1,000 in cell 1 and \$5,000 in cell 2.

Now, when you do your log-linear price model, you need to represent more prices along the continuum in the Price Values table to make sure that the "bend" of the log-linear curve makes its way into the market simulator.  For that to happen, you must specify additional price points between \$1,000 and \$5,000.  So, I'd recommend you use at least 5 price points in the Price Values table: \$1,000; \$2,000; \$3,000; \$4,000; \$5,000.  Change the coding method for the attribute to Log-Linear.  Now, run HB again.
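To see why the interior points matter, here is a minimal sketch (Python, with a made-up coefficient `beta`; your HB run estimates this per respondent). If the simulator only receives utilities at the two endpoint prices, it can only interpolate linearly between them, and the curvature of the log-linear function is lost at interior prices:

```python
import math

# Hypothetical price coefficient, for illustration only.
beta = -2.0

prices = [1000, 2000, 3000, 4000, 5000]

# Log-linear coding: utility is beta * ln(price).
log_utils = [beta * math.log(p) for p in prices]

def lerp(p, p0, p1, u0, u1):
    """Straight-line interpolation between two (price, utility) points."""
    return u0 + (u1 - u0) * (p - p0) / (p1 - p0)

# Compare the true log-linear utility at each price with what a
# simulator would reconstruct from only the two endpoint utilities.
for p, u in zip(prices, log_utils):
    approx = lerp(p, prices[0], prices[-1], log_utils[0], log_utils[-1])
    print(f"price {p}: log-linear utility {u:.3f}, 2-endpoint interpolation {approx:.3f}")
```

At the endpoints the two agree exactly, but at interior prices the straight line misses the bend, which is why passing the interior price points forward matters for simulation.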

For piecewise, you again need to specify more than 2 price levels in the "Price Values" table.  Again, you could use \$1,000; \$2,000; \$3,000; \$4,000; \$5,000.  Change the coding method for the attribute to Piecewise.  Run HB again.

If you continue to see significant differences in your price importances under the different coding schemes, you really should send your project in to our tech support group and let them step through this with you.
answered Feb 24, 2016 by Platinum (152,955 points)
Thank you.
I did the calculations as you recommended. Now there are only slight differences between linear, log linear, and piecewise (attribute a: 5.59% / 6.44% / 4.27%) ...

Looking at all attributes, between linear and log linear the percentages changed as well, but the changes did not influence the ranking. However, the attribute ranking for piecewise is slightly different: the attributes at rank 5 and rank 6 switched places. Is that normal?
Glad you were paying attention and seem to have found the problem.  Your attribute importances look much more stable across the different runs.  By specifying your price function with different shapes, and with individual-level estimation via something like HB, you will find a little bit of difference across models.  These differences can lead to rank-order changes in attribute importance across a list of attributes, but the metric change in utilities or importances shouldn't be dramatic.
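For readers unsure how these importance percentages arise in the first place, here is a minimal sketch of the standard calculation (the part-worth utilities below are made up for illustration): each attribute's importance is its utility range divided by the sum of ranges across all attributes.

```python
# Made-up part-worth utilities for three hypothetical attributes.
partworths = {
    "Price":   [2.0, 0.5, -0.8, -1.7],
    "Quality": [1.2, -1.2],
    "Color":   [0.4, 0.1, -0.5],
}

# Importance = attribute's utility range / sum of all ranges, as a percent.
ranges = {a: max(u) - min(u) for a, u in partworths.items()}
total = sum(ranges.values())
importances = {a: 100 * r / total for a, r in ranges.items()}

for a, imp in importances.items():
    print(f"{a}: {imp:.2f}%")
```

Because the ranges depend on how the price function is coded, small shifts in the estimated price utilities propagate into all the other attributes' percentages, which is why nearby ranks can swap.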
Hi Bryan,

I had a similar issue for linear and log-linear, i.e. varying results. I did a rerun using your suggestions (2 price points for linear, 7 for log-linear), but the two log-linear results had not changed. I guess it still makes sense, though, as the one attribute with strongly varying level prices (making up between 1-500% of total price depending on the choice of other levels) gained relative importance in the log-linear option, while price lost relative importance (see below).

How would I decide between the two results, i.e. which one explains my conjoint better? Does the distribution of winning prices (BYO and Tournament) maybe give a hint? Would it even make sense to add a piecewise function, and if so, where would I get the different slopes from? Maybe also from winning prices (BYO and Tournament)? Or is the general distribution of prices a better indicator, since due to the near-neighbour concept they are distributed to acceptable prices anyway?

Lin price function (relative importance):
Attribute    Average Importance
Aest    5.66354
Qual    17.53030
CtryProd    6.81875
Fib    6.50511
NatFib    5.45927
ProdProc    6.66874
Work    10.15803
Price    41.19626

Loglin price function (relative importance):
Attribute    Average Importance
Aest    6.00049
Qual    18.82615
CtryProd    10.58680
Fib    7.01233
NatFib    5.91111
ProdProc    7.19086
Work    10.79543
Price    33.67682
The two log-linear models (one with just the two extreme levels versus another with 5 interior points for the purpose of sending additional levels along to the market simulator) will produce the same results in terms of model fit and the attribute importances from the utility estimation.  Nothing has changed for the purpose of those calculations.  The only thing that adding more interior points does for log-linear coding is make sure that the "bend" of the curve gets communicated forward to the market simulator.

If you are comparing two HB runs, one with linear and the other with log-linear, then I would just pay attention to the fit statistic and choose whichever gave the better fit (assuming ALL other settings are held constant in the HB model).  This is one of those rare times in HB that paying attention to internal fit to choose between models could work.  That's because the number of parameters estimated for price is exactly the same (one parameter).  All the other attributes are set the same way.  All the other HB settings are the same.  So, we can just isolate which model produces the better fit.
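As a concrete sketch of that comparison, the per-respondent fit statistic reported by HB routines is typically RLH (root likelihood): the geometric mean of the predicted probabilities of the choices the respondent actually made. The probabilities below are made up for illustration; the shape of the calculation is what matters.

```python
import math

def rlh(chosen_probs):
    """Root likelihood: geometric mean of the predicted probabilities
    of the alternatives that were actually chosen."""
    n = len(chosen_probs)
    return math.exp(sum(math.log(p) for p in chosen_probs) / n)

# Hypothetical predicted probabilities for one respondent's choices
# under each price coding (made-up numbers).
linear_probs = [0.62, 0.55, 0.71, 0.48]
loglin_probs = [0.66, 0.58, 0.74, 0.52]

print(f"Linear RLH:     {rlh(linear_probs):.3f}")
print(f"Log-linear RLH: {rlh(loglin_probs):.3f}")
```

With one price parameter in each model and every other setting held constant, whichever coding yields the higher RLH fits the choices better, which is exactly the isolated comparison described above.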