Dollar Allocation Warning

I am trying to process a warning expressed earlier on this forum: that one should not use dollars spent as allocated chips (even if we do it the volumetric way, with a proper None allocation), because in principle any price attribute in the design would NOT be truly independent.

It sounded scary at first, but what does it actually mean? Independent variables are called independent because they are supposed to have low correlation with each other, not necessarily with the dependent variable.
Local dependence between price levels within an attribute does exist, but it will not change, inflate, or deflate because of the type or format of dependent variable we choose.
Such price part-worths will have non-zero correlation with each other anyway, just because of their nature. Some analysts deal with it via linear coding, but many still want to see price as part-worths. And we are not saying that would be wrong in principle...
So why would it be wrong to model share of budget as a volumetric CBC where each unit is simply $1?

- What is your budget for lunch?  
- $20
- Which one of the following would you buy?
Option1:     Meal=Burger, Price=$10.00
Option2:     Meal=HotDog, Price=$3.00
Option3:     Meal=Can of Kimchi, Price=$1.00  

- I would buy Option2
- Okay, we code your answer as...
Option2 = 15% (or 3 chips), while None gets 85% (or 17 chips)...
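A minimal sketch of that coding in Python (the function name and numbers are mine, not an actual Sawtooth routine; it assumes one chip per dollar, with the unspent budget going to None):

    def code_as_chips(budget, chosen_price):
        # Convert one discrete choice plus a stated budget into a chip
        # allocation where each chip is $1.
        chosen_chips = chosen_price           # dollars spent on the chosen option
        none_chips = budget - chosen_price    # unspent budget goes to None
        return (chosen_chips, none_chips,
                chosen_chips / budget,        # chosen share, e.g. 3/20 = 15%
                none_chips / budget)          # None share, e.g. 17/20 = 85%

    print(code_as_chips(budget=20, chosen_price=3))
    # -> (3, 17, 0.15, 0.85)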


I understand that utilities for such a price attribute in a dollar allocation exercise would come out weird, e.g., the highest utility for the highest price, because a higher price takes more dollars out of the budget or wallet (i.e., a larger % of chips) and therefore gets a higher share and a higher utility. But utilities would not increase proportionally to the nominal price, since demand would decline, resulting in a lagging share of wallet.

Aside from that formal weirdness in their relative magnitudes, would these price utilities, along with the other attributes' utilities, still be relevant for predicting proper shares of wallet in simulations?
I don't see a reason why they would not.

What am I missing?
asked Jun 13, 2018 by furoley Bronze (845 points)

1 Answer

0 votes
"Aside from the formal weirdness of their relative magnitudes" is kind of an important aside.  If your relative magnitudes are funky and non-comparable, I'm not sure what value there is to a model like this, at all.  

Your prices are perfectly correlated with your allocation weights, so they are not independent of the weights, and I have no idea what they would even mean.

But if all that isn't reason enough not to do this, and you're happy with a model that predicts without being interpretable, well I guess you can try it.  I can say I tried this once and ended up not using it because I couldn't explain the results in a way that satisfied the client who insisted on this kind of design and analysis.
answered Jun 18, 2018 by Keith Chrzan Platinum Sawtooth Software, Inc. (67,150 points)
Thank you, Keith. I think you are possibly referring to two independent issues:
- hard-to-interpret utilities for prices
- model not fitting the data / low hit rate

You are pretty sure about the 1st one, but it looks like you are not sure about the 2nd one. I understand you are saying we might NOT want to use such a "black box" model as a predictive vehicle.

I am on this unusual route not because I want to try something new, but because I either need to run 50 models (2 hours each) for an MBC restaurant menu situation, or code these choices as a volumetric CBC model.

But in the volumetric CBC approach, determining the max volume across all screens is even more questionable for this specific project, because the items shown & selected on each MBC screen are not comparable:
a "ketchup" supplement for french fries at $0.25 vs. a "green peppercorn bone-in filet mignon" at $49.95.

I would rather come up with some unusual explanation for the weird price utilities, like "dragging power of wallet share", and give the client relevant, better-performing predictions in a simulator, than sacrifice that in order to operate with more familiar and intuitive utilities.

Am I fundamentally wrong?
If you and your client are satisfied with the explanation and you get better performing simulations, who am I to complain?  I'd love to hear how well this works.
Thank you, Keith. I will share results, but CBC/HB is running slowly.

Also, is there any rule of thumb for fit statistics on chip allocation conjoint? As far as I understand, the typical RLH definition would not be very relevant there, since we are not in a discrete-response situation. So the geometric mean of winning likelihoods will not tell me much, since we are not predicting winners or a single choice.

For instance, I currently see RLH at 47% but Pct. Cert. at 75%. Quite a discrepancy. Is there any recommendation for allocation models?
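For what it's worth, here is how I understand the two statistics are computed, as a minimal Python sketch (assuming the usual convention of treating each chip as one independent choice; the helper and the toy numbers are hypothetical):

    import numpy as np

    def fit_stats(pred_shares, chips):
        # pred_shares: (tasks, alts) predicted shares; chips: (tasks, alts)
        # chips the respondent allocated. Each chip counts as one choice.
        n_chips = chips.sum()
        ll = (chips * np.log(pred_shares)).sum()      # log-likelihood of data
        ll0 = (chips.sum(axis=1) *
               np.log(1.0 / chips.shape[1])).sum()    # chance (null) model
        rlh = np.exp(ll / n_chips)      # geometric mean likelihood per chip
        pct_cert = 1.0 - ll / ll0       # percent certainty vs. chance
        return rlh, pct_cert

    # Toy example: 2 tasks, 4 alternatives (incl. None), 20 chips each
    chips = np.array([[3., 0., 0., 17.],
                      [0., 5., 0., 15.]])
    shares = np.array([[0.20, 0.10, 0.05, 0.65],
                       [0.10, 0.30, 0.10, 0.50]])
    print(fit_stats(shares, chips))

Since RLH is a likelihood on a 0-1 scale (chance level = 1/alternatives) while Pct. Cert. is an improvement-over-chance measure on the log scale, a gap between the two is not surprising by itself.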
Well, we expect RLH to be lower for allocation data, but I don't have any guidelines.
Root mean squared error, calculated at the individual level against the original responses and then aggregated across the sample?

There is no room for Hit Rate or RLH here.


...or Mean Absolute Error for aggregate-level results across all screens shown?
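Something like this is what I have in mind, a sketch only (the function name is hypothetical; the measures are just standard RMSE and MAE over predicted vs. observed shares):

    import numpy as np

    def allocation_errors(pred_shares, obs_shares):
        # pred_shares, obs_shares: (respondents, tasks, alts) arrays of
        # shares that sum to 1 within each task.
        # Individual-level RMSE per respondent, averaged across the sample:
        rmse_per_resp = np.sqrt(((pred_shares - obs_shares) ** 2).mean(axis=(1, 2)))
        mean_rmse = rmse_per_resp.mean()
        # Aggregate-level MAE: average shares across respondents first,
        # then compare predicted vs. observed shares per screen:
        mae_agg = np.abs(pred_shares.mean(axis=0) - obs_shares.mean(axis=0)).mean()
        return mean_rmse, mae_agg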