# piecewise price function - breakpoints

I can think of choosing those points to divide the price range into quantiles (of the price distribution in my experiment), or to distinguish between the price sensitivity of low- vs. high-quality brands. And of course there can be analytically derived breakpoints. Do you have any practical guidelines for choosing breakpoints for the piecewise price function in CBC or ACBC?
edited Aug 5, 2013

I like to run the "Get Prices" routine to see the distribution of prices shown to respondents across all the product concepts.  That gives me an idea of how many data points I have between breakpoints (and flags possible limitations if some regions are too thin).
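As a sketch of that first check, here is roughly how you might count data points between candidate breakpoints. The prices below are simulated for illustration; in practice you would use the actual prices exported from the "Get Prices" routine.

```python
import numpy as np

# Hypothetical prices shown across all product concepts; simulated
# here as uniform draws purely for illustration.
rng = np.random.default_rng(0)
prices = rng.uniform(1000, 2000, size=5000)

# Candidate breakpoints, including the endpoints of the range.
breakpoints = [1000, 1200, 1400, 1600, 1800, 2000]

# Count how many observed prices fall between each pair of breakpoints;
# a thin region suggests the local utility will be poorly estimated.
counts, _ = np.histogram(prices, bins=breakpoints)
for lo, hi, n in zip(breakpoints[:-1], breakpoints[1:], counts):
    print(f"${lo}-${hi}: {n} observations")
```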

Let's imagine the price range runs from a low of \$1,000 to a high of \$2,000.

I might first run a model (constraining price to be negative) with cutpoints at \$1200, \$1400, \$1600, \$1800 (in addition to the endpoints of \$1000 and \$2000).  Then, I'd plot the average utilities across the sample and also record the average RLH across the sample.  I'd look for any points along the function where there were non-linear "elbows".
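For context, piecewise price coding can be thought of as linear interpolation between the cutpoints: a utility is estimated at each grid point, and prices in between get a weighted mix of the two adjacent point utilities. A minimal sketch of that idea (my own illustration of the coding, not Sawtooth's actual implementation):

```python
import numpy as np

def piecewise_basis(price, cutpoints):
    """Linear-interpolation weights over the cutpoint grid.

    Each price becomes a convex combination of the two adjacent
    cutpoints, so utilities estimated at the grid points imply a
    piecewise-linear price function in between.
    """
    cuts = np.asarray(cutpoints, dtype=float)
    basis = np.zeros(len(cuts))
    i = np.clip(np.searchsorted(cuts, price, side="right") - 1,
                0, len(cuts) - 2)
    w = (price - cuts[i]) / (cuts[i + 1] - cuts[i])
    basis[i] = 1.0 - w
    basis[i + 1] = w
    return basis

# First candidate grid: endpoints plus $1200/$1400/$1600/$1800.
cuts = [1000, 1200, 1400, 1600, 1800, 2000]
print(piecewise_basis(1500, cuts))  # price halfway between $1400 and $1600
```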

Then, I might run a second model (again constraining price to be negative) with cutpoints of \$1300, \$1500, \$1700, and \$1900.  I'd again look at the RLH and examine the average plot of utilities for elbows.

If I were curious about the \$1100 price point, I might repeat again with cutpoints of \$1100, \$1300, \$1500, and \$1700.

I do this procedure to look for points along the continuum that might indicate non-linearity.  I also check how much (if at all) the RLH increases under these different models versus a strictly linear price coefficient.

If I find a definite elbow, I include it as a cutpoint.  When I see mostly linear trending, I don't try to fit a cutpoint within that region of linear price response.
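The elbow hunt above can also be done numerically rather than just by eye. A rough sketch, with made-up average utilities and an arbitrary slope-ratio threshold (both are assumptions for illustration):

```python
import numpy as np

# Hypothetical average utilities at each cutpoint (sample means from
# a run with price constrained to be negative); values invented here.
cuts = np.array([1000.0, 1200, 1400, 1600, 1800, 2000])
avg_utils = np.array([1.10, 0.85, 0.62, 0.40, -0.30, -1.05])

# Slope of each linear segment (utility change per dollar).
slopes = np.diff(avg_utils) / np.diff(cuts)

# A large relative change in slope between adjacent segments flags a
# candidate "elbow"; the 2x threshold is arbitrary, not a standard.
for i in range(1, len(slopes)):
    ratio = slopes[i] / slopes[i - 1]
    if ratio > 2 or ratio < 0.5:
        print(f"possible elbow near ${cuts[i]:.0f} (slope ratio {ratio:.2f})")
```

With these made-up numbers the segment below \$1600 is much shallower than the one above it, so \$1600 gets flagged as a candidate cutpoint.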

You might wonder why I don't just include cutpoints at every \$100 in a single run: \$1100, \$1200, \$1300, etc.  Well, I think that might overfit, with too many parameters for the model to estimate...but so much depends on how many respondents and how many choice tasks per respondent you have.
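The parameter count behind that worry is simple arithmetic. Under the common convention of one utility per grid point (endpoints plus interior cutpoints) with one level fixed for identification (an assumption; the exact coding depends on your setup):

```python
def n_price_params(cutpoints):
    # interior cutpoints + 2 endpoints, minus 1 fixed for identification
    # (assumed convention for illustration)
    return (len(cutpoints) + 2) - 1

coarse = [1200, 1400, 1600, 1800]       # one of the staged runs above
fine = list(range(1100, 2000, 100))     # every $100: $1100 ... $1900

print(n_price_params(coarse))  # 5 price parameters
print(n_price_params(fine))    # 10 price parameters
```

Doubling the price parameters per respondent, on top of the other attributes, is what makes the all-at-once grid risky with typical task counts.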

But, I'm not so sure my approach is best.  It has weaknesses for sure.  For example, I'm only looking at average plots of utilities across the sample.  It's quite possible that segments of respondents have different points of non-linearity (different elbows), and aggregating across the entire sample in the plot would mask that.

If others have better suggestions, please chime in.
answered Aug 12, 2013 by Platinum (152,955 points)