RFC Simulation with Interpolated Prices and DR None On vs Off

I have utilities data from a CBC/HB study that included discrete price levels, four four-level attributes, two binary attributes (Features X and Y: Yes/No), and a dual-response purchase/would-not-purchase question.

I uploaded the data to the online market simulator and ran RFC with the scale factor set to 1.0. I want to compare a "base" product with certain attribute levels and without Feature X at price 1 (not a tested price level) against an "enhanced" product that is identical to "base" except that it includes Feature X at price 2 (a price higher than price 1). To do that, I set price as a continuous variable and interpolated the price points I needed (see the sketch after the scenario list). I ran RFC with the following scenarios:

Scenario 1: Base at $P1 vs. Enhanced at $P2, None UNCHECKED, such that product SOPs are equivalent.
Scenario 2: Same as S1 but with None CHECKED. None is over 80% SOP and product SOPs are not equivalent.
Scenario 3: None CHECKED, Base at $P1, Enhanced at $P3, such that product SOPs are equivalent. None is still over 80%.
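
For concreteness, the interpolation I mean is just linear interpolation between the part-worths of the two nearest tested price levels. A minimal sketch with made-up numbers (my actual utilities differ); note that np.interp clamps rather than extrapolates outside the tested range:

# Linear interpolation of a price part-worth between tested levels.
# Prices and part-worths below are hypothetical; real values come from CBC/HB.
import numpy as np

tested_prices = np.array([10.0, 20.0, 30.0, 40.0])  # hypothetical tested levels
price_utils = np.array([1.2, 0.5, -0.3, -1.4])      # hypothetical part-worths

def price_utility(p):
    """Part-worth at price p, linearly interpolated between tested levels."""
    return np.interp(p, tested_prices, price_utils)

print(price_utility(15.0))  # falls between the $10 and $20 part-worths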

Questions:
1. For this data, should I run RFC with or without None?
2. If WITHOUT, is the difference between $P2 and $P1 properly interpreted to be the price premium associated with Feature X?
3. If WITH, what is the interpretation of the None SOP, and what is the proper interpretation of $P3 in this context?

Thanks SO MUCH in advance, Sawtooth braintrust!
asked Oct 29, 2018 by CJ (125 points)

1 Answer

+1 vote
I wouldn't use RFC for this type of simulation, where there are only two competitive products in the scenario.  In that case, RFC just makes you wait longer, and there is no opportunity to correct for product similarity.
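
For context, here is a stripped-down sketch of what RFC does, with toy utilities and product-level Gumbel error only; the real method draws attribute-level error (which is where the similarity correction comes from) and has additional tuning parameters:

# Simplified Randomized First Choice: add Gumbel error to each product's
# total utility, tally first choices over many draws, report share of wins.
import numpy as np

rng = np.random.default_rng(0)

def rfc_shares(utils, n_draws=5000, error_scale=1.0):
    """utils: (n_respondents, n_products) array of total utilities."""
    n_resp, n_prod = utils.shape
    wins = np.zeros(n_prod)
    for _ in range(n_draws):
        gumbel = rng.gumbel(scale=error_scale, size=(n_resp, n_prod))
        wins += np.bincount(np.argmax(utils + gumbel, axis=1), minlength=n_prod)
    return wins / wins.sum()

utils = np.array([[1.0, 0.8], [0.2, 0.9], [1.5, 1.4]])  # toy utilities
print(rfc_shares(utils))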

So, I would prefer to use the Share of Preference market simulation method (this is the familiar logit equation).
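
That method is just the logit rule applied to each respondent's summed utilities and then averaged across respondents. A minimal sketch with the same toy utilities:

# Share of Preference: exponentiate total utilities, normalize within each
# respondent (logit rule), then average the shares across respondents.
import numpy as np

def sop_shares(utils, scale=1.0):
    """utils: (n_respondents, n_products) array; scale = the exponent."""
    e = np.exp(scale * utils)
    return (e / e.sum(axis=1, keepdims=True)).mean(axis=0)

utils = np.array([[1.0, 0.8], [0.2, 0.9], [1.5, 1.4]])  # toy utilities
print(sop_shares(utils))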

Generally, if conducting simulations in this way to derive WTP, I would like to have as large and realistic a competitive set as I could.  That would include the product I'm testing, all relevant competitors (those large enough to worry about), plus the None.

None at 80% is a pretty big share of the vote being given to the None.  That would make me worry about whether I had enough sample size to stabilize the WTP, since only 20% of the vote was being cast toward the base product vs. the enhanced version of the base product at the higher interpolated price.
answered Oct 29, 2018 by Bryan Orme Platinum Sawtooth Software, Inc. (164,615 points)
Thanks Bryan!  I'm glad my real name isn't on this so I can say I'm having a choice modeling fangirl moment. LOL

This is what I suspected, but it's good to have confirmation from you. Sample size here is just over 450, so around 90 in the 20% "purchase" group. The SOP simulation returns nearly identical output to RFC for the scenarios described above.

Feature X is also a feature that would not have been salient to respondents prior to taking the survey, so my sense is that our results indicate this feature (as described in the survey) is too low-impact to support any reliable price premium estimates. Importance for Feature X was the lowest of all tested attributes.

If you have time and inclination, could you suggest alternative means to estimate a potential price premium associated with a binary attribute? Or is it even a worthwhile exercise given the low importance?

Thanks again!
I don't think I clarified this, but I think it's generally better to have the None in there to siphon off the votes of respondents who wouldn't want to be forced to choose between the two test products (they're not in the market).   But when the effective sample size of those votes for the two test products drops below about 200, I'd start to get nervous about the precision I was obtaining for that WTP.
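
To put rough numbers on that precision worry, a back-of-the-envelope binomial calculation (treating the roughly 90 effective respondents mentioned above as independent votes):

# Back-of-the-envelope precision: binomial standard error of a product's
# share when only ~90 respondents' votes effectively count.
import math

n_eff = 90  # ~20% "purchase" group out of 450, per the thread
p = 0.5     # share near the equivalence point
se = math.sqrt(p * (1 - p) / n_eff)
print(f"SE of share: {se:.3f}")  # ~0.053, i.e. roughly +/-10 points at 95%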

A worry with conjoint analysis for feature valuation is that by placing an unimportant attribute in the survey, the researcher may end up biasing the importance of that feature upwards relative to more important attributes.  The conjoint survey can make people aware of that feature when in the real world it may never rise to the surface to be considered in the tradeoffs.  I don't think it's an issue of whether the attribute is binary or not; just whether all the major attributes have been included in the study as well and whether that unimportant binary attribute has artificially been given a salience boost through the introductory screens or by virtue of the way it is represented in the conjoint questions.
Thanks so much! One more question. I'm attempting to replicate a third party's RFC simulations, and I suspect they used the desktop software rather than the online simulator, which I understand uses different formulas (e.g., for the Gumbel error) and therefore produces different results.

Assume an RFC simulation with Product A, Product B, and None (which I know we don't like, but it's what they did, so...).  Product A does not have Feature X and is at Price 1; Product B has Feature X at Price 2 and is otherwise identical. The source reports that Price 2 is $50 more than Price 1 to achieve equivalent SOP, ergo the price premium for Feature X is $50.
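
For what it's worth, I'm replicating that share-equalizing search numerically as a simple bisection on the interpolated price. The sketch below uses toy logit shares in place of the simulator's output:

# Bisection for the share-equalizing price: raise Product B's price until its
# share matches Product A's. All utilities are made-up toy values; a real
# replication would call the simulator's share routine instead of logit_shares.
import numpy as np

base_util = 1.0                      # part-worths shared by A and B (toy)
feature_x_util = 0.5                 # toy part-worth for Feature X
none_util = 0.0                      # toy None threshold
price_util = lambda p: -0.05 * p     # toy linear price utility
price_a = 20.0                       # Product A's fixed price

def logit_shares(price_b):
    u = np.array([base_util + price_util(price_a),                   # A
                  base_util + feature_x_util + price_util(price_b),  # B
                  none_util])                                        # None
    e = np.exp(u)
    return e / e.sum()

lo, hi = price_a, price_a + 100.0
while hi - lo > 1e-4:
    mid = 0.5 * (lo + hi)
    s = logit_shares(mid)
    lo, hi = (mid, hi) if s[1] > s[0] else (lo, mid)

print(f"premium = ${0.5 * (lo + hi) - price_a:.2f}")  # 0.5 / 0.05 = $10 here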

In the online RFC, I reach equivalent SOP at only a $25 price premium for Product B. For a different product, I also get a lower implied premium, but only by a few dollars.

Why would this be? Is this entirely due to the differences in formulas between the two systems? I know there are also more settings in the desktop software that the source may not have reported. Or is there some fundamental flaw in the methodology that makes it more sensitive to the different calculations?
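
One toy check that might narrow it down (plain logit with made-up utilities, not the actual desktop or online formulas): under Share of Preference, the share-equalizing premium does not move with the scale factor, because with otherwise-identical products the shares tie exactly when the summed utilities tie. If that carries over, a gap this large would have to come from the RFC error draws and settings rather than a simple rescaling:

# Toy check: under plain logit Share of Preference, the share-equalizing
# premium is identical at any scale factor. All numbers are illustrative.
import numpy as np

def equalizing_premium(scale, feature_util=0.5, price_slope=-0.05,
                       base=1.0, none=0.0, price_a=20.0):
    lo, hi = price_a, price_a + 100.0
    while hi - lo > 1e-6:
        mid = 0.5 * (lo + hi)
        u = scale * np.array([base + price_slope * price_a,
                              base + feature_util + price_slope * mid,
                              none])
        s = np.exp(u) / np.exp(u).sum()
        lo, hi = (mid, hi) if s[1] > s[0] else (lo, mid)
    return 0.5 * (lo + hi) - price_a

for sc in (0.5, 1.0, 2.0):
    print(sc, round(equalizing_premium(sc), 2))  # same $10 premium each time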

Thanks in advance!
...