What should we do when current customers do not choose their own product in the experiment and in the simulator?

Dear conjoint experts,

In a recently fielded MBC study we surveyed a large number of current subscribers to a newspaper (alongside other respondents with no product experience). In the choice tasks, only ~40% of these subscribers actually chose the product they currently receive at the current market price.

Problem 1:
One could argue that the paper is simply not that attractive (anymore), so that, given the choice, subscribers would opt out of their subscription. However, our goal is to simulate the effect of a change in the product portfolio relative to the current status quo. Hence, ideally all of the current subscribers should also be simulated to choose their current product.

How can I achieve that?

I can (brutally) adjust the aggregate market shares of the status quo scenario, but that would override everything the observed and modelled choices from the actual data tell us. Furthermore, the client would like split analyses of current subscribers and non-customers, so technically I would have to adjust individual choices (or segment shares) rather than aggregate market shares.

Problem 2:
The price sensitivity conveyed in the (unrealistic) choices, and reflected in the choice models and simulation results, is unreasonably high. The simulation suggests a massive loss of customers in the case of a $1 price increase, which is simply not plausible in light of historical data on past price increases. We also surveyed price knowledge and perceived fairness, and the single linear price coefficient does not reflect this evidence of relative price inelasticity.
While we want to recommend a price increase that (as we are certain) would not have a big negative impact on subscribers, the simulation undermines our better judgement.
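For intuition, here is a minimal sketch of the mechanism behind this problem. The price coefficient and baseline utilities below are invented for illustration (they are not values from the study); the point is only how a steep linear price coefficient in a logit share model turns a modest $1 increase into a drastic simulated share loss:

```python
import math

# Illustrative only: a single linear price coefficient in a two-alternative
# logit model. beta_price and the utilities are assumptions, not study values.

def logit_share(u_own, u_other):
    """Logit choice share of 'own' product against one competitor."""
    return math.exp(u_own) / (math.exp(u_own) + math.exp(u_other))

beta_price = -1.5          # assumed (steep) utility change per $1
u_own, u_other = 1.0, 0.0  # assumed baseline utilities

before = logit_share(u_own, u_other)
after = logit_share(u_own + beta_price * 1.0, u_other)  # after a $1 increase
print(round(before, 3), round(after, 3))
```

With a coefficient this steep, simulated share collapses from roughly 73% to 38% for a single dollar, which is exactly the kind of implausible elasticity described above.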

Does anyone have suggestions on how to approach this situation?

From my understanding I can either change/manipulate:
a) the actual choice data
b) the model specification
c) the estimation
(and in the simulator)
d) the estimation results (i.e. individual parameters)
e) the resulting shares (to affect either share of preference market shares or first choice individual predictions)

I am grateful for any advice, workaround, or even a story to justify these indigestible results to our client. Feel free to contact me via IM.

Thanks in advance,
Alex
asked Nov 16, 2012 by alex.wendland Bronze (2,005 points)
edited Nov 16, 2012 by alex.wendland
Not sure this is relevant to your situation, but typically in these experiments we ignore a large number of other factors that often impact people's preferences.  Salesforce effectiveness, distance to the store, warranties, etc. often are not modelled in conjoint studies, but can help explain big differences between people's stated preferences and their actual actions.  In your case, it may be that the simulator does a very good job modelling preferences (people don't really want the paper), but that your salesforce and your cherub-faced delivery boys win their hearts and their wallets at an unexpected rate.  If people are really saying that they don't want your paper, yet continue to subscribe at a high rate, then kudos to your sales staff and front page headline editors!

1 Answer

+1 vote
Alex,

I don't know the context of your choice questions, but I think another culprit could be that your questionnaire asks about a world in which your respondents make (repeated) choices among competing alternatives; in the real world, inertia has a lot to do with why people keep subscriptions, insurance carriers, banks and utility providers.  This could easily cause a disconnect between modeled and actual shares.

For this reason, the least violent thing you might do is to simulate the status quo and then, rather than compare absolute predicted preference shares (which aren't the same as the real market shares), report percent changes in share. So if your simulation of the status quo shows your client with a 20% share and a second simulation shows that share growing to 22%, you might report to your client that the simulated change adds 10% to simulated volume.
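As a quick sketch (the share values are invented for illustration), the relative-change calculation described here is just:

```python
# Report simulated percent change in volume relative to the status quo,
# rather than absolute preference shares.

def relative_change(status_quo_share, scenario_share):
    """Percent change in simulated volume vs. the status quo scenario."""
    return (scenario_share - status_quo_share) / status_quo_share * 100.0

# The example from the answer: a 20% share growing to 22%
print(round(relative_change(0.20, 0.22), 1))  # -> 10.0 (i.e. +10% volume)
```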
answered Nov 16, 2012 by Keith Chrzan Platinum Sawtooth Software, Inc. (60,425 points)
A book called Nudge has a section on how powerful the status quo is, and why magazine companies among other businesses offer free trials that you have to exert effort to cancel, rather than exert effort to continue.  It's entirely likely that, presented with a fresh choice, many current subscribers would indeed not subscribe.
Dear Brian and Keith,
both your comments were right on. Thank you for the input. All the results are actually amazingly consistent once the simulator is adjusted. After all, an 80% hit rate also means 20% error.
Colleagues working with the rest of the survey data (outside of MBC) confirmed that a third of the sample stated in explicit questions that they would not subscribe again. I was also able to exclude a significant share of the "None"-choosers due to inconsistent behavior throughout the survey.
From there, I disregarded the "None"-choosers in the status quo and calculated changes only relative to the status quo chooser-segment.
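A minimal sketch of that rebasing step, with invented shares (the actual study figures are not shown here): drop the "None" alternative from the status quo and renormalize the remaining shares over the chooser segment only.

```python
# Assumed status quo shares for illustration; "none" = the opt-out alternative.
status_quo = {"own_paper": 0.40, "competitor": 0.25, "none": 0.35}

# Keep only the choosers and renormalize so their shares sum to 1.
choosers = {k: v for k, v in status_quo.items() if k != "none"}
total = sum(choosers.values())
rebased = {k: v / total for k, v in choosers.items()}

print(rebased)  # own_paper ~0.615, competitor ~0.385
```

Subsequent scenarios are then compared against this rebased baseline rather than against shares diluted by the "None"-choosers.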
With regard to the price sensitivity, I switched the simulator from share of preference to first choice and modified the predicted choices according to self-explicated WTP (willingness-to-pay) questions elsewhere in the survey and aggregate churn from historical data on previous price increases.
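The first-choice step might be sketched like this (utilities, segment flags, and product counts are all invented for illustration, and MBC's own simulator differs in detail): each respondent is assigned entirely to their highest-utility product, and shares are counted within the retained segment.

```python
import numpy as np

# Hypothetical first-choice simulation restricted to a respondent segment.
def first_choice_shares(utilities, keep_mask):
    """utilities: (n_respondents, n_products); keep_mask: bool per respondent."""
    kept = utilities[keep_mask]
    winners = kept.argmax(axis=1)                 # each respondent's first choice
    counts = np.bincount(winners, minlength=utilities.shape[1])
    return counts / counts.sum()                  # shares within the segment

# Invented utilities for 4 respondents x 3 products.
utilities = np.array([[2.0, 1.0, 0.5],
                      [0.2, 1.5, 0.9],
                      [1.1, 0.4, 2.2],
                      [0.9, 0.8, 0.1]])
status_quo_choosers = np.array([True, True, False, True])  # drop one respondent

print(first_choice_shares(utilities, status_quo_choosers))
```

External information (self-explicated WTP, historical churn) can then be applied by reassigning individual predicted choices before the counting step, rather than by scaling aggregate shares.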
In the end, our simulation results yielded plausible reactions that were highly consistent with our client's experience.
...