
Is there a straightforward way to create holdouts in ACA?

I'm not particularly familiar with ACA, but I'd like a way to better understand the validity of an individual's choices - mainly to try to detect when folks are just answering randomly instead of responding to the stimuli.

I do not see a straightforward way to test fit, or an equivalent of hit rate, in ACA.  I suppose I could add a fake ACA question, record the response, and then compare it to what the model would predict for that individual.
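Something like this is what I have in mind - a rough sketch in Python, where the part-worths and concepts are entirely made up for illustration (not anything exported from ACA):

# Score one holdout pair against an individual's estimated utilities.
# All names and numbers here are hypothetical.

def concept_utility(utilities, concept):
    """Sum the part-worths for the levels shown in a concept."""
    return sum(utilities[(attr, level)] for attr, level in concept.items())

# Hypothetical part-worths for one respondent, keyed by (attribute, level)
utilities = {
    ("brand", "A"): 0.8, ("brand", "B"): -0.8,
    ("price", "$10"): 1.2, ("price", "$15"): -1.2,
}

left = {"brand": "A", "price": "$15"}    # holdout concept shown on the left
right = {"brand": "B", "price": "$10"}   # holdout concept shown on the right

# The model "predicts" whichever concept has the higher total utility
predicted = "left" if concept_utility(utilities, left) > concept_utility(utilities, right) else "right"
observed = "left"    # the respondent's actual holdout answer

hit = (predicted == observed)    # tally hits across holdouts for a hit rate
print("Hit!" if hit else "Miss.")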

If there is a better or more established way to do this with ACA, please point me to the relevant information.  Most of the time I find the answer is right in front of me but I'm looking past it.  Thanks!

Cheers!
asked Jan 15, 2013 by salinmooch (180 points)

1 Answer

+1 vote
In SSI Web, it's possible to insert questions anywhere during the ACA questionnaire.  So, it would be possible to insert a holdout pairwise ACA conjoint question within the ACA Pairs section (formatted just like ACA Pairs).  You'd use the Free Format question type within SSI Web to script that up, borrowing the HTML from another ACA Pairs question (with some additional tweaks needed).

Or, you could add holdout CBC-style choice tasks after the ACA questions, using Free Format questions (or CBC Fixed Tasks, by inserting a CBC exercise after the ACA exercise).

Also, if you are using ACA/HB software to compute ACA utilities, you can set it to use only the ACA Pairs information to compute utilities.  This will give you an R-squared for each individual based on the internal fit of the utilities to that individual's ACA Pairs questions.  But why not also leverage the Priors ratings in ACA/HB as constraints?  Then ACA/HB will give you an R-squared that incorporates the consistency of responses across the ACA Priors and Pairs questions, giving you an even more complete view of respondent internal consistency.
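For intuition, here's a rough sketch of the kind of thing an internal-fit statistic measures.  This is illustrative Python only, not our estimation code; the recentered 9-point Pairs scale and all the numbers are just assumptions:

import numpy as np

# Illustrative only: one respondent's Pairs questions.
# utility_diffs = left-minus-right utility implied by the estimated part-worths
# pair_ratings  = observed Pairs answers, recentered so 0 means indifferent
#                 (e.g., a 9-point response minus the midpoint of 5)
utility_diffs = np.array([1.5, -0.4, 2.1, -1.8, 0.3])
pair_ratings = np.array([3.0, -1.0, 4.0, -3.0, 1.0])

# Squared correlation between what the utilities predict and what the
# respondent said: near 1 = very consistent, near 0 = essentially random.
r = np.corrcoef(utility_diffs, pair_ratings)[0, 1]
print(f"Internal-fit R-squared for this respondent: {r ** 2:.2f}")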

I don't place as much confidence in the standard "R-Squared" that ACA uses to measure consistency between the early parts of the ACA survey (Priors + Pairs) and the Calibration Concepts (purchase intent questions).  It's quite possible that folks just don't understand the Calibration Concepts questions very well, and that can foul up the R-Squared measure even when a person was quite consistent in the earlier parts of the ACA questionnaire.

Just some ideas to chew on.
answered Jan 15, 2013 by Bryan Orme Platinum Sawtooth Software, Inc. (138,915 points)
Thanks Bryan,

I have already ruled out using the Calibration Concepts for testing internal validity - my pilot folks, who tend to be older disabled patients, just did not get it - that is why I was looking at other approaches to check this.

I like the idea of the CBC approach to test validity, but I'm afraid of making the survey much longer at this point.  I think I'll stick with using the priors as constraints and examining the R-squared, as well as adding a holdout in the form of an ACA Pairs question.  I'll give adapting an ACA Pairs question a try - what would the additional tweaks be?

I apologize for being dense, but is there info on how the R-squared used in the HB estimation is calculated?  I assume a larger number is better - I see random answers result in an R-squared of 60 and "consistent answers" result in an R-squared of 600 or more - but I'm not sure what the threshold is for a good fit.  I could not find an answer in the ACAHBtech white paper.  Anywhere else I could look?
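In the meantime, here's roughly how I'm thinking of setting a cutoff, assuming I can export a per-respondent fit value: generate some fake respondents who answer the Pairs at random, run them through the same estimation, and flag real respondents whose fit is no better than the random ones.  All the numbers below are made up:

import numpy as np

# Hypothetical per-respondent fit values exported from the real data
real_fits = np.array([612.0, 540.0, 58.0, 701.0, 95.0])

# Fit values from fake respondents who answered the same design at random
# (produced by running random answers through the same estimation)
random_fits = np.array([44.0, 71.0, 63.0, 55.0, 80.0])

# Flag anyone who fits no better than the bulk of the random responders
cutoff = np.percentile(random_fits, 95)
flagged = np.where(real_fits <= cutoff)[0]
print(f"Cutoff: {cutoff:.0f}; flagged respondent indices: {flagged.tolist()}")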

Cheers!
...