How to identify screening tasks in ACBC data in CSV format?

As I'm planning some manipulations on my ACBC response data, I exported my responses as a .cho file and loaded that into CBC/HB in order to export a CSV version of the design including responses.
Although my design settings specified 8 screening tasks with 4 concepts each, the generated CSV has more than 32 "concept vs. none" tasks for almost all respondents. I figure it must have something to do with unacceptables/must-haves, as I did not use any constructed lists or other restrictions.
Can somebody please advise on the structure of the CSV ACBC responses? Which of the 32+x tasks are in fact screening tasks, and what are the rest?
Thanks!
asked Aug 8, 2016 by alex.wendland Bronze (2,080 points)
Hi all, the last time I had an issue related to the ACBC-specific format of the .cho file and the corresponding .csv response format, I was helped by Aaron, I think. Back then I wasn't able to find any documentation on how the specific characteristics and information generated in ACBC are stored. Is there still no documentation that someone can refer me to?
Thanks,
Alex

1 Answer

0 votes
Screening concepts are all coded up as the individual concept vs. the none option (when you select "not a possibility", it is treated as choosing the none option). So all 32+ of those tasks are from the screening section of ACBC.
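
If you want to sanity-check this in the exported file, here is a minimal Python sketch of how you might tally the "concept vs. none" tasks per respondent. This is not Sawtooth's format spec: the file name, the column names (sys_RespNum, Task, Att1, ...), the one-row-per-concept layout, and the assumption that the none option is coded as all-zero attribute levels are all guesses you would need to adjust to your own export.

import csv
from collections import defaultdict

ATTR_COLS = ["Att1", "Att2", "Att3"]  # replace with your attribute columns

# Group rows by (respondent, task); assumed layout: one row per concept.
tasks = defaultdict(list)
with open("acbc_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        tasks[(row["sys_RespNum"], row["Task"])].append(row)

screener_counts = defaultdict(int)
for (resp, task), concepts in tasks.items():
    # Treat a task as a screening task if it has exactly two "concepts"
    # and one of them is the none option (all attribute levels zero,
    # under our assumed coding).
    has_none = any(all(int(c[a]) == 0 for a in ATTR_COLS) for c in concepts)
    if len(concepts) == 2 and has_none:
        screener_counts[resp] += 1

for resp, n in sorted(screener_counts.items()):
    print(f"Respondent {resp}: {n} 'concept vs. none' (screening) tasks")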

If a respondent marks a level as either a must-have or unacceptable, the software goes through any screening concepts they have not yet answered and auto-answers any that are not a possibility according to the must-have or unacceptable rules the respondent has given. It then generates new concepts to replace the auto-answered ones. So you might see more than 32 screening tasks in the .cho file.
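
To illustrate the rule above, here is a small sketch of the auto-answer logic (an illustration only, not Sawtooth's actual code): a screening concept is treated as "not a possibility" if it contains any level the respondent marked unacceptable, or lacks a level they marked as a must-have. All the attribute and level names below are made up.

def auto_answer(concept, unacceptables, must_haves):
    """Return "not a possibility" if the rules decide the concept,
    else None (the respondent has to answer it themselves)."""
    # Any unacceptable level disqualifies the concept outright.
    for attr, level in concept.items():
        if level in unacceptables.get(attr, set()):
            return "not a possibility"
    # Missing any must-have level also disqualifies it.
    for attr, required in must_haves.items():
        if concept.get(attr) != required:
            return "not a possibility"
    return None  # not decidable by the rules; shown to the respondent

# Hypothetical example: "Red" is unacceptable and the brand must be "Acme".
unacceptables = {"Color": {"Red"}}
must_haves = {"Brand": "Acme"}
print(auto_answer({"Brand": "Acme", "Color": "Red"}, unacceptables, must_haves))   # not a possibility
print(auto_answer({"Brand": "Acme", "Color": "Blue"}, unacceptables, must_haves))  # None

Each auto-answered concept is then replaced by a freshly generated one, which is why the screening section can grow beyond the 8 x 4 = 32 concepts configured in the design settings.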

You can find a brief description of how we code up the ACBC exercise at https://www.sawtoothsoftware.com/help/lighthouse-studio/manual/index.html?codingtheinformationforpar.html.
answered Aug 10, 2016 by Jeff Forkner Bronze (2,875 points)
Thanks for the response Jeff!
Just a (hopefully) quick follow-up:
How would I identify which of the screening tasks in the response data were auto-answered and which are actual, explicit respondent choices? Would I have to use the regular data export of all sys_ACBC variables as a reference, and if so, which ones?
Thanks again!
Alex
...