Q1: Is there a best-practices method for working with a study whose design takes a full day (in my case, 16 hours) to generate?
Q2: For the study below, is 16 hours on a single machine typical? Why is this taking so long?
I have a CBC study:
* Tasks: 20 random tasks
* Concepts: 4 concepts per task + an outside option
* Attributes: 10 attributes per concept
* Levels: About 6 levels per attribute (well-balanced across attributes)
* Versions: 30 survey versions
* Random task generation method: Balanced Overlap
* Everything is standard/default
It takes 16 hours to generate the design for this study (on a single machine: Intel i7, 2 cores, 16 GB RAM, running SSI Web 8.4.8). Am I doing something wrong? Does this surprise anyone?
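For a sense of scale, here is a rough back-of-envelope sketch using the numbers from the spec above. This is purely illustrative: the variable names are mine, the level count of exactly 6 per attribute is an assumption ("about 6" above), and Balanced Overlap's actual search strategy inside SSI Web is not shown here.

```python
# Back-of-envelope on the size of this design problem.
# Inputs are taken from the study spec above; "6 levels per
# attribute" is assumed to be exactly 6 for this sketch.

versions = 30
tasks_per_version = 20
concepts_per_task = 4          # excluding the outside option
attributes = 10
levels_per_attribute = 6       # assumption: exactly 6

# Total concept "slots" the design algorithm must fill:
total_concepts = versions * tasks_per_version * concepts_per_task
print(total_concepts)          # 2400

# Candidate full-profile concepts available for each slot:
candidates = levels_per_attribute ** attributes
print(f"{candidates:,}")       # 60,466,176
```

So the generator is choosing 2,400 concepts from a pool of roughly 60 million possible profiles while trying to balance level frequencies and overlap across 30 versions, which may help explain why a heuristic like Balanced Overlap takes far longer than Shortcut on the same spec.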
My workflow so far: while building the study, I reduced the number of tasks to 3, switched the random task generation method to "Shortcut," and cut the number of survey versions to 1. That let me generate designs quickly and iterate. Now that I'm comfortable with what I've created and am ready to host the survey, I'd like to generate the full design described above, but that takes 16 hours. I'm worried that if I've made a mistake, I'll have to go back and spend another 16 hours regenerating it.