After you have specified attributes, any a priori ordering, how many tasks to ask, and whether to use "single" or "pairwise" concept tasks, you are ready to generate a design. The design specifies the combination of attribute levels (profiles) shown in each conjoint question, for a single or multiple versions of the questionnaire.
To generate a design using the defaults, click Generate Design.
Number of Tasks (Questions)
Based on the number of attributes and levels in your study, CVA provides an initial recommended number of tasks (conjoint questions). The recommendation is based on asking three times as many tasks as parameters to be estimated, where the number of parameters to estimate is equal to:
Total number of levels - number of attributes + 1
The recommended number is ideal from a statistical standpoint, but it is often more than respondents can reasonably complete and so is not always used in practice. Use it as a guideline, then specify how many tasks you actually want to use within the software.
One of the most important decisions in a conjoint design is how many questions to ask. Asking too few questions results in noisy part-worth estimates; asking too many may overtax your respondents, leading to decreased data quality and/or abandoned surveys. CVA warns you if you do not ask at least 1.5 times as many tasks as parameters to be estimated, and does not let you ask fewer tasks than the number of parameters to estimate. See Selecting the Number of Tasks for more information.
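The arithmetic behind these thresholds can be sketched as follows; the attribute and level counts here are a hypothetical example, not taken from any particular study.

```python
# Sketch of CVA's task-count arithmetic as described above.
# Hypothetical study: five attributes with 3, 3, 4, 2, and 5 levels.
levels_per_attribute = [3, 3, 4, 2, 5]

num_attributes = len(levels_per_attribute)
total_levels = sum(levels_per_attribute)

# Parameters to estimate = total number of levels - number of attributes + 1
num_parameters = total_levels - num_attributes + 1

recommended_tasks = 3 * num_parameters      # statistical ideal (3x rule)
warning_threshold = 1.5 * num_parameters    # CVA warns below 1.5x
minimum_tasks = num_parameters              # hard lower bound

print(num_parameters)     # 13
print(recommended_tasks)  # 39
```

With this example, CVA would recommend 39 tasks, warn below 20, and refuse fewer than 13.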
Randomize Attribute Position within Concepts
Randomize Attribute Order
Randomize Attribute Order specifies whether to present the attributes in random presentation order within a concept. If this is selected, the attribute list is randomized once per respondent, and all tasks within a respondent's interview will display the attributes in that given order. This can be useful to control order effects across respondents.
First Randomized Attribute
The first attribute in the range of attributes to be shown in random presentation order. Specify a "1" if you wish all attributes to be randomized. If, for example, you want the first and second attributes always to appear as the first two attributes in a product concept, specify a "2."
Last Randomized Attribute
The last attribute in the range of attributes to be shown in random presentation order. Specify the last attribute number if you wish all attributes to be randomized. If, for example, you had a total of five attributes in your study and you wanted the fourth and fifth attributes always to appear as the last two attributes in a product concept, specify a "3."
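The First/Last Randomized Attribute settings above amount to shuffling only a contiguous range of the attribute list, once per respondent. A minimal sketch of that behavior (the function name is illustrative, not part of CVA):

```python
import random

def randomized_attribute_order(num_attributes, first, last, rng=random):
    """Return a per-respondent attribute display order in which only
    attributes first..last (1-based, inclusive) are shuffled; attributes
    outside that range keep their fixed positions."""
    order = list(range(1, num_attributes + 1))
    segment = order[first - 1:last]
    rng.shuffle(segment)
    order[first - 1:last] = segment
    return order

# Five attributes; attributes 4 and 5 always appear last (Last Randomized = 3):
order = randomized_attribute_order(5, first=1, last=3)
# e.g. [2, 1, 3, 4, 5] — attributes 4 and 5 never move
```

Because the shuffle happens once per respondent, every task in that respondent's interview displays the attributes in the same order.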
When you click Show Advanced Settings, the following fields are available:
Number of Versions (default=10): A version of the questionnaire represents a single series (block) of conjoint questions. If you want respondents to receive different sets of questions, you can request multiple versions (up to 10). If you are conducting paper-and-pencil studies, you probably do not want to manage more than a few different versions because of the increased hassle of dealing with unique questionnaire versions. However, when you are able, additional questionnaire versions decrease psychological order and context effects and thus improve your overall results. See the section below entitled "A Single Version or Multiple Versions?" for further guidance.
Design Seed (default=1): CVA's design generation algorithm requires a starting seed. You can use any integer from 1 to 9999. If you repeat the analysis using a different starting seed, you will usually obtain a slightly different (sometimes better) result.
Throw out Obvious Tasks (default = yes): CVA can exclude product concepts that are clearly better or worse than others from the questionnaire. To use this, you need to specify that certain attributes have a priori preference order.
Task Pool Multiplier (default=10): When generating a questionnaire version, the Task Pool Multiplier (multiplied by the number of requested tasks) controls how many unique tasks will be used in the pool of candidate tasks to include. For example, if you request 18 tasks with a Task Pool Multiplier of 10, 180 unique tasks will be searched among to find an optimal 18 tasks.
Version Pool Multiplier (default=10): This defines how many tries (passes) will be attempted from different starting points. For example, if you are requesting 10 questionnaire versions and the Version Pool Multiplier is 10, then 100 attempts will be made to find optimal versions of the questionnaire. The top 10 versions (in terms of design efficiency) will be used in the final plan.
Hints: The defaults we've specified tend to work well and quickly. But, you can often improve your questionnaire's design efficiency by asking CVA to try harder. You can increase the Task and Version Pool Multipliers to search deeper and longer for better solutions. You can also change the random design seed to repeat the process from different starting points.
CVA Design Procedure
The conjoint design is a critical component to the success of any conjoint project. Attributes must vary independently of each other to allow efficient estimation of utilities. A design with zero correlation between pairs of attributes is termed "orthogonal." Level balance occurs if each level within an attribute is shown an equal number of times. Designs that are orthogonal and balanced are optimally efficient.
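These two properties are easy to check directly on a design matrix. The sketch below tests a hypothetical design for zero correlation between attribute columns (orthogonality) and equal level frequencies (balance); note that for attributes with more than two levels, a proper orthogonality check would use coded (e.g. effects-coded) columns rather than raw level numbers.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A hypothetical 4-task design for two 2-level attributes
# (rows are tasks, columns hold attribute level codes):
design = [(1, 1), (1, 2), (2, 1), (2, 2)]
columns = list(zip(*design))

# Orthogonality: zero correlation between every pair of attribute columns.
for a, b in combinations(range(len(columns)), 2):
    print(correlation(columns[a], columns[b]))  # 0.0 -> orthogonal

# Level balance: each level within an attribute appears equally often.
for col in columns:
    print(Counter(col))  # Counter({1: 2, 2: 2}) -> balanced
```

A design passing both checks, like this full factorial, is optimally efficient for main effects.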
In the real world, it might not be possible to create a perfectly balanced, orthogonal design for a particular set of attributes and prohibitions consisting of a reasonable number of tasks. The CVA approach produces high quality designs automatically, although they probably will not be perfectly balanced or orthogonal. CVA lets you test the efficiency of a design before fielding your study. Testing the design also lets you study the impact of including prohibitions, or asking fewer than the recommended number of questions.
CVA provides an easy-to-use tool for generating well-balanced, "nearly-orthogonal" designs. CVA generates a pool of potential conjoint questions (from which the final design will be chosen) using a relatively simple procedure. If the total number of concepts to generate in the pool is greater than or equal to the number of total possibilities, a complete enumeration is done of all non-prohibited possibilities. If the pool represents a subset of all possible combinations, then for each question to be composed, CVA does the following: for each attribute, it picks a pair of levels randomly from among all permitted pairs that have been presented the fewest times, and (for pairwise designs) makes a random decision about which level appears on the left and which on the right. No pair of levels is repeated until all other permitted pairs have been shown. As a result, each pair of levels is shown approximately the same number of times, and each level from one attribute is equally likely to be shown with any level from another attribute.
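The pool-building step for pairwise designs can be sketched roughly as below. This is an illustrative simplification, not CVA's actual implementation: it assumes the two concepts in a task always show distinct levels of each attribute and that no prohibitions are in force.

```python
import random
from collections import Counter
from itertools import combinations

def generate_pairwise_pool(levels_per_attribute, pool_size, rng=random):
    """Rough sketch of building a pool of pairwise tasks: for each
    attribute, draw a level pair from among the least-shown pairs,
    then randomize which level goes left and which goes right."""
    shown = [Counter() for _ in levels_per_attribute]  # pair usage counts
    pool = []
    for _ in range(pool_size):
        left, right = [], []
        for attr, n_levels in enumerate(levels_per_attribute):
            pairs = list(combinations(range(1, n_levels + 1), 2))
            fewest = min(shown[attr][p] for p in pairs)
            candidates = [p for p in pairs if shown[attr][p] == fewest]
            pair = rng.choice(candidates)   # among least-shown pairs
            shown[attr][pair] += 1
            a, b = pair
            if rng.random() < 0.5:          # random left/right placement
                a, b = b, a
            left.append(a)
            right.append(b)
        pool.append((tuple(left), tuple(right)))
    return pool

# Hypothetical study: three attributes with 3, 3, and 2 levels.
pool = generate_pairwise_pool([3, 3, 2], pool_size=6)
```

Because a pair's count must reach the current minimum before it can repeat, no pair recurs until all other permitted pairs have been shown.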
CVA's designer uses the following steps to select efficient designs, given the questionnaire specifications:
1) CVA generates a pool of potential conjoint questions equal to, by default, 10 times the requested number of questions (assuming that many unique questions exist).
2) The D-efficiency of the design is calculated for the pool, excluding one conjoint question at a time. The one task that contributes least to the efficiency of the design is discarded, and the process is repeated until the desired number of tasks remains. (See Technical Notes about the CVA Designer for more information about D-efficiency.)
3) CVA then examines every potential 2-way swap of conjoint questions that remain with those that were discarded or are available in the pool of potential conjoint questions. CVA swaps any pairs of questions that result in an increased efficiency.
4) Next, CVA examines the frequency of level occurrences for each attribute. It investigates changing levels that are over-represented in the design to levels of the same attribute that are under-represented. Any changes that result in improved D-Efficiency (and are not prohibited) are retained.
5) For pairwise designs, CVA flips left and right concepts to improve the left/right balance of the design.
CVA repeats steps 1 through 5 multiple times and selects the best n solutions (where n is the total number of versions of the questionnaire you'd like to use). CVA uses re-labeling (trading pairs of levels within attributes across all tasks within a design) to create approximate aggregate level balance across all versions. (Relabeling doesn't change the efficiency of each individual design.) An additional check is performed to ensure that no two versions of the questionnaire are identical.
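The backward-elimination idea in step 2 can be sketched as follows. This is a simplified illustration under stated assumptions: it uses one common definition of D-efficiency, |X'X|^(1/p) / N for an N x p coded design matrix (1.0 for a balanced orthogonal design), which may differ in scaling from the statistic CVA reports.

```python
def determinant(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n, det = len(m), 1.0
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[pivot][i]) < 1e-12:
            return 0.0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return det

def d_efficiency(X):
    """|X'X|^(1/p) / N for an N x p coded design matrix X."""
    N, p = len(X), len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)]
           for i in range(p)]
    return determinant(XtX) ** (1.0 / p) / N

def backward_eliminate(pool, keep):
    """Step 2 above: repeatedly drop the task whose removal leaves the
    highest D-efficiency, until `keep` tasks remain."""
    tasks = list(pool)
    while len(tasks) > keep:
        best = max(range(len(tasks)),
                   key=lambda i: d_efficiency(tasks[:i] + tasks[i + 1:]))
        tasks.pop(best)
    return tasks

# Effects-coded 2x2 full factorial with intercept column: efficiency 1.0.
X = [[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]]
print(round(d_efficiency(X), 6))  # 1.0
```

Starting from a pool containing this factorial plus a redundant duplicate task, the elimination loop discards the duplicate and recovers the fully efficient four-task design.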
A Single Version or Multiple Versions?
Previous versions of our CVA software employed a single version (block) of the questionnaire design. Each respondent received the same questionnaire (though the order of the tasks could be randomized). From a statistical standpoint, a single version of the plan was all that was typically needed to ensure precise estimates of main effects. However, researchers recognize that respondents are human and therefore the quality of the estimated utilities will depend on controlling psychological/order/context effects. Randomizing the order of the tasks is a good way to control for context effects. However, these effects may be further reduced by using more than one version of the questionnaire.
In addition, CVA/HB (hierarchical Bayesian estimation of part-worth utilities) would seem to benefit from multiple versions. Because HB pools information across respondents, greater variation in the design matrix (across all respondents) may improve individual estimates through improved population estimates.
Multiple versioning is nothing new for users of Sawtooth Software's tools. In our CBC software, the default (when using computer interviewing) is to use 300 versions of the questionnaire. In reality, this is probably overkill, since the vast majority of the benefits of multiple versioning are captured after the first few versions. But, since CBC's algorithms for developing designs are generally fast and computer interviewing automatic, it is as easy to develop and field 300 versions as 4. With CBC, there is another good reason for including multiple versions of the questionnaire: choice-based designs can be stronger than traditional conjoint for estimating interaction effects, and additional versions of the questionnaire help stabilize those effects (assuming pooled estimation).
With CVA, there are benefits for including multiple versions of the questionnaire (to help reduce psychological effects), but there are reasons to believe that employing a very large number (such as 300) is not necessary.
1) CVA's search algorithm tries to find the optimally efficient set of questions given a requested number of tasks. In contrast, CBC uses a "build-up" approach that, while achieving very good design efficiency, doesn't attempt to find the optimal set of questions for a requested number of tasks. CVA is therefore better suited to choosing a smaller number of versions that are quite effective statistically.

2) CVA estimates only main effects (a single utility value for each level in the study), whereas CBC can estimate both main effects and first-order interactions. Estimating main effects alone requires fewer questionnaire versions.

3) CVA is generally a slower (and more thorough) way to generate designs than CBC. Generating a very large number of versions by default (such as 300) could take a significant amount of time.
Because of these arguments, CVA permits up to 10 different versions of the design. In our opinion, the benefits of using more than 10 versions with CVA would be minuscule. For two decades, previous versions of CVA have used single-version plans, with generally good results. Allowing a few additional versions is convenient when interviewing via computer, and should lead to slight improvements.
When fielding multiple versions of the questionnaire, it is desirable (but by no means a requirement) that each version of the questionnaire be completed by approximately the same number of respondents. When fielding your study on the Web, this happens automatically. Each respondent starting a CVA survey is given the next available design. Once the last design is reached, the next respondent receives the first design, and the process repeats.
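This round-robin assignment amounts to cycling through the version list; a one-line sketch (the function name is illustrative):

```python
def version_for_respondent(respondent_index, num_versions):
    """Cycle through versions 1..num_versions as respondents arrive
    (respondent_index starts at 0)."""
    return respondent_index % num_versions + 1

# With 4 versions, the first 10 respondents receive:
assignments = [version_for_respondent(i, 4) for i in range(10)]
# -> [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
```

Assuming completion rates are similar across respondents, this keeps the counts per version approximately equal.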
We have stressed the importance of including enough conjoint questions to ensure efficient designs. However, design efficiency is not the only reason for including two to three times as many questions as parameters to be estimated. All real-world respondents answer conjoint questions with some degree of error, so those observations beyond the minimum required to permit utility estimation are useful to refine and stabilize utility estimates.
While there is generally a positive relationship between the number of conjoint tasks in the design and D-efficiency, there are many exceptions. A design with few or no degrees of freedom can have 100% D-efficiency, meaning it is optimally efficient for estimating main-effect parameters; but that assessment ignores human error. By increasing the number of tasks, you give respondent errors more opportunity to cancel out. Increasing the number of tasks beyond the saturated orthogonal plan will slightly reduce the reported D-efficiency (from 100% to something slightly less than 100%), but the precision of the estimated parameters may be significantly improved due to the greater amount of information provided by each respondent.
To summarize our point, don't focus solely on D-efficiency. Good CVA designs foremost should include enough conjoint questions relative to the number of parameters to yield relatively precise estimates of part-worths. Given an adequate number of conjoint questions, we next focus on selecting a design with high D-efficiency.
Importing and Exporting Designs
You may import or export designs, as described in the section of this documentation entitled Importing/Exporting CVA Designs. This is useful if you need to use a particular set of questions developed by a colleague, created in a different piece of software, or taken from a design catalog. Importing designs is also a way to add user-specified holdout questions.