This article is adapted from an article entitled "Conducting Full-Profile Conjoint Analysis over the Internet" scheduled to be published in the July issue of Quirk's Marketing Research Review.
If you've visited our web site lately, chances are you've seen the experimental conjoint study we've been conducting over the Internet. The subject of the study was credit cards, and its purpose was to compare pairwise full-profile (FP) conjoint and single-concept presentation. Both types of questionnaires can be designed and analyzed using our CVA system. The conclusions we've drawn apply to all computerized FP studies, whether over the Internet, DBM or CAPI.
In a 1997 survey of conjoint analysis usage in the marketing research industry, ACA (Adaptive Conjoint Analysis) was found to be the most widely used conjoint methodology in both the US and Europe. Traditional FP conjoint was also reported as a popular method. In general, we believe traditional FP conjoint is an excellent approach when the number of attributes is around six or fewer, while ACA is generally preferred for larger problems.
FP conjoint analysis studies can be done either as paper-based or as computerized surveys (Internet surveys, disk-by-mail, or CAPI). Because they typically involve fixed designs and, unlike ACA, are not adaptive, computerized FP surveys offer no real benefit over the paper-based approach in terms of the reliability or validity of the results; in fact, paper-based FP may work better than computerized FP. Real benefits, however, might be realized in survey development, data collection costs, and speed.
Pairwise and Single-Concept presentation are two popular approaches for FP conjoint. With Pairwise questions, respondents make comparative judgements regarding the relative acceptability of competing products. The Single-Concept approach probes the acceptability of a product, and de-emphasizes the competitive context. Both methods have proven to work well in practice, but we are unaware of any study other than this one that has directly compared these two approaches.
Details of Experiment
We designed an Internet survey to compare the Pairwise and Single-Concept approach for computerized FP conjoint analysis. The subject for our study was credit cards, with the following attribute levels:
| Brand | Annual Fee | Interest Rate | Credit Limit |
|---|---|---|---|
| VISA | No annual fee | 10% interest rate | $5,000 credit limit |
| Mastercard | $20 annual fee | 14% interest rate | $2,000 credit limit |
| Discover | $40 annual fee | 18% interest rate | $1,000 credit limit |
Respondents completed both Pairwise and Single-Concept conjoint questions (in rotated order). Additionally, holdout choice sets were administered both before and after the traditional conjoint questions. A total of 280 respondents completed the survey. Respondents self-selected for the survey, which was launched from a hyperlink on Sawtooth Software's home page. This sampling strategy would admittedly have been poor had we been interested in collecting a representative sample. But the purpose of our study was not to achieve outwardly projectable results, but rather to compare the within-respondent reliability of alternative approaches to asking FP computerized conjoint.
Measuring the Reliability of Conjoint Methods
Reliability and validity are two terms often used to characterize response scales or measurement methods. Reliability refers to getting a consistent result in repeated trials. Validity refers to achieving an accurate or "true" prediction. Our study focuses only on issues of reliability.
Holdout conjoint (or choice) tasks are a common way to measure reliability in conjoint studies. We call them "holdout" tasks because we don't use them for estimating utilities. We use holdouts to check how well conjoint utilities can predict answers to observations not used in utility estimation. If we ask some of the holdout tasks twice (at different points in the interview), we also gain a measure of test-retest reliability.
We included a total of three repeated holdout choice questions in our Internet survey. These displayed three credit cards and asked respondents to choose the one they would most likely sign up for. Respondents on average answered these holdouts the same way 83.0% of the time. This test-retest reliability is in line with figures reported in other methodological studies we've seen that were not collected over the Internet. But one can argue that our respondents (marketing and market research professionals) were a well-educated and careful group. We cannot conclude from our study that Internet interviewing is as reliable as other methods of data collection.
We use the holdout choice tasks to test the reliability of our conjoint utilities. We would hope that the conjoint utilities can accurately predict answers to the holdout questions. We call the percent of correct predictions the holdout hit rate. Some have referred to hit rates as a validity measurement, but prediction of holdout concepts asked in the same conjoint interview probably says more about reliability than validity.
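The hit-rate logic can be sketched in a few lines of Python. This is a simplified illustration, not the CVA system's actual estimation code: the part-worths and the holdout task below are hypothetical, and a real study would score each respondent's own utilities against each of their recorded holdout choices.

```python
# Sketch of a holdout hit-rate calculation (hypothetical data).

def concept_utility(utilities, concept):
    """Sum the part-worths of the levels making up one product concept."""
    return sum(utilities[level] for level in concept)

def hit_rate(utilities, holdout_tasks, choices):
    """Fraction of holdout tasks where the highest-utility concept
    matches the concept the respondent actually chose."""
    hits = 0
    for task, chosen in zip(holdout_tasks, choices):
        predicted = max(range(len(task)),
                        key=lambda i: concept_utility(utilities, task[i]))
        hits += (predicted == chosen)
    return hits / len(holdout_tasks)

# Hypothetical part-worths for one respondent (worst level of each attribute at 0)
utilities = {
    "No annual fee": 104, "$20 annual fee": 44, "$40 annual fee": 0,
    "10% interest": 55, "14% interest": 30, "18% interest": 0,
    "$5,000 limit": 64, "$2,000 limit": 27, "$1,000 limit": 0,
}

# One holdout task showing three credit-card concepts
task = [
    ["No annual fee", "18% interest", "$1,000 limit"],   # utility 104
    ["$20 annual fee", "10% interest", "$5,000 limit"],  # utility 163
    ["$40 annual fee", "14% interest", "$2,000 limit"],  # utility 57
]
print(hit_rate(utilities, [task], [1]))  # respondent chose concept 1 -> 1.0
```

Averaging this fraction across respondents and tasks gives the overall hit rates reported below.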
The holdout hit rates for the Pairwise and Single-Concept approach were 79.3% and 79.7%, respectively. This is a virtual tie; the difference is not statistically significant. These findings suggest that both methods perform equally well in predicting holdout choice sets.
In addition to completing conjoint tasks, we asked for qualitative evaluations of the Pairwise versus the Single-Concept approach. Respondents perceived that the Pairwise questions took only 13% longer than the Singles. We asked a battery of questions such as whether respondents felt the conjoint questions were enjoyable, easy, frustrating, or whether the questions asked about too many features at once. We found no significant differences on any of the qualitative dimensions for Pairwise vs. Single-Concept presentation.
Conjoint Importances and Utilities
We calculated attribute importances in the standard way, by percentaging the differences between the best and worst levels for each attribute. Conjoint importances describe how much impact each attribute has on the purchase decision, given the range of levels we specified for the attributes. Importances and utilities for Pairs vs. Single-Concept presentation were as follows:
| Level | Pairs | Single-Concept |
|---|---|---|
| No annual fee | 104 | 104 |
| $20 annual fee * | 44 | 34 |
| $40 annual fee | 0 | 0 |
| 10% interest rate | 55 | 55 |
| 14% interest rate | 30 | 30 |
| 18% interest rate | 0 | 0 |
| $5,000 credit limit | 64 | 67 |
| $2,000 credit limit | 27 | 29 |
| $1,000 credit limit | 0 | 0 |
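The "percentaging" of utility ranges described above can be shown concretely. The sketch below uses the Pairwise utilities reported in the table; note that brand utilities are not listed there, so these figures illustrate the method rather than reproduce the study's full importance numbers.

```python
# Sketch: attribute importances computed by "percentaging" the
# best-minus-worst utility range of each attribute.
# Utilities are the Pairwise part-worths from the table above
# (brand omitted, since its utilities are not listed).

pairs_utilities = {
    "Annual Fee":    [104, 44, 0],
    "Interest Rate": [55, 30, 0],
    "Credit Limit":  [64, 27, 0],
}

ranges = {attr: max(u) - min(u) for attr, u in pairs_utilities.items()}
total = sum(ranges.values())
importances = {attr: 100 * r / total for attr, r in ranges.items()}

for attr, imp in importances.items():
    print(f"{attr}: {imp:.1f}%")
```

By construction the importances sum to 100%, and an attribute's importance grows with the spread between its best and worst levels, given the ranges of levels chosen for the study.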
The only significant difference for either conjoint importances or utilities between the two full-profile methods occurred in the utility for the middle level of annual fee ($20). In a presentation at our 1997 Sawtooth Software Conference, Joel Huber of Duke University argued that respondents may adopt different response strategies for sets of products versus Single-Concept presentation. He argued that when faced with comparisons, respondents may simplify the task by avoiding products with particularly bad levels of attributes. Annual fee was the most important attribute. The larger gap between the worst and middle level (44-0) for Pairs versus Single-Concept (34-0) is statistically significant at the 99% confidence level (t=3.93) and supports Huber's "undesirable levels avoidance" hypothesis.
Our data tell a comforting story, suggesting that both computerized Pairwise and Single-Concept FP ratings-based conjoint are equally reliable and result in the same importances and roughly the same utilities. Computerized FP conjoint seems to have worked well for a small design such as our credit card study. Given that the researcher has determined that the Internet is an appropriate vehicle for interviewing a given population, our findings suggest that FP conjoint can be successfully implemented via the Internet for a small study of four attributes.