
History of ACA

This is an excerpt from an article by our chairman, Rich Johnson, published in the 2001 Sawtooth Software Conference Proceedings.

Although ACA (Adaptive Conjoint Analysis) makes use of ideas that originated much earlier, the direct thread of its history began in 1969. Like much development work in marketing research, it began in response to a client problem that couldn't be handled with existing methodology.

The Problem

In the late '60s I was employed by Market Facts, Inc., and the client was in a durable goods business. In his company it was standard practice that whenever a new or modified product was seriously contemplated, a concept test had to be done. The client was responsible for carrying out concept tests, and he answered to a product manager who commissioned those tests.

The product manager would come to him and say: "We're going to put two handles on it, it's going to produce 20 units per minute, it will weigh 30 pounds, and be green." Our client would arrange to do a test of that concept, and a few weeks later come back with the results.

But before he could report them, the product manager would say: "Sorry we didn't have time to tell you about this, but instead of two handles it's going to have one and instead of 20 units per minute it will produce 22. Can you test that one in the next three weeks?" And so on.

Our client found that there was never time to do the required concept tests fast enough to affect the product design cycle. So he came to us with what he considered an urgent problem: the need to find a way to test all future product modifications at once. He wanted to be able to tell the product manager, "Oh, you say it's going to have one handle, produce 22 units per minute, weigh 30 pounds and be green? Well, the answer to that is 17 share points. Any other questions?"

Of course, today this is instantly recognizable as a conjoint analysis problem. But Green and Rao had not yet published their historic 1971 article, "Conjoint Measurement for Quantifying Judgmental Data," in JMR. The actual problem was also more difficult than the anecdote suggests: the client had 28 product features rather than just four, some with as many as 5 possible realizations.

Tradeoff Matrices

It seemed that one answer to this practical problem might lie in thinking about a product as being a collection of separate attributes, each with a specified level. This presented two immediate problems: a new method of questioning was needed to elicit information about values of attribute levels, and a new estimation procedure was needed for converting that information into "utilities."

Our solution came to be known as "Tradeoff Analysis." Although I wasn't yet aware of Luce and Tukey's work on Conjoint Measurement, that's what Tradeoff Analysis was.

To collect data, we presented respondents with a number of empty tables, each crossing the levels of two attributes, and asked respondents to rank the cells in each table in terms of their preference. We realized that not every pair of attributes could be compared, because that would have required an enormous number of matrices: all possible pairs of 28 attributes would have meant 378 of them. After much consideration, we decided to pair each attribute with three others, which resulted in 42 matrices for the first study. One has to experience filling out a 5x5 tradeoff matrix before he can really understand what the respondent goes through. If the respondent must fill out 42 of them, one can only hope he remains at least partially conscious through the task.

Although we learned a lot about how to improve our technique for future applications, this first study, conducted in 1970, was a success. Similar approaches were used in hundreds of other projects during the next several years.

We soon discovered that respondents had great difficulty in carrying out the ranking task within tradeoff matrices. Though simple to describe, the task was beyond the capability of many respondents to execute. We observed that many respondents simplified the task with what we called "patterned responses": ranking the rows within the columns, or the columns within the rows, thus avoiding the subtler between-attribute tradeoffs we were seeking. This difficulty appeared so severe that it motivated the next step in the evolution that resulted in ACA.

Pairwise Tradeoff Analysis

Ranking cells in a matrix can be difficult for respondents, but answering simple pairwise tradeoff questions is much easier. For example, we could ask whether a respondent would prefer a $1,000 laptop weighing 7 pounds or a $2,000 laptop weighing 3 pounds.

Consider two attributes like Price and Weight, each with three levels. In a 3x3 tradeoff matrix there are 9 possible combinations of levels, or cells. We could conceivably ask as many as 36 different pairwise preference questions about those 9 cells, taken two at a time.

However, if we can assume we know the order of preference for levels within each attribute, as we probably can for price and weight, we can avoid asking many of those questions.
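To make the saving concrete, here is a minimal sketch in Python (purely illustrative, not from the original study) that counts how many of those 36 questions are already answered once the within-attribute orderings are known. Levels are coded 0-2, with 0 assumed best on each attribute:

```python
from itertools import combinations, product

# Two attributes (say, price and weight), each with three levels.
# Level 0 is assumed best within each attribute.
cells = list(product(range(3), range(3)))    # the 9 cells of a 3x3 matrix
pairs = list(combinations(cells, 2))         # 36 possible pairwise questions

def dominates(a, b):
    """True if cell a is at least as good as cell b on both attributes."""
    return a[0] <= b[0] and a[1] <= b[1]

# If one cell dominates the other, the answer is already implied by the
# within-attribute orderings, so the question need not be asked.
informative = [(a, b) for a, b in pairs
               if not dominates(a, b) and not dominates(b, a)]

print(len(pairs))        # 36
print(len(informative))  # 9
```

Of the 36 possible questions, 27 have answers implied by dominance, leaving only 9 genuine tradeoffs in which one cell is better on one attribute and worse on the other.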

By the mid '70s computer technology had advanced sufficiently that it became feasible to do computer-assisted Tradeoff Analysis using pairwise questioning. A large project was undertaken for a U.S. military service branch to study various recruiting incentives. The respondents were young males who had graduated from high school but not college. A large number of possible incentives were to be studied, and we were concerned that the required number of tradeoff matrices would strain the capabilities of our respondents.

My associate at Market Facts, Frank Goode, studied strategies for asking pairwise questions that would be maximally informative, and wrote a question-selecting program that could be used to administer a pairwise tradeoff interview. We purchased what was then described as a "minicomputer," which meant that it filled only a small room rather than a large one. Respondents sat at CRT terminals at interviewing sites around the U.S., connected to a central computer by phone lines. Each respondent was first asked for within-attribute preferences, permitting all attributes subsequently to be regarded as ordered, and then he was asked a series of intelligently chosen pairwise tradeoff questions.

We found that questioning format to be dramatically easier for respondents than filling out tradeoff matrices. The data turned out to be of high quality and the study was judged a complete success. That study marked the beginning of the end for the tradeoff matrix.

Microcomputer Interviewing

By the late '70s the first microcomputers were becoming available, and it seemed that computer-assisted interviewing might finally become cost-effective. We purchased an Apple II and I began trying to produce software for a practical and effective computer-assisted tradeoff interview. My initial approach differed from the previous one in several ways:

  • First, it made more sense to choose questions that would reduce uncertainty in the part-worths being estimated, rather than choosing questions to predict how respondents might fill out tradeoff matrices. This was a truly liberating realization, which greatly simplified the whole approach.
  • Second, it made sense to update the estimates of part-worths after each answer. Each update took a second or two, but respondents appeared to appreciate the way the computer homed in on their values. (A simplified sketch of this adaptive loop appears after this list.)
  • Third, a "front-end" section was added to the interview, during which respondents chose subsets of attributes that were most salient to them personally, as well as indicating the relative importance of each attribute. We used this information to reduce the number of attribute levels to be taken into the paired-comparison section of the interview, as well as to generate an initial set of self-explicated part-worths which could be used to start the paired-comparison section of the interview.
  • Finally, those paired-comparison questions were asked using a graded scale, from "strongly prefer left" to "strongly prefer right." Initially we had used only binary answers, but found additional information could be captured by the intensity scale.
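The flavor of that adaptive questioning can be conveyed in a short sketch. What follows is a deliberately simplified illustration, not the actual ACA algorithm: candidate questions are encoded as difference vectors between two concepts, the graded answer is treated as a noisy reading of the utility difference, part-worths are re-estimated by least squares after every answer, and the next question is the candidate closest to predicted indifference, one simple way to make each answer informative. All names, encodings, and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_params = 6                # e.g. two attributes with three levels each
w_hat = np.zeros(n_params)  # in ACA this would be seeded with the
                            # self-explicated part-worths from the front end

# Hypothetical pool of candidate questions, each a difference vector
# (left concept minus right concept, dummy-coded).
candidates = [rng.choice([-1.0, 0.0, 1.0], size=n_params) for _ in range(50)]

# "True" part-worths, used only to simulate a respondent's answers.
w_true = rng.normal(size=n_params)

X, y = [], []  # history of questions asked and graded answers given

for _ in range(12):
    # Choose the candidate whose predicted preference is closest to
    # indifference: a near toss-up is the most informative question.
    i = min(range(len(candidates)), key=lambda j: abs(candidates[j] @ w_hat))
    x = candidates.pop(i)

    # Graded-scale answer, modeled as a noisy utility difference.
    answer = x @ w_true + rng.normal(scale=0.3)

    # Re-estimate part-worths from all answers so far (least squares).
    X.append(x)
    y.append(answer)
    w_hat = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
```

After a dozen questions, w_hat tracks the simulated respondent's values closely, echoing the way respondents felt the computer "homed in" on their preferences as the interview progressed.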

Small computers were still rare, so the experience of being interviewed had considerable entertainment value. We found that an effective way to sell research projects was to pre-program a conjoint interview for a prospective client's product category and take an Apple with us on the sales call. Once a marketing executive had taken the interview and had seen his own part-worths as revealed by the computer, he often couldn't wait to use the same technology in a project.

We purchased several dozen Apple computers, and began a fascinating adventure of using them all over the world, in many languages and in product categories of almost every description. Those early Apples were much less reliable than current-day computers. I could talk for hours about difficulties we encountered, but the Apples worked well enough to provide a substantial advance in the quality of data we collected.

ACA

In 1982 I retired as a marketing research practitioner, moved to Sun Valley, Idaho, and soon started Sawtooth Software, Inc. I had been fascinated by the application of small computers in the collection and analysis of marketing research data, and was now able to concentrate on that activity.

IBM had introduced their first PC in the early '80s, and it seemed clear that the "IBM-compatible" standard would become dominant, so we moved from the Apple II platform to the IBM DOS operating system.

ACA was one of Sawtooth Software's first products. The first version of ACA offered comparatively few options. Our main thought in designing it was to maximize the likelihood of useful results, which meant minimizing the number of ways users could go wrong. I think we were generally successful in that. ACA had the benefit of being developed over a period of several years, during which its predecessors were refined in dozens of actual commercial projects. Although there were some "ad hoc" aspects of the software, I think it is fair to say that "it worked."

I have been involved in one way or another with ACA for more than 30 years. During that time it has evolved from an interesting innovation to a popular tool used world-wide, and has been accepted by many organizations as a "gold standard." As I enter retirement, others are carrying on the tradition, and I believe you will see continuing developments to ACA that will further improve its usefulness.