How is Adaptive Choice-Based Conjoint data stored in the database?
Under the Help menu of Lighthouse Studio, there is an ACBC Sample Study which contains sample data. Let's look at this survey to see what's going on behind the scenes. This study has 10 attributes with 3 to 4 levels per attribute, as follows:
- Size:
- 14 inch screen, 5 pounds
- 15 inch screen, 6 pounds
- 17 inch screen, 8 pounds
- Processor:
- Intel Core 2 Duo T5600 (1.86GHz)
- Intel Core 2 Duo T7200 (2.00GHz)
- Intel Core 2 Duo T7400 (2.16GHz)
- Intel Core 2 Duo T7600 (2.33GHz)
- Operating System:
- Vista Home Basic
- Vista Home Premium
- Vista Ultimate
- Memory:
- 512 MB
- 1 GB
- 2 GB
- 4 GB
- Hard Drive:
- 80 GB
- 100 GB
- 120 GB
- 160 GB
- Video Card:
- Integrated video, shares computer memory
- 128MB Video card, adequate for most use
- 256MB Video card for high-speed gaming
- Battery Life:
- 3 hour
- 4 hour
- 6 hour
- Productivity Software:
- Microsoft Works
- Microsoft Office Basic (Word, Excel, Outlook)
- Microsoft Office Small Business (Basic + PowerPoint, Publisher)
- Microsoft Office Professional (Small Bus + Access database)
- Price: (summed pricing attribute)
Here is some additional information about the design. It featured 8 screening tasks, with five concepts per screening task. The "minimum attributes to vary from BYO selections" setting was 2, and the maximum was 4. It used a "Mixed Approach" BYO product modification strategy. It allowed 4 unacceptables and 5 must-haves, with a maximum of 20 product concepts to be brought into the choice tournament. It displayed three concepts per choice task and had zero calibration concepts. The BYO concept was included in the choice tournament.
Now, let's look at part of the data record for respondent #15005:
sys_RespNum is the unique identifier assigned to each data record as it is created on the server.
sys_SequentialRespNum is the sequential number given to the data as it is imported into Lighthouse Studio.
What this means is that the third level was chosen for the first BYO attribute. In this case, the attribute label was Size and the level label was "17 inch screen, 8 pounds."
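The lookup from a stored BYO answer back to its label can be sketched like this. This is a minimal Python sketch, not Lighthouse Studio's own API; the list simply hard-codes the first attribute's level labels from the sample study, and the variable names are hypothetical:

```python
# Level labels for the first BYO attribute (Size), copied from the study.
size_levels = [
    "14 inch screen, 5 pounds",
    "15 inch screen, 6 pounds",
    "17 inch screen, 8 pounds",
]

# The stored BYO answer is a 1-based level index; respondent #15005 stored a 3.
byo_answer = 3
chosen_label = size_levels[byo_answer - 1]
print(chosen_label)  # -> 17 inch screen, 8 pounds
```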
Next we see the Screening Section. In this section respondents are not asked to make final choices, but rather to build a consideration set of product concepts by indicating whether each one is "a possibility" or "not a possibility." According to the design settings, each respondent will see 8 screening tasks with five concepts per task. Let's look at the first task.
Notice the two values. The first parameter contains a 1 if it was marked as a possibility and a zero if it wasn't. The second parameter is the concept number. So for concepts 1 through 5, concepts 1 and 4 were chosen as possibilities.
In the second screening task, the respondent saw concepts 6 through 10 and liked 6 and 9. In the third screening task, the respondent saw concepts 11 through 15 and chose 11 and 14. Hmm. This is the third time in a row this respondent chose the first and fourth items. Could this be a lazy "straight-liner"? Maybe. Later, you may want to look at this respondent's answers and see if you find similar patterns. If so, you may want to discard or discount this respondent's answers.
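A quick check for this kind of straight-lining can be scripted. The sketch below assumes each screening task has already been decoded into (possibility flag, concept number) pairs as described above; the data layout and function names are illustrative assumptions, not a Lighthouse Studio API:

```python
# Three screening tasks, each a list of (possibility_flag, concept_number)
# pairs, matching respondent #15005's answers described above.
tasks = [
    [(1, 1), (0, 2), (0, 3), (1, 4), (0, 5)],       # task 1: positions 1 and 4
    [(1, 6), (0, 7), (0, 8), (1, 9), (0, 10)],      # task 2: positions 1 and 4
    [(1, 11), (0, 12), (0, 13), (1, 14), (0, 15)],  # task 3: positions 1 and 4
]

def picked_positions(task):
    """Return the 1-based within-task positions marked as possibilities."""
    return tuple(i + 1 for i, (flag, _) in enumerate(task) if flag == 1)

patterns = [picked_positions(t) for t in tasks]
if len(set(patterns)) == 1:
    # Identical positions chosen in every task -> possible straight-liner.
    print("Warning: possible straight-liner")
```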
Next we see laptop_MustHave1, which is the first of five possible "Must Have" questions. This question appears only if the respondent has indicated that certain attribute levels (or a range of levels for ordered attributes) are possibilities. In this case, nothing has been marked as a "Must Have."
Next we see another screening task. This time, the respondent chose the first, second, and fifth options, which were concepts 16, 17, and 20.
And here is our second Must Have task. However, notice that nothing is recorded. This is a null value, which means the question wasn't asked: the software determined that the items shown in the last task did not conform to the must-have logic. In other words, nothing changed since the last must-have question, so it was skipped entirely.
This is our first Unacceptable task. In this case, the respondent was presented with a subset of characteristics. The 9th characteristic was chosen as unacceptable, so any existing concept containing that element is stricken from the consideration set.
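The pruning step can be sketched as follows. The concept representation and the specific unacceptable level here are hypothetical; the point is only that every concept containing the unacceptable element drops out of the consideration set:

```python
# A toy consideration set: each concept is a dict of attribute -> level.
consideration_set = [
    {"id": 1, "battery": "3 hour", "ram": "2 GB"},
    {"id": 4, "battery": "6 hour", "ram": "2 GB"},
    {"id": 6, "battery": "3 hour", "ram": "4 GB"},
]

# Hypothetical unacceptable level: (attribute, level) pair.
unacceptable = ("battery", "3 hour")

# Strike every concept containing the unacceptable element.
attr, level = unacceptable
consideration_set = [c for c in consideration_set if c.get(attr) != level]
print([c["id"] for c in consideration_set])  # -> [4]
```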
And here was our fifth screening task. The first and fifth options were chosen, concepts 21 and 25.
There was a third Must Have task, but it was skipped again because nothing seemed to change. It was followed by the second Unacceptable task. This time, the fifth element was marked as Unacceptable.
And this took us to the sixth Screening Task. The respondent chose the first and third items, which were items 26 and 28.
Choice Tournament Section
|sys_ACBC_laptop_BYO_1_prices|[750, 750, 1000]|
|sys_ACBC_laptop_BYO_2_prices|[0, 0, 50, 100]|
|sys_ACBC_laptop_BYO_3_prices|[0, 100, 300, 550]|
|sys_ACBC_laptop_BYO_4_prices|[0, 50, 100]|
|sys_ACBC_laptop_BYO_5_prices|[0, 100, 250, 400]|
|sys_ACBC_laptop_BYO_6_prices|[0, 50, 100, 150]|
|sys_ACBC_laptop_BYO_7_prices|[0, 50, 200]|
|sys_ACBC_laptop_BYO_8_prices|[0, 100, 200]|
|sys_ACBC_laptop_BYO_9_prices|[0, 150, 250, 300]|
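Under summed pricing, a concept's price is built up from the per-level prices of the levels it contains. The sketch below uses the price arrays shown above; the chosen level indices are made up for illustration, and it ignores any base price or random price variation the design may also apply:

```python
# Per-attribute level prices, copied from the sys_ACBC_laptop_BYO_*_prices
# fields above (attributes 1 through 9; price itself is the 10th attribute).
byo_prices = [
    [750, 750, 1000],
    [0, 0, 50, 100],
    [0, 100, 300, 550],
    [0, 50, 100],
    [0, 100, 250, 400],
    [0, 50, 100, 150],
    [0, 50, 200],
    [0, 100, 200],
    [0, 150, 250, 300],
]

# Hypothetical concept: a 1-based level index per attribute.
chosen_levels = [3, 1, 2, 1, 1, 2, 1, 2, 1]

# Summed price = sum of the prices of the chosen levels.
total = sum(prices[level - 1] for prices, level in zip(byo_prices, chosen_levels))
print(total)  # -> 1250
```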
Other ACBC Data
|sys_StartTime|13 Feb 2007 - 19:35:27 MST|
|sys_EndTime|13 Feb 2007 - 19:47:55 MST|
|sys_ElapsedTime|0h 12m 28s|
The last three fields contain the time data:
sys_StartTime is the time when the respondent's data record was created,
sys_EndTime is the last time survey data was submitted for this respondent, and
sys_ElapsedTime is the total amount of time the respondent has spent in the survey.
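As a sanity check, sys_ElapsedTime should equal the difference between the other two fields. A small sketch, using the timestamps from the record above (the format string is an assumption based on how the values are displayed, with the time zone dropped for simplicity):

```python
from datetime import datetime

# Parse the displayed start/end timestamps (time zone omitted).
fmt = "%d %b %Y - %H:%M:%S"
start = datetime.strptime("13 Feb 2007 - 19:35:27", fmt)
end = datetime.strptime("13 Feb 2007 - 19:47:55", fmt)

# Elapsed time, formatted like the sys_ElapsedTime field.
delta = end - start
minutes, seconds = divmod(int(delta.total_seconds()), 60)
print(f"0h {minutes}m {seconds}s")  # -> 0h 12m 28s
```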