Attribute level selection in ACBC if all levels are ordinally-scaled

Dear All,

I plan to launch an ACBC experiment to uncover drivers of hiring decisions. Participants have to select their preferred job candidates for a fictional analyst position based on six skill attributes, e.g., analytical skills, industry knowledge, interpersonal skills. Because my sample is highly selective, which leads to a rather small sample size (I can only survey experienced managers with hiring expertise from my university's database), I have identified ACBC as the method of choice.

However, I have two questions regarding the BYO task and selection of appropriate levels:

1)    As higher skills are obviously better than lower skills, I fear that all respondents will select the highest level for all attributes in the BYO task. All attribute levels are ordinally-scaled from best to worst (very good, good, reasonable). Would that be problematic for the screening and choice tournament? I assume that the lower levels for any attribute won’t be used much across all participants and experiment phases. Is there some sort of loss in information richness in the data if that happens?

2)    Right now I plan to use the three levels "very good", "good", and "reasonable" for 4 of the 6 attributes. I fear that any level lower than "reasonable" would be unacceptable to every respondent (who wants to recruit an analyst with "poor" analytical skills?). However, I am also concerned that respondents will be biased against selecting "reasonable", because it is the lowest level. Does anybody have experience with selecting appropriate levels for a comparable setting? I also considered using the levels "above average", "average", and "below average" (+ maybe "exceptional"). It would be very helpful to hear your opinion in case somebody has experience with, or a good feeling for, my research setting.

As I understand it, (A)CBC analysis was developed to study consumer preferences for products and services, where attributes are usually nominally scaled (favorite restaurant: Italian, Chinese, or Mexican?). I am therefore unsure about the specifics of my study setting and would warmly welcome any advice in that regard.

Thank you very much in advance and kind regards,
asked Mar 23, 2016 by floethy86 (140 points)

1 Answer

For attributes where you know that most everybody would pick the best level in the BYO, it can make sense to drop the attribute from the BYO.  You do that in the ACBC interface quite easily from the Attributes tab (by clicking "No" for an attribute in the "Include in BYO" column).  When you do this, the BYO question is not asked for that attribute and the experimental design algorithm samples evenly across the levels of this attribute.

Regarding the use of somewhat non-concrete terms like "above average", "average", and "below average": when using conjoint analysis, we'd prefer to describe attributes that are defined much more tightly. What does it really mean to be "below average" or "above average"? It makes the measurement less concrete. You will still get relative utility values for these attributes that may be useful for segmenting respondents and making some inferences, but I worry about the actionability of the results.
answered Mar 23, 2016 by Bryan Orme Platinum Sawtooth Software, Inc. (131,990 points)
Dear Bryan,

Thank you very much for the swift reply.

I understand and appreciate your points, but fear that they are difficult to apply in my research setting:

All attributes represent skill or knowledge items and feature identical scales (very good to reasonable). Following your suggestion, I would have to exclude all attributes from the BYO task, i.e., skip the BYO completely (which I had also considered before). However, there is an outside chance that some respondents do not want a genius in their department who frustrates everybody else or switches jobs quickly due to under-challenge. Thus, I would prefer to keep a BYO task.

Similarly, I see no better way than using non-concrete levels for my research subject: skills and knowledge are latent variables which are difficult, if not impossible, to measure precisely. Therefore, I am afraid I have to work with scales such as "good" or "average". I should probably include an explanation page in the experiment that defines the levels precisely, to reach a mutual understanding of their actual meaning. But as mentioned before, I am open to any other suggestions regarding level selection.

Kind regards,
Regular conjoint analysis works fine without a BYO to ascertain people's preferences for levels (or severe disdain for particular levels), so I do not see a problem with skipping the entire BYO section in ACBC for your purposes.  The other sections of the ACBC survey (screener, must-haves and must-avoids, and choice tournament) will capture that information.
Dear Bryan & forum members,

Based on your comments and after consulting a marketing professor familiar with ACBC, we indeed decided to skip the BYO section - thank you very much!

However, two quick questions have now arisen, related to the implications for the experimental design when the BYO section is skipped: Is the design recommendations table in the "Design Tab (ACBC)" section of the help file still valid?

1. When I test the survey design with 5 robotic respondents, the minimum number of times a level is shown is now 8. I followed your design recommendations for my setting (6 attributes with 3 levels each), i.e., 7 screening tasks with 4 concepts per task and a maximum of 14 concepts brought into the tournament, with 3 concepts per task. Can I reduce the number of screening tasks or choice tournament tasks now, given that you originally stated 3 as the minimum number of times each level should appear?

2. When doing a pre-test run myself, I could not find an answer pattern that triggers the must-have question. Is that normal or problematic? Also, usually only 0-2 unacceptable questions appear in my screening phase, although the numbers of unacceptables and must-haves are set to 4 and 3, respectively.

Thank you in advance for any help. As mentioned, in general I am interested in what I have to consider for the design of the screening tasks and choice tournament if the BYO section is skipped.

Best regards,
That's a good question about what happens with ACBC's recommended numbers of questions when you skip the BYO (and ACBC then samples evenly among all levels in the experimental design rather than oversampling the BYO-preferred levels).  I haven't put much thought or any research into that.

My feeling is that you shouldn't use the "minimum of 2 and preferably 3x per non-BYO level" rule when you skip the BYO section.  My guess is that each level should show at least 6 times if completely skipping the BYO.  This follows a recommendation from a good Bayesian statistician who used to work for us regarding what is needed in CBC designs to get good estimation at the individual level.
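As a sanity check on these rules of thumb, you could count level exposures directly in an exported design. A minimal sketch follows; the assumed data layout (one dict per concept shown, mapping attribute name to level) is hypothetical and not Sawtooth Software's actual export format, so adapt the loading step to whatever your export looks like:

```python
from collections import Counter

def level_exposure_counts(concepts, attribute_columns):
    """Count how many times each (attribute, level) pair appears
    across all concepts shown to one respondent."""
    counts = Counter()
    for concept in concepts:
        for attr in attribute_columns:
            counts[(attr, concept[attr])] += 1
    return counts

def levels_below_minimum(counts, minimum=6):
    """Return the (attribute, level) pairs shown fewer than `minimum` times,
    e.g. to flag designs that fall short of the 6x rule of thumb above."""
    return {pair: n for pair, n in counts.items() if n < minimum}
```

If the design is exported as CSV with one row per concept, `csv.DictReader` produces exactly the list of dicts this sketch expects; run it per respondent and inspect `levels_below_minimum` for any flagged pairs.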

To trigger the must-have questions, always reject all but one level of an attribute in the Screener section.  Then, ACBC will treat the level that was sometimes or always picked as "yes, a possibility" as a must-have candidate.

Once ACBC has ruled out all remaining levels as must-haves or must-avoids, it skips any remaining must-have or unacceptable questions.
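The trigger condition described above can be illustrated with a small sketch. This is hypothetical logic for intuition only, not Sawtooth Software's actual implementation: an attribute produces a must-have candidate when every concept the respondent marked "a possibility" shares one level of that attribute, while the respondent also saw (and rejected) concepts with other levels.

```python
def must_have_candidates(screener_responses, attributes):
    """screener_responses: list of (concept, accepted) pairs, where concept is
    a dict mapping attribute -> level and accepted is True when the respondent
    marked the concept 'a possibility'.  Returns {attribute: level} for each
    attribute whose accepted concepts all share a single level even though
    other levels of that attribute were shown."""
    accepted = [c for c, ok in screener_responses if ok]
    shown = [c for c, _ in screener_responses]
    candidates = {}
    for attr in attributes:
        accepted_levels = {c[attr] for c in accepted}
        shown_levels = {c[attr] for c in shown}
        # One consistently accepted level + at least one rejected alternative
        if len(accepted_levels) == 1 and len(shown_levels) > 1:
            candidates[attr] = next(iter(accepted_levels))
    return candidates
```

This also makes clear why a casual pre-test often never triggers the question: unless you deliberately accept only concepts sharing one level of some attribute (and reject the rest), no attribute satisfies the condition.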