
Pre-Conference Workshops (Monday, Tuesday)

CBC Software Workshop

Monday and Tuesday (8:00 AM - 5:00 PM)

Brian McEwan, Sawtooth Software, Inc.

Megan Peitz, Sawtooth Software, Inc.

If you are relatively new to choice-based conjoint (CBC) or just getting started, join us for two days of hands-on practice with Lighthouse Studio, our flagship survey platform that can include the popular CBC component and market simulator. We’ll cover the main aspects of designing, programming, and analyzing CBC studies.  You will have an opportunity to program CBC questionnaires individually as well as analyze data from a real CBC study in a team-oriented case study session.  We’ll provide coverage of counting analysis, logit, latent class, and HB.  The instructors will share best practices, pitfalls to avoid, and experiences based on many years of technical support and consulting.

Attendees receive an evaluation copy of the software that they may use for 90 days (for non-commercial studies and evaluation purposes only). Attendees also receive a free copy of the “Getting Started with Conjoint Analysis” book.  Limited to 25 participants.

Turbo Choice Modeling

Monday and Tuesday (8:00 AM - 5:00 PM)

Keith Chrzan (Lead Panelist and Moderator), Sawtooth Software, Inc.

Joel Huber, Duke University

Kevin Lattery, SKIM Group

Peter Kurz, Kantar TNS

Scott Ferguson, NC State University

Bryan Orme, Sawtooth Software, Inc.

Jane Tang, MARU/VCR&C

Sawtooth Software has invited some of the brightest conjoint and choice modeling researchers to join us for two days of instruction and collaboration regarding the coolest things happening in the choice world. While most topics will mention Sawtooth Software’s CBC- and MBC-related programs, the principles generalize to any other software for choice modeling. The sessions will emphasize practical issues and practical solutions more than theoretical academic research.

Key to the success and value of this event is the core group of researchers whom we have invited to attend and participate as panelists. These researchers have contributed to past Sawtooth Software events and are leading experts in choice modeling. Most importantly they are plain-spoken and insightful about choice methodologies. With this experienced group of choice modelers, you can bet that the discussion will be lively and instructive. Each of the panelists will be giving presentations and will also participate in the panel discussion.

Topics include: what we’ve learned from eye-tracking in CBC, system 1 vs. 2 thinking, convergence in HB, price optimization with Nash Equilibrium, optimization algorithms, upper-level model and context effects, bandit MaxDiff, searching for interaction terms with HB analysis, dual-response price, sparse/express MaxDiff, making CBC more engaging. (Note: these topics are covered over March 5-6; you must attend both days to receive the full training.)

Advanced Lighthouse Studio Workshop

Tuesday (8:00 AM - 5:00 PM)

Justin Luster, Sawtooth Software, Inc.

David Squire, Sawtooth Software, Inc.

Lighthouse Studio is a powerful application that has been designed to be very flexible. Custom code can be added to modify the appearance and functionality of your surveys, allowing you to do amazing things. In this workshop you will learn how to incorporate the following into your Lighthouse Studio surveys:

  • HTML
  • CSS
  • JavaScript
  • jQuery
  • Perl

Learning a little bit about these technologies will greatly enhance your ability to create surveys that your customers will love. This workshop will be very hands-on. You will learn about these technologies and then apply them to a Lighthouse Studio survey. We will be on hand to help you every step of the way.

Attendees must bring a laptop PC with Lighthouse Studio installed (a demonstration version will be given to you in advance for the purposes of classroom instruction).

Practical Tips and Tricks on Conjoint and MaxDiff

Tuesday (8:00 AM - 12:00 PM)

Jeroen Hardon, SKIM Group

In this practical session, you will partly be in charge of what we teach. We will provide a large set of practical topics, and you will decide which are most relevant. If you struggle with questions like the following, this session is for you:

  • Which conjoint method is the most appropriate for the business question?
  • Which kind of None should I use?
  • First choice or share of preference?
  • What is the difference between preference share, market share and volume share?
  • How many concepts do I show?
  • Should I line-price my products?
  • How many products can I include in my study?
  • I have too many items to do a regular MaxDiff, now what?

The trainer, Jeroen Hardon, has been involved in over a thousand conjoint and MaxDiff studies and has developed a good feel for how to deal with these kinds of challenges. Jeroen is a practitioner and is not afraid to "bend the academic rules" in order to best answer your clients' business questions.

Please prepare yourself for a lively and interactive session! If you have specific questions or topics you want included in this session, please let Jeroen know at least one week in advance so he can prepare.

Is My Advertising Working?

Tuesday (1:00 PM - 5:00 PM)

Elea Feit, Drexel University

Determining which ads work is an important component of marketing analytics, yet there are numerous methods that may give different answers and lead to confusion among advertisers and analysts. During this workshop, you will get a hands-on introduction to five different approaches to measuring advertising response:

  • Attribution rules such as last-click and first-click
  • Holdout experiments
  • Propensity scoring
  • Marketing mix modeling and other time series methods
  • Algorithmic attribution and other model-based approaches

Unlike most other tutorials that present just one of these methods in isolation, we will apply all five methods using the same data set. We will go from raw advertising data all the way to presentable findings. By working through these examples, you will develop a better understanding of how each method works, as well as the potential pitfalls of each. The examples will be worked in the R statistical language, and students who know R are encouraged to follow along, but the workshop is easy to follow even if you don't know R. Data and code files will be available to workshop participants.

What if I don’t know R? Don’t worry! You don’t need to know R. All of the analysis output is in the workshop slides. You can ignore the R syntax and focus on the data that go into the analysis, the output of the analysis, and how we interpret it. No need to bring a laptop, but you can if you want. You can also try to replicate the analysis with your favorite statistical software.

What if I know R or am learning R? You can use this workshop to develop your R skills, so come to class with a laptop with R and RStudio installed (installation instructions below.) We will provide a code file on the day of the workshop, so it will be easy to keep up, even if you are unfamiliar with some of the R syntax.
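
To make the contrast between rule-based attribution methods concrete, here is a minimal R sketch (not taken from the workshop's code or data files) that computes last-click and first-click attribution on a hypothetical set of converting customer paths; the data frame, customer IDs, and channel names are invented for illustration.

  # A minimal sketch (not the workshop's code): last-click vs. first-click
  # attribution on a hypothetical set of converting customer paths.
  paths <- data.frame(
    customer = c(1, 1, 1, 2, 2, 3),
    step     = c(1, 2, 3, 1, 2, 1),
    channel  = c("display", "search", "email", "search", "email", "display")
  )

  # Credit each conversion to the last touch...
  last_touch  <- sapply(split(paths, paths$customer),
                        function(d) d$channel[which.max(d$step)])
  # ...or to the first touch.
  first_touch <- sapply(split(paths, paths$customer),
                        function(d) d$channel[which.min(d$step)])

  table(last_touch)   # conversions credited under last-click
  table(first_touch)  # conversions credited under first-click

Even on this toy data the two rules credit different channels; the workshop examines exactly this kind of disagreement across all five methods.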

General Session Presentations (Wednesday—Friday)

Session 1

Conference Registration

Wednesday (7:00 AM - 5:00 PM)

Breakfast

Wednesday (7:00 AM - 8:25 AM)

Welcoming Remarks

Wednesday (8:25 AM - 8:30 AM)

Bryan Orme, Sawtooth Software, Conference Moderator

Constructed, Augmented MaxDiff: Two Case Studies from Google Cloud

Wednesday (8:30 AM - 9:15 AM)

Eric Bahna, Google Cloud

Chris Chapman, Google Cloud

Google Cloud needed to prioritize customer needs across many product scenarios, but faced a limitation of common choice model surveys: different respondents needed to prioritize different sets of scenarios. We discuss how we solved this with constructed and augmented MaxDiff, and share survey design tips and R code for the method.

Shapley Values: Easy, Useful and Intuitive

Wednesday (9:15 AM - 10:00 AM)

David Lyon, Aurora Market Modeling, LLC

Get a practical, intuitive understanding of how and why Shapley Values should be widely used to summarize any analyses of combinations of items (variety assortments, feature bundles, ad claims, etc.). Computing practicalities will also be covered, including super-fast and exact methods for TURF and some TURF-like problems, even huge ones, and good approximations for large problems of other types.
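
As a concrete illustration of the idea (not the presenter's code), the sketch below computes exact Shapley values for a tiny TURF-like reach problem in R; the respondent-by-item "likes" matrix and item names are hypothetical.

  # Exact Shapley values of "reach" for three items, via the subset formula.
  likes <- matrix(c(1, 0, 1,
                    0, 1, 1,
                    1, 1, 0,
                    0, 0, 1),
                  nrow = 4, byrow = TRUE,
                  dimnames = list(NULL, c("A", "B", "C")))

  reach <- function(items) {           # share of respondents reached by a set
    if (length(items) == 0) return(0)
    mean(rowSums(likes[, items, drop = FALSE]) > 0)
  }

  shapley <- function(items, v) {
    n <- length(items)
    sapply(items, function(i) {
      others <- setdiff(items, i)
      total  <- 0
      for (k in 0:length(others)) {
        subsets <- if (k == 0) list(character(0))
                   else combn(others, k, simplify = FALSE)
        w <- factorial(k) * factorial(n - k - 1) / factorial(n)
        for (S in subsets) total <- total + w * (v(c(S, i)) - v(S))
      }
      total
    })
  }

  shapley(colnames(likes), reach)  # item contributions; they sum to total reach

For realistically large item sets this brute-force enumeration is infeasible, which is where the fast exact methods and approximations mentioned in the abstract come in.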

Refreshment Break

Wednesday (10:00 AM - 10:30 AM)

Session 2

FDA Seeks Patient Preference Information to Enhance their Benefit-Risk Assessments

Wednesday (10:30 AM - 11:15 AM)

Leslie Wilson, University of California San Francisco

Fatema Turkistani, University of California San Francisco

The FDA is seeking patient preference studies that can serve as examples to further advance its goal of including the patient voice in regulatory decisions for both drugs and devices. We will describe previous examples and present the process for developing a discrete choice measure for FDA use. Preliminary results of our pilot study demonstrate that patients taking these surveys prefer video and that they prefer high-technology prosthetic devices.

A Direct Comparison of Discrete Choice and Allocation Conjoint Methodologies in the Healthcare Domain

Wednesday (11:15 AM - 12:00 PM)

James Pitcher, GfK

Tatiana Koudinova, GfK

Daniel Rosen, GfK

Patient Based Discrete Choice (PBC) and Allocation Based Conjoint (ABC) are both commonly used to estimate new product preference shares in the healthcare space. For the first time, this research directly compares the accuracy of the two methods, their characteristic similarities and differences, as well as their ease of implementation and respondent-friendliness. Our research revealed significant differences between the two models both in terms of modelled preference share estimates and directly reported preference share.

Lunch

Wednesday (12:00 PM - 1:30 PM)

Session 3

A Meta-Analysis on Three Distinct Methods Used in Measuring Variability of Utilities and Preference Shares within the Hierarchical Bayesian Model

Wednesday (1:30 PM - 2:15 PM)

Jacob Nelson, SSI

Edward "Paul" Johnson, SSI

Brent Fuller, SSI

There are several ways to assess variability in Hierarchical Bayes modeling. We discuss three methods and apply each method to actual HB models in the marketing research field across different methodologies and model characteristics. We identify modeling situations where these three methods differ.

Preference Based Conjoint: Can It Be Used to Model Markets with Many Dozens of Products?

Wednesday (2:15 PM - 3:00 PM)

Jeroen Hardon, SKIM Group

Marco Hoogerbrugge, SKIM Group

Conjoint analysis is often used for complex markets with dozens of products. Ideally, we would replicate the existing complexity of the market as well as we can in the design of the conjoint survey, but that is not always feasible. The key question in this presentation is whether a different way of constructing the statistical design can improve predictions for simulators with many dozens of products.

Refreshment Break

Wednesday (3:00 PM - 3:30 PM)

Session 4

Development of an Adaptive Typing Tool from MaxDiff Response Data

Wednesday (3:30 PM - 4:15 PM)

Jay Magidson, Statistical Innovations, Inc.

John P. Madura, University of Connecticut

A new adaptive approach for developing MaxDiff typing tools achieves high accuracy with an average of only 8 binary items! Reduction to 7 items can be achieved if trichotomous items are included in the mix. This method can be implemented with commercial software such as Latent GOLD® and CHAID.

Extending the Ensemble: An Alternative “Neutral” Approach to Segmentation

Wednesday (4:15 PM - 5:00 PM)

Curtis Frazier, Radius Global Market Research

Ana Yanes, Radius Global Market Research

Michael Patterson, Radius Global Market Research

Cluster Ensemble models have provided a great deal of power to analysts by estimating, and combining, models using different algorithms and different numbers of clusters. We propose extending this concept by incorporating an additional variable – the inputs themselves. By varying the inputs, we can mitigate the risk of sub-optimal solutions driven by our input selection. We will compare our ability to identify known segments using existing approaches to our extended ensemble approach.
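
A minimal sketch of the general idea (not the authors' method): run k-means many times while varying both the number of clusters and the subset of input variables, accumulate a co-assignment (consensus) matrix, and then cluster that matrix. The built-in iris data stand in for real basis variables.

  # Input-varying cluster ensemble via a consensus (co-assignment) matrix.
  set.seed(1)
  X <- scale(iris[, 1:4])               # stand-in basis variables
  n <- nrow(X)
  consensus <- matrix(0, n, n)
  runs <- 50
  for (r in 1:runs) {
    vars <- sample(ncol(X), size = sample(2:ncol(X), 1))  # vary the inputs
    k    <- sample(2:5, 1)                                # vary the cluster count
    cl   <- kmeans(X[, vars, drop = FALSE], centers = k, nstart = 5)$cluster
    consensus <- consensus + outer(cl, cl, "==")
  }
  consensus <- consensus / runs
  final <- cutree(hclust(as.dist(1 - consensus), method = "average"), k = 3)
  table(final, iris$Species)            # recovery of the known groups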

General Session Ends

Product Optimization Using Choice Simulator (clinic)

Wednesday (5:15 PM - 6:15 PM)

Walt Williams, Sawtooth Software, Inc.

Teaching Conjoint Analysis at the University (clinic)

Wednesday (5:15 PM - 6:15 PM)

Justin Luster, Sawtooth Software, Inc.

Clay Voorhees, Michigan State University

Reception

Wednesday (6:00 PM - 7:30 PM)

Session 5

Conference Registration

Thursday (7:00 AM - 5:00 PM)

Breakfast

Thursday (7:00 AM - 8:25 AM)

Synergistic Bandit Choice (SBC) Design for Choice-Based Conjoint

Thursday (8:30 AM - 9:15 AM)

Bryan Orme, Sawtooth Software

Some CBC studies involve complex interactions among three or more style and color attributes, such as when designing packages for consumer goods. Traditional CBC designs may be suboptimal in these cases. We demonstrate a multi-stage bandit design that uses counting analysis to identify synergies beyond just first-order interactions. At each stage, the most frequently chosen combinations of attribute levels are oversampled for evaluation by later respondents. In a pilot study involving complex interaction effects, our approach performed significantly better than traditional CBC.
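
As a toy illustration of the counting step only (not the presenters' algorithm), the snippet below tallies which style and color level combinations were chosen most often; in a multi-stage bandit design, the top combinations would be candidates to oversample for later respondents. The data are simulated.

  # Count chosen style/color combinations from earlier respondents (simulated data).
  set.seed(5)
  chosen <- data.frame(style = sample(c("S1", "S2", "S3"), 500, replace = TRUE),
                       color = sample(c("red", "blue", "green"), 500, replace = TRUE))
  combo_counts <- sort(table(interaction(chosen$style, chosen$color)),
                       decreasing = TRUE)
  head(combo_counts, 5)   # combinations to oversample in the next design stage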

Optimal Design in Discrete Attribute Spaces by Sequential Experiments

Thursday (9:15 AM - 10:00 AM)

Mingyu Joo, The Ohio State University

Michael L. Thompson, The Procter and Gamble Company

Greg Allenby, The Ohio State University

The identification of the optimal visual design of brand logos, products or packaging is challenged when attributes and their discrete levels interact. We propose an experimental criterion for sequentially searching for the most preferred design concept, and incorporate a stochastic search variable selection method to selectively estimate relevant interactions among the attributes. A validation experiment confirms that our proposed method leads to improved design concepts in a high-dimensional space compared to alternative methods.

Refreshment Break

Thursday (10:00 AM - 10:30 AM)

Session 6

Non-Negative Matrix Factorization: Gaining Insights via Simultaneous Segmentation & Factoring

Thursday (10:30 AM - 11:15 AM)

Michael Patterson, Radius Global Market Research

Jackie Guthart, Radius Global Market Research

Curtis Frazier, Radius Global Market Research

Non-Negative Matrix Factorization (NMF) is a relatively new technique that allows for the simultaneous segmentation of individuals and “factoring” of variables. This presentation will introduce NMF and compare its performance relative to standard segmentation approaches (K-means, LCA) using both simulated data and data from an actual study.
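
To make the "simultaneous segmentation and factoring" idea concrete, here is a minimal base-R sketch of NMF using Lee-and-Seung multiplicative updates on random stand-in data; it is not the presenters' implementation.

  # NMF: V (respondents x variables) ~ W (respondents x k) %*% H (k x variables).
  set.seed(2)
  V <- matrix(runif(100 * 8), nrow = 100)      # hypothetical non-negative data
  k <- 3
  W <- matrix(runif(nrow(V) * k), ncol = k)
  H <- matrix(runif(k * ncol(V)), nrow = k)
  for (iter in 1:200) {                        # multiplicative updates (squared error)
    H <- H * (t(W) %*% V) / (t(W) %*% W %*% H + 1e-9)
    W <- W * (V %*% t(H)) / (W %*% H %*% t(H) + 1e-9)
  }
  segment <- max.col(W)    # hard segment assignment from each respondent's row weights
  round(H, 2)              # "factor" loadings describing each segment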

Variable Selection for MBC Cross-Price Effects

Thursday (11:15 AM - 12:00 PM)

Katrin Dippold-Tausendpfund, GfK

Christian Neuerburg, GfK

In MBC, cross-price effects need to be selected carefully in order not to overfit the models or have simulation results distorted by “noisy” parameters. We investigate different approaches that support the selection of cross-price effects and compare their performance based on synthetic datasets under varying data conditions.

Lunch

Thursday (12:00 PM - 1:30 PM)

Session 7

Clever Randomization and Ensembling Strategies for Accommodating Multiple Data Pathologies in Conjoint Studies

Thursday (1:30 PM - 2:15 PM)

Jeff Dotson, Brigham Young University

Roger Bailey, The Ohio State University

Marc Dotson, Brigham Young University

Respondent behavior in conjoint studies often deviates from the assumptions of random utility theory. We refer to deviations from normative choice behavior as data pathologies. We draw on innovations in machine learning to develop a practical approach that relies on (clever) randomization strategies and ensembling to simultaneously accommodate multiple data pathologies in a single model. We provide tips and tricks on how to implement this approach in practice.

Tools for Dealing with Correlated Alternatives

Thursday (2:15 PM - 3:00 PM)

Jeroen Hardon, SKIM Group

Kevin Lattery, SKIM Group

Kees van der Wagt, SKIM Group

Correlated alternatives violate our standard conjoint modeling assumptions (IIA). While respondent-level utilities help, sometimes that is not enough. We describe and compare several tools for dealing with correlated alternatives, including full-blown nested logit, error components logit, and post-hoc simulator adjustments.

Refreshment Break

Thursday (3:00 PM - 3:30 PM)

Session 8

Predictive Analytics with Revealed Preference-Stated Preference Models

Thursday (3:30 PM - 4:15 PM)

Peter Kurz, Kantar TNS

Stefan Binner, bms marketing research + strategy

The combination of Price Only Discrete Choice Models and time series data, a.k.a. RPSP models (revealed preference - stated preference models), is still a challenge with respect to data availability and computation time. However, these models can provide significant benefit for predictive pricing scenarios in future markets.

The Perils of Ignoring Uncertainty in Market Simulators and Product Line Optimization

Thursday (4:15 PM - 5:00 PM)

Scott Ferguson, NC State University

Ignoring parameter and product attribute uncertainty when optimizing a product line can lead to disastrous market performance. Examples will be provided that illustrate the perils of ignoring these uncertainties, and concepts associated with reliability and robustness will be presented to formulate a more rigorous uncertainty-based product line optimization problem statement.

General Session Ends

Mobile CBC: Improvements in Lighthouse Studio (clinic)

Thursday (5:15 PM - 6:15 PM)

Megan Peitz, Sawtooth Software, Inc.

Justin Luster, Sawtooth Software, Inc.

Situational Choice Experiments (clinic)

Thursday (5:15 PM - 6:15 PM)

Keith Chrzan, Sawtooth Software, Inc.

Reception

Thursday (6:00 PM - 7:30 PM)

Session 9

Conference Registration

Friday (7:00 AM - 12:00 PM)

Breakfast

Friday (7:00 AM - 8:25 AM)

Properties of Direct Utility Models for Volumetric Conjoint

Friday (8:30 AM - 9:15 AM)

Jake Lee, Quantum Strategy, Inc

Direct Utility Models for Volumetric Conjoint have received some academic attention in the last few years, but practitioners have been slow to adopt them. This paper will give an overview of the model along with its benefits and challenges. Special attention will be given to practical concerns such as exercise design, experimental design, and simulation.

A Comparison of Volumetric Models

Friday (9:15 AM - 10:00 AM)

Thomas Eagle, Eagle Analytics of California, Inc.

Three different volumetric models are compared based on holdout task validation and managerial implications of the patterns of substitution given selected changes in prediction scenarios. The volumetric models compared are the HB joint discrete/continuous model; the Howell-Allenby volumetric model; and the latent class Poisson model with cross effects.

Refreshment Break

Friday (10:00 AM - 10:30 AM)

Session 10

Direct Estimation of Key Drivers from a Fitted Bayesian Network

Friday (10:30 AM - 11:05 AM)

Benjamin Cortese, KS&R

Melissa Jusianiec, KS&R

A new driver analysis technique, Bayesian network key driver analysis (BNKDA), is proposed to calculate driver scores directly from a fitted Bayesian network. Its performance is analyzed through simulation studies and comparisons to other driver analysis methods. Findings suggest BNKDA is a viable addition to the driver analysis toolbox.
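
For readers unfamiliar with the underlying object, this is a minimal sketch of fitting a Bayesian network in R with the bnlearn package on stand-in numeric data; the BNKDA driver-score calculation itself is the subject of the talk and is not reproduced here.

  # Learn and fit a (Gaussian) Bayesian network on stand-in data with bnlearn.
  library(bnlearn)
  df  <- mtcars[, c("mpg", "wt", "hp", "disp")]  # hypothetical outcome and drivers
  dag <- hc(df)              # structure learning by hill climbing
  fit <- bn.fit(dag, df)     # fit conditional distributions on the learned structure
  dag$arcs                   # directed relationships among the variables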

Product Relevance and Non-Compensatory Choice

Friday (11:05 AM - 11:40 AM)

Marc Dotson, Brigham Young University

Greg Allenby, The Ohio State University

Roger Bailey, The Ohio State University

We propose a non-compensatory choice model that combines choice information with auxiliary data to account for different kinds of screening rules. Specifically, we model brand and the remaining attributes separately to account for the sub-compensatory process of assessing product relevance.

Best Paper Ballot Collection

Friday (11:40 AM - 11:45 AM)

Closing Remarks and Best Paper Award

Friday (11:55 AM - 12:05 PM)

Bryan Orme, Sawtooth Software

Conference Adjourned

Friday (12:05 PM)

Optional Break-out Sessions (Wednesday—Friday)

Those who have registered for the main conference sessions on March 7-9 may also attend any parallel break-out sessions.

Wednesday, March 7: Break-Out Room #1

Estimating Aggregate Random Coefficients Logit Models Using Bayesian Techniques in Stan

Wednesday (10:30 AM - 12:00 PM)

James Savage, Lendable

When individual choice-level data is not available to researchers, it is common to estimate the random coefficients logit model. This workhorse estimation technique is powerful, yet can be unreliable. In particular, it does not include a model of measurement error (which can be large in smaller or more fragmented markets), the fitted parameters can vary widely between optimization techniques, and inference techniques typically resort to unreasonable appeals to large-sample properties of the estimator. We show a straightforward method to estimate the structural parameters of the aggregate random coefficients logit model by proposing the full generative model, including measurement error.

An exciting extension of the model allows researchers performing conjoint analysis on survey data (which is plagued by selection biases and measures only stated preferences) to estimate their models jointly with the aggregate random coefficients logit model. When both survey and sales data are used, the estimates from the conjoint model must “make sense” of the aggregate sales data. This ameliorates the biases from selection into the survey and from stated preferences. Additionally, using this method frees the researcher from making ad-hoc adjustments to the conjoint estimates in order to match market shares.

We illustrate several recent applications of the approach, including portfolio price optimization, automatic product feature suggestions, and producing Bayesian estimates of cannibalization.

“Extreme” Market Research: Scalable High-Performance Prediction

Wednesday (1:30 PM - 3:00 PM)

Ewa Nowakowska, GfK

Joseph Retzer, ACT Market Research Solutions

High dimensional data analysis for predictive model development is both challenging and valuable. Various predictive models, e.g. CART, Random Forest analysis, bagging, neural networks, and support vector machines, have been shown to provide useful models, under various circumstances, for out-of-sample prediction. Most of the aforementioned methods, however, can be rendered ineffective when working with very large data sets. In other words, these methods do not “scale” well when applied to big data.

One approach to addressing this issue is the application of “XGBoost” (eXtreme Gradient Boosting), developed by Tianqi Chen and Carlos Guestrin of the University of Washington. XGBoost, an extension of gradient boosting, provides an efficient and scalable implementation of the gradient boosting algorithm. XGBoost shows great promise, as demonstrated by the fact that it has been adopted by more than half of the winning solutions in machine learning challenges hosted on Kaggle.

This session will begin with a review of recursive partitioning techniques such as CHAID and CART, along with their implementation in R. Next, an intuitive overview of ensemble-based modeling methods, including bagging, random forest analysis, and gradient boosted decision trees, will be discussed. The implementation of these models in R will also be demonstrated.

The session culminates in an overview of extreme gradient boosting (XGBoost). We will demonstrate its implementation in R through an application to anonymized consumer data. XGBoost will be shown to provide comparatively high predictive performance while ensuring scalability of the model.
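
A minimal sketch of the R workflow, with a small built-in data set standing in for the anonymized consumer data used in the session:

  # Fit a gradient boosted classifier with the xgboost package (stand-in data).
  library(xgboost)
  X <- as.matrix(mtcars[, c("mpg", "wt", "hp", "qsec")])
  y <- mtcars$am                                   # binary outcome
  dtrain <- xgb.DMatrix(data = X, label = y)
  fit <- xgb.train(params = list(objective = "binary:logistic", max_depth = 3),
                   data = dtrain, nrounds = 50)
  head(predict(fit, dtrain))                       # predicted probabilities
  xgb.importance(model = fit)                      # which inputs drive the predictions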

Modeling the Dynamics of Consumer Preferences: The Challenge of Revealed Preference Data

Wednesday (3:30 PM - 5:00 PM)

Jakub Glinka, GfK

Ewa Nowakowska, GfK

Ever increasing amounts of data are being collected on consumer choices in the market place. This data is not only larger in volume but also different in nature from stated preference data traditionally leveraged in market research. In this session we will discuss challenges posed by revealed preference data and how they may be addressed.

We will walk the audience through solutions developed in response to challenges faced during the R&D process of developing a data product aimed at optimizing launch prices & distribution for new products. Our model is particularly well suited to handle large volumes of data collected across many reporting units over long periods of time.

Some of the challenges discussed in our presentation will include:

  • The imputation of missing product feature data
  • Aggregation: Revealed preference data is commonly collected in a highly granular form. The data therefore needs to be aggregated before modeling in order to integrate out noise and remove sparsity.
  • Modeling: Finally, the modeling of consumer preferences using Aggregate Multinomial Logit, with a sparse prior to account for the large number of attributes, is presented. This approach leads to lower shrinkage of relevant variables than the commonly used LASSO method.

The high dimensionality and size of our data requires computationally advanced methods of data processing and optimization. This talk will showcase the technology necessary for effective model implementation and share experiences with the benefits and limitations of each. We also present an approach utilizing Spark in conjunction with Stochastic Gradient Descent to effectively scale our solution when the data is too large for single node computations.

Thursday, March 8: Break-Out Room #1

Introduction to Lighthouse Studio

Thursday (8:30 AM - 9:10 AM)

Gary Baker, Sawtooth Software, Inc.

Jon Heaton, Sawtooth Software, Inc.

Come see what Sawtooth Software’s general survey development platform can do! Although Lighthouse Studio is best known for its CBC and MaxDiff components, there is much more that you can do in Lighthouse. We’ll show the general survey question types and demonstrate skip logic, constructed lists (piping), randomizations, rotations, and looping. If you’ve wondered whether you can use Lighthouse Studio to do all your general survey work, bring your questions and see what it can do!

JavaScript and Lighthouse Studio

Thursday (11:05 AM - 11:40 AM)

Justin Luster, Sawtooth Software, Inc.

Lance Adamson, Sawtooth Software, Inc.

You can become a much more powerful Lighthouse Studio user if you understand some JavaScript. JavaScript allows you to modify and customize your surveys in powerful ways. Come learn a bit of JavaScript and instantly create more powerful surveys!

Perl and Lighthouse Studio

Thursday (10:30 AM - 11:15 AM)

Justin Luster, Sawtooth Software, Inc.

You can become a much more powerful Lighthouse Studio user if you understand some Perl programming. Perl allows you to modify and customize your surveys in powerful ways. Come learn a bit about Perl and instantly create more powerful surveys!

Enhancing Your Surveys Using the Question Library

Thursday (11:20 AM - 12:00 PM)

Nathan Bryce, Sawtooth Software, Inc.

Zachary Anderson, Sawtooth Software, Inc.

Do you have a bank of questions that you use repeatedly in your surveys? Or have you created a customized question with HTML, JavaScript, or CSS and you want to reuse it in other surveys? Or do you wish you had access to a library of customized questions that others have written, such as star ranking, highlighting, autocomplete, calendar widgets, image pop-ups, or recording the latitude and longitude of a respondent? Attend this session to learn how to use the time-saving Question Library feature of Lighthouse Studio and the corresponding Community Question Library on the Sawtooth Software website.

Spice Up Your Surveys in Lighthouse Studio

Thursday (1:30 PM - 3:00 PM)

Saurabh Aggarwal, Knowledge Excel Services

Megan Peitz, Sawtooth Software, Inc.

The success of research depends on the quality of the data, which in turn depends on respondents' willingness to answer the survey. Research shows that only about 20% of research participants enjoy the survey experience.

Join us as we showcase different ways to spice up your survey within Lighthouse Studio. This session includes live demos of gamification, interactive survey techniques, and even virtual reality. The look and feel of your survey can really make a difference in increasing respondents' engagement and interest while answering the survey.

Have a specific query? Let us know and we’ll tailor our presentation to your requests.

Front End JavaScript Libraries in Lighthouse Studio

Thursday (3:30 PM - 4:10 PM)

Lance Adamson , Sawtooth Software, Inc.

Adding custom, interactive components to your studies can be intimidating. Lighthouse Studio already ships with two JavaScript libraries, used by Sawtooth Software developers, that are designed to do the hard work of creating front-end widgets for you. Learn to leverage these libraries to create components like calendar-based date selectors, auto-complete text inputs, sliders, and carousels that will give your studies a little extra flair.

Free Format CBC Questions in Lighthouse 9.4

Thursday (4:15 PM - 5:00 PM)

Lance Adamson, Sawtooth Software, Inc.

Learn to create for yourself what we haven't yet created for you! We'll go over strategies that will help you avoid and easily find bugs in your code, discuss using third-party tools and libraries to make programming less intimidating, and go step-by-step through the process of creating custom question types. Though by no means required, you might consider attending the previous breakout sessions on JavaScript, Perl, jQueryUI and Owl Carousel because here you'll see how you can bring all your skills together to get the question you want, exactly how you want it.

Thursday, March 8: Break-Out Room #2

Word Import into Lighthouse Studio for Rapid Setup of Survey Questions

Thursday (8:30 AM - 9:10 AM)

Zachary Anderson, Sawtooth Software, Inc.

Come see how you can quicken the process of creating and editing surveys using Word Import, a new feature introduced recently in Lighthouse Studio. Define texts, questions, response options, skips, and more in a simple Word document, then import everything into Lighthouse Studio with the click of a button.

Introduction to R for Marketing Researchers

Thursday (9:15 AM - 10:00 AM)

Chris Chapman, Google Cloud

Kenneth Fairchild, Sawtooth Software, Inc.

This 45-minute session is for those who are interested in a high-level introduction to R. We’ll address questions such as: what is R? Is it a statistics program or a programming language? How does one learn R? What is it good for? What are some reasons to use it, and not to use it? We’ll illustrate these with brief demonstrations of Bayesian regression models and automated reporting in R.

Bandit MaxDiff in Lighthouse Studio

Thursday (10:30 AM - 11:15 AM)

Kenneth Fairchild, Sawtooth Software, Inc.

Zachary Anderson, Sawtooth Software, Inc.

Bandit MaxDiff learns from previous respondents to oversample the “stars” and undersample the “dogs,” which dramatically increases the precision for identifying the top items of importance for the sample. Bandit MaxDiff can handle hundreds of items with data collection savings of 75% to 80% relative to standard sparse MaxDiff. Bandit MaxDiff can also be used for typical studies involving 10 to 30 items, where each item is shown at least 2x for each respondent, but the best few items (based on prior respondents) are shown 4x or 5x to each respondent. Come see how easy it is to program Bandit MaxDiff surveys in Lighthouse Studio!
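
The underlying idea is Thompson sampling; the sketch below (not Sawtooth Software's implementation) shows how items chosen more often by earlier respondents become more likely to be emphasized for the next respondent. The show/win counts are simulated.

  # Thompson-sampling flavor of item selection for a bandit MaxDiff (simulated counts).
  set.seed(4)
  n_items <- 30
  shown  <- rpois(n_items, 40)                                  # times each item was shown so far
  chosen <- rbinom(n_items, shown, runif(n_items, 0.05, 0.4))   # times each item was chosen "best"
  # Draw a plausible preference for each item from its Beta posterior,
  # then oversample the items with the highest draws for the next respondent.
  draws <- rbeta(n_items, 1 + chosen, 1 + shown - chosen)
  order(draws, decreasing = TRUE)[1:5]                          # items to emphasize next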

Intro to ACBC

Thursday (11:20 AM - 12:00 PM)

Aaron Hill, Sawtooth Software, Inc.

If you are new to conjoint analysis, Adaptive Choice-Based Conjoint (ACBC) can seem a bit intimidating. ACBC can handle conjoint surveys with lots of attributes, complex pricing, and small samples, and create efficient models without overtaxing respondents. This session will introduce you to the ACBC methodology and explore the features and components that make this tool unique. We will demonstrate the different sections of the ACBC survey, show how to create an ACBC exercise, and discuss some “Best Practices” to make sure your next ACBC project is a success.

Discover CBC & MaxDiff

Thursday (1:30 PM - 2:10 PM)

Justin Luster, Sawtooth Software, Inc.

Discover is a web-based application that makes conjoint analysis easier than ever before. In this session we will show you how to create, field, and analyze choice-based conjoint and MaxDiff surveys. We will show you all of the intuitive features of Discover.

Lighthouse Choice Simulator

Thursday (2:15 PM - 3:00 PM)

Brian McEwan, Sawtooth Software, Inc.

Walt Williams, Sawtooth Software, Inc.

Join us for a walkthrough of the new conjoint simulator. We’ll show how to create a new project and demonstrate many of the new features we’re working on. We’ve revamped the main view to allow for multiple, concurrent simulations with a host of new tricks. Additional improvements include greater control over sensitivity and optimization searches, visual displays of utilities, importances, and simulation results, and better options (than SMRT) for incorporating product availability, awareness, and external effects.

Full Rankings MaxDiff

Thursday (3:30 PM - 4:10 PM)

Kees van der Wagt, SKIM Group

The research industry is moving towards faster and cheaper studies, and full-ranked MaxDiff could help. We will show whether one can get away with fewer tasks by asking for a full ranking per task instead of “just” best/worst. In addition, we will show different ways of modeling (full-rank) MaxDiff (“standard”, exploded pairs, with scale parameters); a small sketch of the exploded coding appears after the list below.

Using both artificial and real datasets, this session will show:

  • How to best model (full-rankings) MaxDiff?
  • Does fancy coding/modeling outperform standard modeling?
  • Does additional data per task help in reducing the number of tasks and/or respondents?
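
As a small illustration of the exploded coding mentioned above (hypothetical items, not the presenter's code), a full ranking within one task can be broken into a sequence of implied best choices:

  # Explode one ranked task into sequential "best" choices (rank-ordered logit coding).
  ranking <- c(item_A = 2, item_B = 1, item_C = 4, item_D = 3)  # 1 = best
  ordered_items <- names(sort(ranking))
  exploded <- lapply(seq_len(length(ordered_items) - 1), function(i) {
    list(chosen     = ordered_items[i],
         choice_set = ordered_items[i:length(ordered_items)])
  })
  exploded    # each element is one implied choice task for estimation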

How to Deliver a Winning Conjoint Analysis Report (Best Practices & Results)

Thursday (4:15 PM - 5:00 PM)

Megan Peitz, Sawtooth Software, Inc.

Every conjoint analysis project is unique in its own way. And at Sawtooth Software, we’ve seen quite a few! Join us as we cover some best practices for making your project a success from beginning to end. Topics will include what questions to ask your client, how to avoid pitfalls, and what to look for in the results. We’ll even cover reporting strategies that will enhance your presentations and have your clients coming back for more!

Friday, March 9: Break-Out Room #1

Which Conjoint Method Should I Use?

Friday (8:30 AM - 9:10 AM)

Aaron Hill, Sawtooth Software, Inc.

This session will introduce the many conjoint and discrete choice analysis options offered by Sawtooth Software and help you determine when it is appropriate to use each method. Example case studies will illustrate various outcomes achieved with different conjoint approaches.

Optimizing Conjoint Analysis for Mobile

Friday (9:15 AM - 10:00 AM)

Femke Hulsbergen, SKIM Group

Joost van Ruitenberg, SKIM Group

More respondents are completing questionnaires on a phone or tablet. This is an opportunity because reaching them has become much easier; the challenge is to fit the survey on the mobile screen, particularly when conducting conjoint research. In this research we explore a new way of making conjoint mobile-proof by reducing complexity and allowing for engaging swiping techniques. To do so, we will test a mixed design (partial-profile tasks with 3 concepts and full-profile tasks with 2 concepts), showing the concepts dynamically. We will compare the results with those from other mobile respondents and from PC/laptop respondents.

Avoiding Common Pitfalls in Conjoint Analysis

Friday (10:30 AM - 11:00 AM)

Brian McEwan, Sawtooth Software, Inc.

Come take advantage of our technical support team's decades of experience working with Sawtooth Software customers to learn about common pitfalls and how to avoid them. This class is geared towards beginners and those with a few studies under their belts. We will cover topics ranging from attributes and levels and experimental designs to fielding your survey and running the analysis.

Beyond the Basics with MaxDiff

Friday (11:05 AM - 11:40 AM)

Megan Peitz, Sawtooth Software, Inc.

Join us for a deep dive into advanced MaxDiff concepts, including different approaches to handling large item sets and the pros and cons of anchoring. From there, you will learn what you can do with those results, including conducting a latent class analysis, using a TURF simulator, exploring the overlap of items, and techniques for visualizing the results in a report. Whether you are relatively new to the technique or already quite experienced, this session will provide useful tools and tricks that your clients will be glad you learned!