Sawtooth Software: The Survey Software of Choice

Volumetric Choice Experiments

  • Jordan Louviere, Research Professor, Marketing, University of South Australia
    Chief Research Scientist, Strategy Analytics, Newton, MA
    Co-Founder, ChoiceFlows, Raleigh, NC
  • Tiago Ribeiro, Indera, Porto, Portugal
    Co-Founder, ChoiceFlows, Raleigh, NC
  • Richard Carson, Professor of Economics, University of California San Diego
    Co-Founder, ChoiceFlows, Raleigh, NC

We gratefully acknowledge SSHRC data collection funding and assistance from:

  • Towhidul Islam, Research Professor, Marketing, University of Guelph
  • Tony Marley, Emeritus Professor, Psychology, University of Victoria (BC)

Many decisions individuals make involve how many times to undertake an activity or how many units of a product to purchase. This can be viewed as a volume of activity during a given time period or over a fixed set of occasions. Of course, repeated discrete choice behaviours underlie count data processes, but count data contain substantially more information than traditional discrete choices. Indeed, researchers frequently use choice models to analyse volumetric choice data by treating the different numbers of units chosen as alternatives. In this talk we discuss how observing the outcomes of a count data process allows us to overcome several potential problems with discrete choice data. Of particular interest is that count data allow us to address several issues related to scale. Scale issues matter because of growing evidence that the behaviour of many individuals is effectively deterministic, which undermines the ability to estimate popular choice models focused on understanding individual-level behaviour.

We discuss how our original motivation for looking at count processes arose out of several discrete choice experiments (DCEs) and a new test designed to isolate different patterns of deterministic behaviour and their prevalence. We report some of those findings today to illustrate why popular ways of modelling individual choice behaviour, such as mixed logit (MIXL) and latent class models, may be problematic. We discuss how count data models solve many of these problems by separating preference parameters from scale and from deterministic behaviours, so that determinism, instead of being problematic, leads to very tight predictions. We believe this should help shift the design objectives for DCEs towards maximizing external validity.

We discuss several count data models: a) Poisson regression using maximum likelihood, b) quasi-maximum likelihood Poisson regression, which does not require the tight mean-variance link of the standard Poisson model, c) negative binomial regression, d) truncated count processes, e) zero- and one-inflated count processes, and f) count models with random parameters and latent classes.

We introduce experimental designs for volumetric choice problems and discuss their properties. In contrast to the extensive literature on experimental design for DCEs, the literature on optimal experimental design for count processes is very sparse, if not non-existent.

We provide several empirical examples to help illustrate the key issues involved in collecting and modelling volumetric choices. We also consider some preliminary tests of external validity.


This one-hour talk discusses or touches on the following topics:

  • Many ways of designing DCEs developed since 1999 may produce experimental artefacts (known as “demand characteristics”). That is, the designs may induce artificial behavior, unrepresentative of what consumers “really do”. In turn this may lead to biases and results that do not generalize across replications and/or closely related DCEs.
  • We illustrate several types of behavioral artefacts, with a particular focus on perfectly deterministic choice strategies.
  • We discuss ways to test the properties of designs that underlie such artificial behaviors, and preview a forthcoming study of over 120 DCE designs suggesting that the assumptions one makes to generate designs can produce potentially misleading outcomes. We also discuss why and how one needs to control for design differences, including the necessity of an appropriate “baseline” comparison.
  • We discuss why the long-standing designs proposed by Louviere & Woodworth (1983) do not seem to have these problems.
  • We discuss why it is imperative that the focus on design shift to maximizing external validity and why it is important for academic & applied researchers to test external validity as often as possible.
  • We then introduce experimental designs for quantity (volumetric) choice problems and discuss their properties. We note that it may be possible to test and compare discrete choice and quantity choice experiments, and why such comparisons can shed light on whether these types of choices and associated designs induce artefacts or lack sufficient information to allow one to determine whether an observed behaviour is “real” or artificial.
  • We use several empirical examples to illustrate quantity choice experiments.
  • We discuss statistical models consistent with quantity choices and the economic theory of demand, which include count data models and censored & truncated regression models. We illustrate applications of these models.
  • We provide some preliminary tests of external validity.

We open the floor to discussion and potential solutions and ways forward.