There are three available algorithms for product searches: Grid, Exhaustive, and Genetic. Each takes a different approach and has different strengths and weaknesses depending upon the problem specification, the data set, and how long the researcher is willing to wait for an answer.

Exhaustive Search is the simplest of the algorithms. It examines every combination of permitted levels of all attributes. For example, if an attribute has three levels (levels 1, 2, 3) and an interpolation interval of 0.5 is used, there would be 5 levels of that attribute to explore (1, 1.5, 2, 2.5, 3).
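The expansion of an attribute's levels by an interpolation interval can be sketched as follows (a minimal Python illustration; the helper name is hypothetical, not part of the product):

```python
def interpolated_levels(lo, hi, interval):
    """Return every level from lo to hi in steps of `interval`."""
    n = int(round((hi - lo) / interval))
    return [lo + i * interval for i in range(n + 1)]

# An attribute with levels 1..3 and an interpolation interval of 0.5
# yields 5 levels to explore.
print(interpolated_levels(1, 3, 0.5))  # [1.0, 1.5, 2.0, 2.5, 3.0]
```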

The main strength of Exhaustive Search is that it is guaranteed to find the best solution within the specified domain. If the response surface has multiple peaks, Exhaustive Search will evaluate all of them and choose the best one.

The main shortcoming of Exhaustive Search is that the number of combinations to be evaluated can become very large. For example, with 10 attributes, each having 5 possible levels, the number of solutions to explore would be 5 to the 10th power, or nearly 10 million. Thus Exhaustive Search will probably not be used until faster methods have first reduced the size of the problem.
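The brute-force evaluation described above can be sketched as a toy Python example (the objective function here is hypothetical, standing in for whatever measure the search maximizes):

```python
import itertools

def exhaustive_search(levels_per_attr, objective):
    """Evaluate every combination of attribute levels; return the best one."""
    return max(itertools.product(*levels_per_attr), key=objective)

# Toy run: 3 attributes with 5 levels each -> 5**3 = 125 combinations.
# With 10 such attributes the same loop would run 5**10 (~9.8 million) times.
levels = [[1, 1.5, 2, 2.5, 3]] * 3
best = exhaustive_search(levels, lambda combo: -sum((x - 2) ** 2 for x in combo))
print(best)  # (2, 2, 2)
```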

Grid Search is a "greedy" steepest-ascent algorithm that proceeds heuristically. A starting solution is chosen at random. Within each iteration the attributes are visited in a fresh random order; all permitted levels of each attribute are examined (with all other attributes held constant) and the best level is retained. The process continues until an iteration fails to find a better solution. That is to say, Grid Search stops after a solution has been found that cannot be improved by changing any single attribute.
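One pass of this coordinate-wise greedy procedure can be sketched in Python (an illustrative implementation, not the product's code; the objective function is supplied by the caller):

```python
import random

def grid_search(levels_per_attr, objective, rng=random):
    """One pass of greedy coordinate-wise search from a random start."""
    solution = [rng.choice(levels) for levels in levels_per_attr]
    improved = True
    while improved:
        improved = False
        # Visit the attributes in a fresh random order each iteration.
        order = list(range(len(solution)))
        rng.shuffle(order)
        for i in order:
            best_value = objective(solution)
            # Examine every permitted level of attribute i,
            # holding all other attributes constant.
            for level in levels_per_attr[i]:
                candidate = solution[:i] + [level] + solution[i + 1:]
                if objective(candidate) > best_value:
                    solution = candidate
                    best_value = objective(candidate)
                    improved = True
    return solution
```

Repeated passes from different random starting points amount to calling `grid_search` several times and keeping the result with the highest objective value.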

The main strength of Grid Search is its speed. If the response surface is single-peaked, Grid Search is guaranteed to find the optimum. If there are multiple peaks, repeated runs from different starting points are very likely to find the global optimum; for that reason, Grid Search by default makes 10 passes from different random starting points (you can change the number of passes). Grid Search's main shortcoming is that it is not guaranteed to find the global optimum when there are several peaks. However, in our experience Grid Search is very effective at quickly finding good solutions, and thus can be used to reduce the domain required for further exploration by Exhaustive Search.

Genetic Search is our implementation of the procedure described by Balakrishnan and Jacob (1996). By default, we start with a population of size 300, consisting of potential solutions obtained using targeted individual-level searches (optionally, random starting solutions may be generated). In each iteration ("generation") we generate half as many new solutions as the size of our population (300/2=150 if using the defaults) by "mating" the most fit members of the original population. Then, we cull the population to the original population size by discarding the least fit solutions.
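One generation of this mate-and-cull cycle might be sketched as follows (illustrative Python under assumed names; the `mate` operator is passed in by the caller and is not the product's API):

```python
import random

def genetic_step(population, fitness, mate, rng=random):
    """One generation: mate the fittest members, then cull back to size."""
    n = len(population)
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: n // 2]                     # the most fit members
    # Generate half as many new solutions as the population size
    # (150 new solutions for the default population of 300).
    children = [mate(rng.choice(parents), rng.choice(parents), rng)
                for _ in range(n // 2)]
    # Cull: keep only the n fittest of the old and new solutions combined.
    return sorted(ranked + children, key=fitness, reverse=True)[:n]
```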

Each child has a combination of the parents' characteristics. For non-interpolable variables, the child receives the value of one parent or the other with equal probability, subject to a certain degree of random mutation. For interpolable variables the child's value is drawn at random, uniformly (rectangularly) distributed between the parents' values, and is then subjected to "mutation" by adding a normal random variable.
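A crossover operator of this kind can be sketched in Python (an illustration only: the mutation standard deviation is an assumed parameter, and mutation of non-interpolable levels is omitted for brevity):

```python
import random

def mate(parent_a, parent_b, interpolable, mutation_sd=0.1, rng=random):
    """Combine two parent solutions attribute by attribute."""
    child = []
    for a, b, interp in zip(parent_a, parent_b, interpolable):
        if interp:
            # Uniform ("rectangular") draw between the parents' values,
            # then normal mutation.
            value = rng.uniform(min(a, b), max(a, b)) + rng.gauss(0, mutation_sd)
        else:
            # Inherit one parent's level with equal probability.
            value = a if rng.random() < 0.5 else b
        child.append(value)
    return child
```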

For continuous variables (those for which you have specified Is Continuous on the Attribute Info dialog), Genetic Search often returns optimal or near-optimal solutions that do not fall on the levels included in the study. You can constrain the solutions to fall on the study levels, or on specific increments between them, by specifying a step size in the =Range( ) instruction.

For single-objective searches (such as maximizing share of preference or profit, but not both), we use the objective value of the best member of each generation as the measure of success. By default, the search stops when five successive generations have failed to show improvement, although the user may modify that setting.
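The stopping rule can be sketched as (illustrative Python with hypothetical names; `step` advances the population one generation):

```python
def run_until_converged(step, population, fitness, patience=5):
    """Iterate generations until `patience` successive generations fail to
    improve the objective value of the population's best member."""
    best = max(map(fitness, population))
    stall = 0
    while stall < patience:
        population = step(population)
        new_best = max(map(fitness, population))
        if new_best > best:
            best, stall = new_best, 0
        else:
            stall += 1
    return population
```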

A strength of Genetic Search is that it is theoretically possible for the final population to include near-optimal members that sit on different peaks, and in such cases Genetic Search is less vulnerable to local optima. In our initial work we have seen this only occasionally, and for those unusual cases Genetic Search consistently returns good solutions. For the most part we find that the faster "steepest-ascent" methods (e.g., Grid Search) also consistently achieve near-optimal solutions.