Search Algorithms




There are three available algorithms for product searches: Grid, Exhaustive, and Genetic (coming soon). Each takes a different approach, with different strengths and weaknesses depending on the problem specification, the data set, and how long the researcher is willing to wait for an answer.




Exhaustive Search is the simplest of the algorithms. It examines every combination of permitted levels of all attributes. For example, if an attribute has three levels (levels 1, 2, 3) and an interpolation interval of 0.5 is used, there would be 5 levels of that attribute to explore (1, 1.5, 2, 2.5, 3).  
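The expansion of an attribute's levels by an interpolation interval can be sketched as follows (the helper name and structure are illustrative, not the product's actual code):

```python
def interpolated_levels(lo, hi, step):
    """Return every permitted level from lo to hi inclusive, spaced by step."""
    n = int(round((hi - lo) / step))
    return [lo + i * step for i in range(n + 1)]

# An attribute with levels 1..3 and an interpolation interval of 0.5
# yields five levels to explore.
print(interpolated_levels(1, 3, 0.5))  # [1.0, 1.5, 2.0, 2.5, 3.0]
```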


The main strength of Exhaustive Search is that it is guaranteed to find the best solution within the specified domain. If the response surface has multiple peaks, Exhaustive Search will evaluate all of them and choose the best one.  


The main shortcoming of Exhaustive Search is that the number of combinations to be evaluated can become very large. For example, with 10 attributes, each having 5 possible levels, the number of solutions to explore would be 5 to the 10th power, or nearly 10 million. Thus Exhaustive Search will probably not be used until faster methods have first reduced the size of the problem.  
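A minimal sketch of the enumeration and its combinatorial cost (the objective function passed in is a placeholder assumption, not part of the product):

```python
import itertools
import math

def exhaustive_search(levels_per_attr, objective):
    """Evaluate every combination of permitted levels; return the best one."""
    return max(itertools.product(*levels_per_attr), key=objective)

# 10 attributes with 5 permitted levels each: 5**10 combinations to evaluate.
levels = [[1, 1.5, 2, 2.5, 3]] * 10
print(math.prod(len(lv) for lv in levels))  # 9765625 -- nearly 10 million
```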




Grid Search is a "greedy," fastest-ascent algorithm that proceeds heuristically. A starting solution is chosen at random. Within each iteration the attributes are selected in random order; all permitted levels of each attribute are examined (with all other attributes held constant) and the best level is retained. Each iteration examines the attributes in a different random order. The process continues until an iteration fails to find a better solution; that is, Grid Search stops once it reaches a solution that cannot be improved by changing any single attribute.  


The main strength of Grid Search is its speed. If the response surface is single-peaked, Grid Search is guaranteed to find the optimum. If there are multiple peaks, repeated runs from different starting points are very likely to find the global one; for that reason, Grid Search by default makes 10 passes from different random starting points (you can change the number of passes).

Grid Search's main shortcoming is that it is not guaranteed to find the global optimum when there are several peaks. However, in our experience Grid Search is very effective at quickly finding good solutions, and it can therefore be used to reduce the domain required for further exploration by Exhaustive Search.  
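The procedure can be sketched roughly as follows (the function names, the maximization convention, and the seed handling are assumptions for illustration, not the product's actual implementation):

```python
import random

def grid_search_pass(levels_per_attr, objective, rng):
    """One pass: start at a random solution, then repeatedly sweep the
    attributes in a fresh random order, keeping the best level of each
    attribute with the others held fixed, until a full iteration fails
    to improve the solution."""
    sol = [rng.choice(lv) for lv in levels_per_attr]
    best = objective(sol)
    improved = True
    while improved:
        improved = False
        order = list(range(len(levels_per_attr)))
        rng.shuffle(order)  # a different random attribute order each iteration
        for a in order:
            for level in levels_per_attr[a]:
                trial = sol[:]
                trial[a] = level
                val = objective(trial)
                if val > best:
                    sol, best = trial, val
                    improved = True
    return sol, best

def grid_search(levels_per_attr, objective, passes=10, seed=0):
    """Run several passes from different random starting points (default 10)
    and keep the best result."""
    rng = random.Random(seed)
    return max((grid_search_pass(levels_per_attr, objective, rng)
                for _ in range(passes)), key=lambda r: r[1])
```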


Genetic (coming soon)


Genetic Search is our implementation of the procedure described by Balakrishnan and Jacob (1996). We start with a population of 300 randomly generated potential solutions. In each iteration ("generation") the least "fit" half of the population is replaced with new members obtained by randomly "mating" pairs from the most fit 150 members.  


Each child has a combination of the parents' characteristics. For non-interpolable variables, the child receives the value of one parent or the other with equal probability, subject to a certain degree of random mutation. For interpolable variables the child's value is a random one, uniformly ("rectangularly") distributed between the parents' values; the child's value is then "mutated" by adding a normally distributed random variable.
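A sketch of that mating rule (the mutation rate and standard deviation here are illustrative placeholders, not the product's actual settings):

```python
import random

def mate(parent_a, parent_b, attrs, rng, mutation_sd=0.1, mutation_p=0.05):
    """attrs: one spec per attribute -- an 'interpolable' flag and, for
    non-interpolable attributes, the list of permitted 'levels'."""
    child = []
    for a, b, spec in zip(parent_a, parent_b, attrs):
        if spec["interpolable"]:
            # Uniform ("rectangular") draw between the parents' values,
            # then a normal random mutation is added.
            v = rng.uniform(min(a, b), max(a, b)) + rng.gauss(0.0, mutation_sd)
        else:
            # Take either parent's level with equal probability ...
            v = a if rng.random() < 0.5 else b
            # ... subject to a small chance of random mutation.
            if rng.random() < mutation_p:
                v = rng.choice(spec["levels"])
        child.append(v)
    return child
```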


For interpolable variables, Genetic Search often returns optimal or near-optimal solutions that do not fall on the increments included in the study or on the increments allowed by the interpolation step size.  


Genetic Search also makes use of a procedure known as "snapping." To accelerate convergence, if any attribute level is close to a whole number we "snap" it by setting it exactly equal to that whole number. In the first iteration we use a broad snapping interval, converting any value within 0.2 of a whole number to that whole number. The interval is decreased in each subsequent iteration, and has little impact in later iterations.  
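The snapping step can be sketched as follows (the text specifies only the initial interval of 0.2 and that it shrinks each iteration; the decay schedule in the comment is an assumption):

```python
def snap(value, interval):
    """Set value exactly equal to the nearest whole number when it lies
    within `interval` of that whole number; otherwise leave it unchanged."""
    nearest = round(value)
    return float(nearest) if abs(value - nearest) <= interval else value

# In later generations the interval is smaller (e.g., decayed each
# generation), so snapping has progressively less effect.
print(snap(2.85, 0.2))  # 3.0 -- within 0.2 of a whole number
print(snap(2.60, 0.2))  # 2.6 -- left unchanged
```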


As a measure of success we use the objective value of the best member of each generation. The default is to stop when three successive generations have failed to show improvement, although the user may modify that setting.  
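The stopping rule can be sketched as follows (the generator interface is an illustrative assumption; `patience` corresponds to the default of three generations):

```python
def run_until_stalled(generation_bests, patience=3):
    """Consume per-generation best objective values; stop once `patience`
    successive generations have failed to improve on the best so far."""
    best = next(generation_bests)
    stalls = 0
    for value in generation_bests:
        if value > best:
            best, stalls = value, 0
        else:
            stalls += 1
            if stalls >= patience:
                break
    return best
```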


A strength of Genetic Search is that the final population may include near-optimal members located on different peaks, which makes it less vulnerable to local optima. In our initial work we have seen such multi-peaked cases only occasionally, and in those unusual cases Genetic Search consistently returns good solutions. For the most part we find that the faster "steepest-ascent" methods (e.g., Grid) also consistently achieve near-optimal solutions.
