Pattern search methods are a class of methods for solving nonlinear optimization problems. Their main attraction lies in the fact that they neither require nor approximate derivatives (sensitivities), yet they enjoy robust global convergence properties analogous to those of the more sophisticated quasi-Newton methods.
From a practitioner's perspective, pattern search methods have numerous advantages beyond their robustness and the fact that they require no sensitivities. First and foremost, they are straightforward to implement and easy to customize. They are well-suited to parallel computation and have no obvious limit on their scalability. They can also be used in the absence of numerical objectives and constraints: all a user needs to do is express a preference for one alternative (design) over another. Moreover, because pattern search methods search in multiple directions, some of which may be uphill, they can be useful when there are a great many local minimizers: they are less likely to become trapped at a local minimizer and thus more likely to explore the design space. Finally, the structure that underlies these methods suggests variants for solving problems with discrete variables.
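To illustrate the derivative-free character of these methods, the following is a minimal sketch of one simple member of the family, a compass (coordinate) search; the function and parameter names are illustrative, and this particular update rule (simple decrease, halving the step) is one of many possibilities rather than the definitive algorithm.

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimize f starting from x0 using only function comparisons."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        # Poll the 2n coordinate directions +/- e_i at the current step length;
        # no derivative information is required, only comparisons of f-values.
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:  # simple decrease: accept the trial point
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5  # no direction improved: contract the step length
    return x, fx
```

For example, minimizing `f(x) = (x[0]-1)**2 + (x[1]+2)**2` from the origin drives the iterates toward `(1, -2)` without ever evaluating a gradient. Note that the accept/reject test only compares two function values, which is why a numerical objective can be replaced by a user's expressed preference between two designs.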
We will present a survey of these techniques and a discussion of the available software.