Virginia Torczon and Michael W. Trosset
Optimization problems that arise in engineering design are often characterized by features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often require the solution of differential equations, itself a computationally intensive process. Within a standard nonlinear optimization algorithm, these differential equations must be solved completely at each iteration of the algorithm.
Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context, since surrogates can be chosen or constructed to be much less expensive to compute.
Engineers draw a crucial distinction between surrogate models and surrogate approximations. Surrogate models are auxiliary simulations that are less physically faithful, but also less computationally expensive, than the expensive simulation that is regarded as ``truth.'' One example would be the use of an equivalent-plate analysis method in lieu of a finite element analysis (for instance, to analyze a wing-box of a high-speed civil transport). Surrogate approximations are algebraic summaries obtained from previous runs of the expensive simulation. Examples include the lower-order models favored in response surface methodology (RSM) and the kriging estimates employed in the design and analysis of computer experiments (DACE) [13,3]. Surrogate models exist independently of the expensive simulation and can provide new information about the physical phenomenon of interest without requiring additional runs of the expensive simulation. On the other hand, since a surrogate approximation is constructed from values of the objective that have already been computed, it provides no new information about the physical phenomena being studied. Once a surrogate approximation has been constructed, however, it is inexpensive to evaluate at new input settings.
We will consider a methodology that constructs a sequence of approximations by kriging known values of the objective function. However, many of the optimization strategies we propose also extend, with little or no modification, to surrogates derived from models rather than from algebraic approximations.
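To make the notion of a kriging approximation concrete, the following is a minimal sketch (an illustration only, not the authors' implementation) of an ordinary-kriging interpolant with a Gaussian correlation model. The range parameter `theta` is assumed fixed here; in practice it would be estimated, for example by maximum likelihood.

```python
import numpy as np

def kriging_fit(X, y, theta=1.0, nugget=1e-10):
    """Fit an ordinary-kriging interpolant with a Gaussian correlation model.

    X : (m, n) array of previously sampled design sites
    y : (m,) array of expensive objective values at those sites
    theta : correlation range parameter (assumed fixed for this sketch)
    nugget : small diagonal regularization for numerical stability
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + nugget * np.eye(len(X))
    # Constant-trend (ordinary) kriging: estimate the trend beta, then
    # the weights w = R^{-1}(y - beta).
    ones = np.ones(len(X))
    Z = np.linalg.solve(R, np.column_stack([y, ones]))
    beta = (ones @ Z[:, 0]) / (ones @ Z[:, 1])
    w = np.linalg.solve(R, y - beta)
    return X, w, beta, theta

def kriging_predict(model, xnew):
    """Predict the objective at a new site: beta + r(xnew)' R^{-1} (y - beta)."""
    X, w, beta, theta = model
    r = np.exp(-theta * ((X - xnew) ** 2).sum(-1))
    return beta + r @ w
```

Because the predictor interpolates, it reproduces the expensive objective exactly at the sampled sites, which is the property that makes kriging attractive as a surrogate approximation.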
When surrogates are available, be they models or approximations, the optimizer hopes to use them to facilitate the search for a solution to the engineering optimization problem. Our ultimate goal is to design robust optimization algorithms, but in ways that make effective use of the information that good surrogates can provide. Toward this end, we adopt the perspective that the surrogate can be used to accelerate the optimization technique by exploiting the trends that such surrogates tend to identify [8,1,18,4,5,6,14]. We do not worry about accuracy in the solution until it becomes clear either that the optimization technique is in the neighborhood of a minimizer or that the surrogate is not doing a sufficiently good job of identifying trends in the objective.
The particular approach that we will discuss makes use of pattern search techniques. These simple, flexible optimization techniques are easily amended to exploit surrogates [18,5,6,14], can handle functions that are nondifferentiable or for which derivatives are difficult or too expensive to obtain, and are easily adapted to either a parallel or distributed computing environment [9,15]. Pattern search methods are also less likely to be trapped by non-global minimizers than are traditional nonlinear optimization algorithms. Furthermore, recent analysis has begun to extend their applicability to optimization problems with constraints [10,11].
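To illustrate the class of methods in question, here is a minimal sketch of one of its simplest members, coordinate (compass) search with simple decrease. This is a toy illustration, not the particular algorithm of the paper: it polls the 2n points x ± step·e_i and contracts the step when no poll point improves the objective.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=1000):
    """Minimal coordinate (compass) pattern search for unconstrained minimization.

    Polls the 2n points x +/- step*e_i; moves to any point that improves
    the objective (simple decrease), and halves the step when no poll
    point improves.  No derivatives of f are required.
    """
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = len(x)
    while step > tol and evals < max_evals:
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:            # accept any improving poll point
                    x, fx = trial, ft
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # unsuccessful poll: contract the pattern
    return x, fx
```

The pattern of poll points is what a surrogate can later be used to reorder or prune, so that expensive truth evaluations are spent on the trial points the surrogate deems most promising.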
Our approach synthesizes recent ideas from both the numerical optimization and the computer experiments literature. Given a limited budget of expensive function evaluations that are to be used to solve an engineering optimization problem, we consider how to manage the trade-off between the expense of approximation and the expense of optimization. We believe that one should invest only a modest proportion of the original budget in constructing the initial approximation and that one should use the optimization procedure to help decide when and where further sampling should occur.
An obvious concern is the relation between the upper bound on the number of expensive function evaluations allowed, which we will call V, and the dimension n of the problem to be solved. When V is large relative to n, say n = 10 and V = 5000, the expense of function evaluation is not necessarily an issue and we are content to rely on traditional optimization techniques. When V is not large relative to n, say n = 10 and V = 20, the emphasis must be on constructing a simple approximation from what little information one has about the objective; there can be no hope of successfully applying an iterative optimization technique to the problem. (If V is less than n, then the methodologies we consider are inappropriate.) We are concerned with intermediate situations.
Our experience shows that managing approximations, which should be updated as optimization progresses, engenders a somewhat different set of issues than managing models, each of which remains static as optimization progresses. Earlier work did not envision the sequential updating of a surrogate approximation, yet this sequential updating has proved to be the defining characteristic of much of our current research. We now understand that the sequential updating of surrogate approximations often creates difficulties that are not encountered when either conventional optimization or the construction of the approximation is employed separately. In particular, the sequential updating plays a role in the quality of the surrogate approximation.
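A hedged sketch of how such sequential updating might be interleaved with a poll step conveys the flavor of the interaction (a toy illustration, not the authors' algorithm): the surrogate, here a simple Gaussian radial-basis interpolant rebuilt from all truth values computed so far, ranks the poll points, and each new truth evaluation immediately updates the approximation. For brevity only the top-ranked poll point is evaluated per iteration; a rigorous method would fall back to polling the remaining points before contracting.

```python
import numpy as np

def rbf_surrogate(X, y):
    """Cheap Gaussian radial-basis interpolant of the sampled truth values."""
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-d2) + 1e-10 * np.eye(len(X)), y)
    return lambda x: np.exp(-((X - x) ** 2).sum(-1)) @ w

def surrogate_managed_search(truth, x0, budget=20, step=0.5, tol=1e-3):
    """Spend each expensive 'truth' evaluation where the surrogate ranks best.

    Each iteration: rebuild the surrogate from all truth values computed so
    far, rank the 2n poll points by surrogate value, evaluate the truth at
    the most promising one, and fold that value back into the sample.
    """
    x = np.asarray(x0, float)
    X, y = [x.copy()], [truth(x)]
    n, evals = len(x), 1
    while evals < budget and step > tol:
        s = rbf_surrogate(np.array(X), np.array(y))
        polls = [x + sign * step * e
                 for e in np.eye(n) for sign in (+1.0, -1.0)]
        polls.sort(key=s)                     # surrogate orders the poll set
        trial = polls[0]
        ft = truth(trial)                     # one expensive evaluation
        evals += 1
        X.append(trial.copy())
        y.append(ft)                          # every truth value updates the model
        if ft < min(y[:-1]):
            x = trial                         # accept the improvement
        else:
            step *= 0.5                       # no improvement: refine the pattern
    return x, min(y)
```

The point of the sketch is the coupling the text describes: the quality of the evolving approximation depends on where the optimization chooses to sample, and the optimization in turn trusts the approximation to allocate its limited budget of truth evaluations.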
At the time of this submission we have a general description of our proposed approach, convergence analysis to establish the rigor of this approach [14,6], and results both for some simple test problems [18,14] and for a rotor blade design problem from Boeing Helicopters that establish the validity of the approach we advocate. The final paper will address the introduction of merit functions that explicitly recognize the desirability of improving the current approximation to the expensive simulation during the course of the optimization. We will define and experiment with merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our approach without sacrificing any of its rigor.