
A Family of Merit Functions

The alternate search strategy produced the desired result: by searching in a region where the approximation indicated a minimizer might lie, but about which we "knew" very little, we were able to identify the global minimizer. So how do we formalize this notion?

In our paper we propose combining these two goals into a single objective, which we refer to as a merit function. The idea is to balance the two goals much as one does for a multiobjective optimization problem or (possibly) as one does when handling constraint violations.

One possibility is to use a merit function of the form

$$m_c(x) = a_c(x) - \rho_c \, d_c(x),$$

where $\rho_c \ge 0$ and

$$d_c(x) = \min_i \| x - x_i \|_2,$$

with the minimum in $d_c$ taken over all points $x_i$ at which we know the value of the true objective. Thus, $d_c(x)$ is the distance from $x$ to the nearest previous design site.
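To make these definitions concrete, here is a minimal sketch in Python (not from the paper; the names `approx`, `sites`, and `rho` are illustrative) of how $d_c$ and $m_c$ might be evaluated:

```python
import numpy as np

def d_c(x, sites):
    """Distance from x to the nearest previous design site."""
    return min(np.linalg.norm(np.asarray(x) - np.asarray(x_i)) for x_i in sites)

def merit(x, approx, sites, rho):
    """Merit function m_c(x) = a_c(x) - rho * d_c(x), with rho >= 0.

    approx -- callable approximation a_c to the true objective
    sites  -- points x_i at which the true objective is already known
    rho    -- weight placed on moving away from previous design sites
    """
    return approx(x) - rho * d_c(x, sites)
```

With `rho = 0` the merit function reduces to the approximation itself; a larger `rho` rewards candidates that lie far from the existing samples.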

We still use the approximation $a_c$ to predict candidate minimizers for the "true" objective, but we now weight our predictions by how close we are to known information about the objective. Our choice of $\rho_c$ dictates how much emphasis we place on learning more about the objective in regions where we have no samples versus rapidly identifying a (local) minimizer. For our example we kept $\rho_c$ constant, but it is a quantity that can be varied at each iteration and, in fact, should eventually tend to zero in order to ensure convergence to a local solution.

So how well does this work? The following sequence repeats our example, but now uses $m_c(x)$ (shown in green) to choose candidate minimizers of the objective. For this example we used $\rho_c = 3$. (There is no deep motivation for this choice of $\rho_c$; it was the first value tried, and it happened to work particularly well for this example.)
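As a rough sketch of how the merit function might drive one step of such a search (an assumed loop, not the exact algorithm from the paper; `true_objective` and `candidates` are hypothetical stand-ins), using the `merit` function defined above:

```python
def search_step(true_objective, approx, sites, values, rho, candidates):
    """One illustrative iteration: evaluate the true objective at the
    candidate that minimizes the merit function over a finite set."""
    best = min(candidates, key=lambda x: merit(x, approx, sites, rho))
    sites.append(best)                   # record the new design site
    values.append(true_objective(best))  # one new (expensive) evaluation
    return best
```

After each step, the approximation would be refit to the augmented set of design sites, and $\rho_c$ could be reduced toward zero across iterations, as noted above.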

[Animation: Optimization Using Approximations]


Virginia Torczon
6/30/1998