The alternate search strategy produced the desired result: by
searching in a region where the approximation indicated there might be
a minimizer *and* where we realized that we "knew" very little
about the objective, we were able to identify the global minimizer.
So how do we formalize this notion?

In our paper we propose combining these two goals into a single
objective, which we refer to as a *merit* function. The idea is to
balance the two goals much as one balances competing objectives in
multiobjective optimization or (possibly) as one penalizes constraint
violations.

One possibility is to use a merit function of the form

*m*_{c}(*x*) = *a*_{c}(*x*) - *p*_{c} *d*_{c}(*x*),

where *d*_{c}(*x*) = min_{i} ||*x* - *x*_{i}||_{2}, the minimum being
taken over all previously sampled points *x*_{i}.
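As a concrete sketch, this merit function is straightforward to evaluate; the names `merit`, `a_c`, and `samples` below are illustrative, not from the paper:

```python
import numpy as np

def merit(x, a_c, samples, p_c):
    """Merit function m_c(x) = a_c(x) - p_c * d_c(x), where d_c(x) is
    the Euclidean distance from x to the nearest sampled point.
    a_c is any callable approximating the objective; samples is a
    list of previously sampled points (illustrative names)."""
    d_c = min(np.linalg.norm(x - s) for s in samples)
    return a_c(x) - p_c * d_c
```

With a large *p*_{c}, points far from every sample look attractive (exploration); with *p*_{c} = 0, minimizing *m*_{c} reduces to minimizing the approximation alone (exploitation).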

We still use the approximation *a* to predict candidate minimizers
for the "true" objective, but we now weight our predictions
based on how close we are to known information about the objective.
Our choice of *p*_{c} dictates how much emphasis we
will place on learning more about the objective in regions for which
we have no samples versus emphasizing the rapid identification of a
(local) minimizer. For our example, we have kept
*p*_{c} constant, but this is a quantity that can be
varied at each iteration and, in fact, should eventually tend to zero
in order to ensure convergence to a local solution.

So how well does this work? The following sequence repeats our example but
now uses *m*_{c}(*x*) (shown in green) to choose
candidates to minimize the objective. For this example, we used
*p*_{c} = 3. (There is no real motivation for this
choice of *p*_{c}; it was the first value tried and it
happened to work particularly well for this example.)
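The sequence above can be sketched as an iterative loop. This is only a toy 1-D version: the polynomial surrogate, the grid search over candidates, and the function names are stand-ins for whatever approximation and subproblem solver one actually uses, and *p*_{c} is held fixed at 3 as in the example rather than driven to zero:

```python
import numpy as np

def minimize_merit(objective, x0, lo, hi, p_c=3.0, iters=10, n_grid=201):
    """Toy sample-based minimization on [lo, hi]: at each iteration,
    fit a low-degree polynomial surrogate a_c to the samples, then
    take the grid point minimizing m_c(x) = a_c(x) - p_c * d_c(x)
    as the next point at which to evaluate the objective."""
    xs = [lo, x0, hi]                      # initial sample locations
    fs = [objective(x) for x in xs]
    grid = np.linspace(lo, hi, n_grid)
    for _ in range(iters):
        deg = min(len(xs) - 1, 5)          # keep the fit low-degree
        coef = np.polyfit(xs, fs, deg)
        a_c = np.polyval(coef, grid)       # surrogate values on the grid
        # d_c: distance from each grid point to its nearest sample
        d_c = np.min(np.abs(grid[:, None] - np.array(xs)), axis=1)
        m_c = a_c - p_c * d_c              # merit values
        x_new = grid[np.argmin(m_c)]       # next candidate
        xs.append(x_new)
        fs.append(objective(x_new))
    best = int(np.argmin(fs))
    return xs[best], fs[best]
```

For instance, on (*x*² - 1)², which has two global minimizers at ±1, the distance penalty pushes early samples away from the initial points and toward the unexplored regions containing the minimizers.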

**Optimization Using Approximations:**
