
Discussion of First Example

Optimization

Consider the sequence of proposed solutions generated by coupling the interpolation with the optimization. We denote by xc the current best solution for the "true" problem of minimizing f(x) and by xt the "trial" solution proposed by the approximation a. Alternatively, xt may be a "confirmation point" if the approximation suggests that a minimizer has already been identified.
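
To make the bookkeeping concrete, here is a minimal Python sketch of the loop, under stated assumptions: the approximation a is taken to be the polynomial interpolant through all sites sampled so far (the notes specify only that the two initial sites yield a linear interpolant), and the objective, grid, and initial sites in the sketch are hypothetical stand-ins, not those of the example.

    import numpy as np

    def surrogate_minimize(f, grid, max_iter=20):
        # Sample f at two initial interior sites; the interpolant through
        # them is linear, as in the notes' first iteration.
        i, j = len(grid) // 3, 2 * len(grid) // 3
        samples = {grid[i]: f(grid[i]), grid[j]: f(grid[j])}
        x_c = min(samples, key=samples.get)        # current best solution for f
        for _ in range(max_iter):
            xs = sorted(samples)
            ys = [samples[x] for x in xs]
            # Approximation a: interpolating polynomial through all sites
            # sampled so far (an assumption; the notes do not fix the family).
            a = np.polynomial.Polynomial.fit(xs, ys, deg=len(xs) - 1)
            x_t = min(grid, key=a)                 # trial point: grid minimizer of a
            if x_t in samples:                     # a's minimizer already sampled:
                break                              # treat it as a confirmation point
            samples[x_t] = f(x_t)                  # evaluate the true objective
            if samples[x_t] < samples[x_c]:
                x_c = x_t                          # the trial point improves on xc
        return x_c, samples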

Notice that while the approximation a remains poor, we realize no improvement on f; the best known solution for the problem of minimizing f is still the better of the two initial sites at which f was sampled to construct the linear interpolant. By Iteration 6, however, the approximation has identified the minimizer. Iteration 7 confirms that this candidate improves upon the current best solution for f and is, in fact, a global minimizer of the approximation a. It remains to confirm that this candidate is also a minimizer of f. To do so, we need the value of the true objective f at the two adjacent grid points. We have already computed the value of f at x = 0.167, so all that remains is to compute the value of f at x = 0.133.
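
Continuing the sketch above, the confirmation step can be written as a small helper that compares the candidate against its two neighbors on the grid, reusing any values of f already in hand (as with f at x = 0.167 here). In the hypothetical usage lines, the grid has spacing 1/60 so that 0.133, 0.150, and 0.167 are grid points; the objective is again a stand-in, not the f of the example.

    def confirm_on_grid(f, grid, samples, x):
        # Confirm x as a grid-local minimizer of f: check the two adjacent
        # grid points, evaluating f only at neighbors not yet sampled.
        i = int(np.searchsorted(grid, x))
        for j in (i - 1, i + 1):
            if 0 <= j < len(grid):
                xn = grid[j]
                if xn not in samples:
                    samples[xn] = f(xn)            # evaluate only if unseen
                if samples[xn] < samples[x]:
                    return False, xn               # a neighbor is better: not confirmed
        return True, x                             # confirmed, to the grid's resolution

    # Hypothetical usage (stand-in objective, not the f of the example):
    def f(x):
        return (x - 0.15) ** 2 + 0.02 * np.sin(30 * x)

    grid = np.linspace(0.0, 0.5, 31)               # spacing 1/60 on [0, 0.5]
    x_c, samples = surrogate_minimize(f, grid)
    confirmed, x_star = confirm_on_grid(f, grid, samples, x_c)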

At Iteration 8 we stop with a confirmed (local) minimizer of f, at least to the resolution of the grid, at x* = 0.150, and with a decent approximation a to our objective f on the interval [0, 0.5].

Closing comment

It is often said that if a model of the objective or a constraint function has a weakness, the optimization procedure will find it. Here we see the same phenomenon with approximations of the objective: in the early stages of the optimization process, the approximation predicts a minimizer in precisely those regions where we have the least information about the objective (consider Iterations 3, 4, and 5). By proceeding iteratively, however, we improve the approximation until it can successfully predict a minimizer of f.



Virginia Torczon
6/13/1998