So let's consider an alternate outcome.
In the previous example, the optimization process reaches a point (Iteration 6) at which the approximation suggests we have found a local minimizer of the objective, one that happens to be a global minimizer of the current approximation. Using the simple strategy with which we started, we then evaluate nearby points to confirm that we have a local minimizer. In doing so, we ignore indications that the objective may have another local minimizer in the interval [5/6, 1.5], a region where we know very little about the objective.
Instead of pursuing a single global minimizer so single-mindedly, what if we instead investigate the other minimizer suggested by the approximation, acknowledging that we do not know as much about the objective in that region as might be advisable? What happens then?
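One way to make this alternative concrete is to score candidate points not by the approximation's value alone, but by the approximation's value discounted by how far the candidate lies from any point we have already evaluated. The sketch below is illustrative only: the surrogate function, the sampled points, and the weight `rho` are assumptions for demonstration, not the text's actual example.

```python
def merit(x, surrogate, samples, rho):
    """Surrogate value minus rho times the distance to the nearest
    already-evaluated point; small values favor candidates that are
    either predicted to be low or poorly explored."""
    return surrogate(x) - rho * min(abs(x - s) for s in samples)

def next_point(surrogate, samples, lo, hi, rho, n=1501):
    """Pick the next evaluation point by a grid search of the merit
    function on [lo, hi]."""
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(grid, key=lambda x: merit(x, surrogate, samples, rho))

# Illustrative surrogate with a deeper minimizer near x = 0.2 (close to
# where we have sampled) and a shallower one near x = 1.2 (where we
# have not sampled at all).
def surrogate(x):
    return (x - 0.2) ** 2 * (x - 1.2) ** 2 + 0.1 * x

samples = [0.0, 0.2, 0.4]   # points evaluated so far, all near 0.2

# rho = 0 simply exploits the surrogate; rho > 0 pulls the search
# toward the unexplored region containing the other suggested minimizer.
x_exploit = next_point(surrogate, samples, 0.0, 1.5, rho=0.0)
x_explore = next_point(surrogate, samples, 0.0, 1.5, rho=0.5)
```

With `rho = 0` the next point lands by the well-sampled minimizer; with a positive weight it moves into the unexplored region, which is exactly the behavior the question above asks about.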