Optimization
Not too surprisingly, since that was the purpose of the
construction, we see the same sequence of iterates that we saw for the
second example:
- Iteration 1: xc = 0.20, xt = -0.50
- Iteration 2: xc = 0.20, xt = 0.10
- Iteration 3: xc = 0.10, xt = 1.50
- Iteration 4: xc = 0.10, xt = -0.20
- Iteration 5: xc = 0.10, xt = 0.00
- Iteration 6: xc = 0.00, xt = -0.10
- Iteration 7: x* = 0.00
The optimization correctly identifies a
local minimizer of the objective f, which is all
that the analysis guarantees. However,
because of the poor quality of the approximation a on the
region [ 0.833, 1.5 ], the optimization mistakenly concludes that the
global solution is at 0.0 rather than (near) 1.3.
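This failure mode can be reproduced with a simple local search. The sketch below is not the f or the approximation a from the example; it uses a hypothetical stand-in objective, with a shallow local minimizer near 0 and a deeper global minimizer near 1.3, polled by a basic compass (pattern) search that halves its step when no trial point improves:

```python
def f(x):
    # Hypothetical stand-in objective, NOT the f of the example:
    # a tilted double well with a shallow local minimizer near 0
    # and the global minimizer near 1.3.
    return x**2 * (x - 1.3)**2 - 0.1 * x

def pattern_search(f, x, step=0.5, tol=1e-6):
    """Simple 1-D compass search: poll x + step and x - step,
    accept the first improving trial, halve the step on failure."""
    fx = f(x)
    while step > tol:
        for trial in (x + step, x - step):   # poll both directions
            if f(trial) < fx:                # accept any improvement
                x, fx = trial, f(trial)
                break
        else:                                # no improvement: contract
            step /= 2
    return x

x_star = pattern_search(f, 0.2)
```

Started from 0.2 (the xc of the first iteration above), the search settles in the shallow basin near 0 and never samples the deeper basin near 1.3, so, like the example, it returns a confirmed local minimizer that is not the global solution.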
The question then becomes: how do we balance our desire to find a
confirmed minimizer against our desire to "know" enough about the
function to avoid missing a potentially better solution simply because
the approximation is not good enough to predict one?
Virginia Torczon
6/18/1998