Section 5.3.2: Non-gradient methods

One-dimensional search methods based only on comparison of function values

Use when derivative f'(x) is inconvenient or expensive to evaluate, or is not available.

Unbounded search

Always pose the optimisation problem with constraints!

But if a local minimum is suspected within a much smaller region than defined by lower and upper bounds on x, then it may be worthwhile starting with an unrestricted search to establish a tighter bounded search region.

Basic idea

Step away from the starting point in the direction of decreasing f, increasing the step length at each move; when f increases, the last three points bracket the minimum.

See section 5.3.2.1 for the unbounded search algorithm.

Acceleration provides a compromise between taking many short steps, and finding a very large bounded search region.

This version of the algorithm is slightly complicated because we assume there are bounds in the problem.

Example

Minimise $f(D) = 2.0\,D + 0.4 + 0.5\,D^{-4.75}$ s.t. $D \geq 0.1$, $D \leq 2.5$.

DATA:

Lower bound for diameter 0.1 m
Upper bound for diameter 2.5 m
Starting value for diameter 0.2 m
Initial step in diameter 0.1 m
Step acceleration factor 2.0


D = 0.1 cost = 28117.666
D = 0.2 cost = 1045.707
D = 0.3 cost = 153.280
D = 0.5 cost = 14.854
D = 0.9 cost = 3.024
D = 1.7 cost = 3.840

RESULT:
Lower bound for bounded search 0.5 m
Upper bound for bounded search 1.7 m
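The trace above can be reproduced with a short script. The following is a minimal sketch of the accelerated stepping scheme described above, not the exact algorithm of section 5.3.2.1; the name `bracket_minimum` is illustrative.

```python
def f(D):
    """Pipe cost function from the example."""
    return 2.0 * D + 0.4 + 0.5 * D ** -4.75

def bracket_minimum(f, x0, step, accel, lo, hi):
    """Accelerated unbounded search: step in the downhill direction,
    multiplying the step by `accel` each move, until f increases or a
    bound is hit.  Returns (lower, upper) for the bounded search."""
    # Choose the downhill direction by comparing a point one step back.
    if f(max(x0 - step, lo)) < f(x0):
        step = -step
    pts = [x0, min(max(x0 + step, lo), hi)]
    while f(pts[-1]) < f(pts[-2]):
        step *= accel
        nxt = min(max(pts[-1] + step, lo), hi)
        if nxt == pts[-1]:          # stuck on a bound: stop there
            break
        pts.append(nxt)
    # The minimum lies between the last point before the best and the
    # overshoot point (the last three points bracket it).
    a = pts[-3] if len(pts) >= 3 else pts[0]
    b = pts[-1]
    return (min(a, b), max(a, b))
```

With the DATA above (`bracket_minimum(f, 0.2, 0.1, 2.0, 0.1, 2.5)`) this visits D = 0.1, 0.2, 0.3, 0.5, 0.9, 1.7 and returns the bracket (0.5, 1.7), matching the RESULT.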

Bounded search

When a region is known to contain a minimum for the problem, apply an iterative strategy for interval reduction.

Systematically exclude one or more sub-intervals in which the optimum does not lie. Thereby create a new interval for the next iteration.

Terminate when the interval is smaller than a specified tolerance on x.

Methods

1. Third-interval elimination
2. Half-interval elimination
3. Golden section search

Third-interval elimination

Divide interval into 3 equal sub-intervals.


Symmetric $\Longrightarrow$ no preference for one end of the interval or the other.

Eliminate the end sub-interval which is not adjacent to the interior point with the minimum f.

Cost: 2 function evaluations per iteration
Benefit: Reduce search range by 33.3 % at next iteration.

Drawback: the interior point already calculated cannot be re-used within the new interval.
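The elimination rule can be sketched as follows; a minimal illustration assuming a unimodal f on [a, b] (`third_interval` is an illustrative name, not code from the course).

```python
def third_interval(f, a, b, tol=1e-3):
    """Third-interval elimination for a unimodal f on [a, b]:
    evaluate f at the two interior points dividing the interval into
    three equal parts, then discard the end sub-interval not adjacent
    to the lower of the two values.  Costs 2 fresh evaluations per
    iteration; neither interior point can be re-used next time."""
    while b - a > tol:
        third = (b - a) / 3.0
        x1, x2 = a + third, b - third
        if f(x1) < f(x2):
            b = x2      # minimum cannot lie in [x2, b]
        else:
            a = x1      # minimum cannot lie in [a, x1]
    return 0.5 * (a + b)
```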


Half-interval elimination

Divide interval into 4 equal sub-intervals.


Eliminate the two sub-intervals which are not adjacent to the interior point with the minimum f.

Cost: 2 or 3 function evaluations per iteration
Benefit: Reduce search range by 50 % at next iteration.

Advantage: after the first iteration, which needs 3 function evaluations, the middle point in the surviving two sub-intervals can be re-used. Thus it rapidly becomes more efficient than the third-interval method.
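The midpoint re-use can be sketched as follows; again a minimal illustration for a unimodal f, with an illustrative function name.

```python
def half_interval(f, a, b, tol=1e-3):
    """Half-interval elimination: divide [a, b] into four quarters at
    x1, xm, x2 and keep the half centred on the lowest of the three
    interior values.  After the first iteration (3 evaluations) the
    surviving midpoint value is re-used, so each later iteration
    costs only 2 fresh evaluations."""
    xm = 0.5 * (a + b)
    fm = f(xm)                       # midpoint value, re-used below
    while b - a > tol:
        x1, x2 = 0.5 * (a + xm), 0.5 * (xm + b)
        f1, f2 = f(x1), f(x2)        # the 2 fresh evaluations
        if f1 < fm:                  # keep [a, xm]; x1 is new midpoint
            b, xm, fm = xm, x1, f1
        elif f2 < fm:                # keep [xm, b]; x2 is new midpoint
            a, xm, fm = xm, x2, f2
        else:                        # keep [x1, x2]; xm stays midpoint
            a, b = x1, x2
    return xm
```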

Golden section search

Divide interval into 3 sub-intervals.
The sub-intervals are symmetric, but not equal.


Eliminate the sub-interval which is not adjacent to the minimum f.

Then ensure that sub-intervals for the new interval of length L' can be constructed with the same relative proportions as those in the previous iteration.

This
(a) permits re-use of one point from the previous iteration;
(b) does not prejudice the rest of the search by changing the relative sub-interval sizes.

To achieve this condition, $\alpha$ must satisfy

\begin{displaymath}\alpha^2 - 3 \alpha + 1 = 0 \end{displaymath}

whose only root between 0 and 1 is

\begin{displaymath}\alpha = \frac{3 - \sqrt{5}}{2} = 0.381966 \end{displaymath}
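The quadratic follows from the re-use requirement: with interior points at distances $\alpha L$ and $(1-\alpha)L$ from the left end, eliminating the right-hand sub-interval leaves $L' = (1-\alpha)L$, and the surviving point at $\alpha L$ must become the right-hand interior point of the new interval:

\begin{displaymath}\alpha L = (1 - \alpha) L' = (1 - \alpha)^2 L \;\Longrightarrow\; \alpha^2 - 3 \alpha + 1 = 0 \end{displaymath}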

Cost: 1 or 2 function evaluations per iteration
Benefit: Reduce search range by 38.2 % at next iteration.

Advantage: after the first iteration which needs 2 function evaluations, the point within the surviving sub-interval can be re-used. The method was designed to achieve this.

Golden section is the most efficient method which makes no assumption (based on gradient information, for example) about where in the search interval the minimum is likely to be.

Comparison of methods

Reduce search range to 0.001 of initial length. Let this process take $N_i$ iterations.

Third-interval

\begin{displaymath}\left(\frac{2}{3}\right)^{N_i} = 10^{-3} \end{displaymath}


\begin{displaymath}N_i = 17.04 \rightarrow 18 \end{displaymath}

Function evaluations $= 2 N_i = 36$

Half-interval

\begin{displaymath}\left(\frac{1}{2}\right)^{N_i} = 10^{-3} \end{displaymath}


\begin{displaymath}N_i = 9.97 \rightarrow 10 \end{displaymath}

Function evaluations $= 3 + 2\,(N_i - 1) = 21$

Golden section

\begin{displaymath}\left(1 - 0.3820\right)^{N_i} = 10^{-3} \end{displaymath}


\begin{displaymath}N_i = 14.35 \rightarrow 15 \end{displaymath}

Function evaluations $= 2 + 1\,(N_i - 1) = 16$
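The iteration counts above follow from rounding up $N_i = \log 10^{-3} / \log r$ for per-iteration reduction factor $r$; a quick check (`iterations` is an illustrative name):

```python
import math

def iterations(r, target=1e-3):
    """Iterations needed, at an interval-reduction factor r per
    iteration, to shrink the search range to `target` of its length."""
    return math.ceil(math.log(target) / math.log(r))

for name, r, evals in [
    ("third-interval", 2.0 / 3.0, lambda n: 2 * n),
    ("half-interval",  1.0 / 2.0, lambda n: 3 + 2 * (n - 1)),
    ("golden section", 1.0 - 0.3820, lambda n: 2 + (n - 1)),
]:
    n = iterations(r)
    print(f"{name}: {n} iterations, {evals(n)} function evaluations")
```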

Example

Continue with the pipe diameter optimisation.

Start bounded search with 0.5 < D < 1.7 m.

Set diameter tolerance to 0.001 m.
Use golden section search; see section 5.3.2.2.
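A minimal golden-section sketch applied to this example (variable and function names are illustrative, not the notation of section 5.3.2.2). Since f is convex here, the result can be checked analytically: $f'(D) = 2.0 - 2.375\,D^{-5.75} = 0$ gives $D \approx 1.030$.

```python
ALPHA = (3 - 5 ** 0.5) / 2        # 0.381966..., the golden-section ratio

def f(D):
    """Pipe cost function from the example."""
    return 2.0 * D + 0.4 + 0.5 * D ** -4.75

def golden_section(f, a, b, tol):
    """Golden section search: interior points at a + ALPHA*(b - a) and
    b - ALPHA*(b - a).  Each iteration discards one outer sub-interval
    and re-uses the surviving interior point, so only 1 fresh
    evaluation is needed after the first iteration (which needs 2)."""
    x1, x2 = a + ALPHA * (b - a), b - ALPHA * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                # minimum in [a, x2]: drop [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + ALPHA * (b - a)
            f1 = f(x1)             # the single fresh evaluation
        else:                      # minimum in [x1, b]: drop [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = b - ALPHA * (b - a)
            f2 = f(x2)             # the single fresh evaluation
    return 0.5 * (a + b)

D_opt = golden_section(f, 0.5, 1.7, 0.001)
```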

Summary

When no information is available except comparison of function values, golden section search is the most efficient bounded search method.

Last modified: Tue Aug 25 12:16:41 BST