
298 16. Particle Swarm Optimization
optimization algorithm to find the optimum. Gehlhaar and Fogel suggest initializing
in areas that do not contain the optima, in order to validate the ability of the algorithm
to locate solutions outside the initialized space [312].
The last aspect of the PSO algorithms concerns the stopping conditions, i.e. criteria
used to terminate the iterative search process. A number of termination criteria have
been used and proposed in the literature. When selecting a termination criterion, two
important aspects have to be considered:
1. The stopping condition should not cause the PSO to prematurely converge, since
suboptimal solutions will be obtained.
2. The stopping condition should protect against oversampling of the fitness. If a
stopping condition requires frequent calculation of the fitness function, compu-
tational complexity of the search process can be significantly increased.
The following stopping conditions have been used:
• Terminate when a maximum number of iterations, or FEs, has been
exceeded. Clearly, if this maximum number of iterations (or
FEs) is too small, termination may occur before a good solution has been found.
This criterion is usually used in conjunction with convergence criteria to force
termination if the algorithm fails to converge. Used on its own, this criterion is
useful in studies where the objective is to evaluate the best solution found in a
restricted time period.
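As an illustration of this budget-based criterion, the following minimal Python sketch terminates purely on a maximum number of function evaluations (FEs). The random-search driver and the names `max_fes` and `fe_count` are illustrative only; they stand in for whatever search loop and counters a particular PSO implementation uses.

```python
import random

def budget_exceeded(fe_count, max_fes):
    """Stop once the allowed number of function evaluations is used up."""
    return fe_count >= max_fes

def random_search(f, dim, max_fes, seed=0):
    """Toy search loop (a stand-in for a PSO iteration loop) that stops
    only when the FE budget is exhausted."""
    rng = random.Random(seed)
    best_x, best_f, fes = None, float("inf"), 0
    while not budget_exceeded(fes, max_fes):
        x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        fx = f(x)          # one fitness evaluation
        fes += 1
        if fx < best_f:    # keep the best solution found within the budget
            best_x, best_f = x, fx
    return best_x, best_f, fes
```

Used on its own, this guarantees termination but says nothing about solution quality, which is why it is usually combined with a convergence criterion.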
• Terminate when an acceptable solution has been found. Assume that
x∗ represents the optimum of the objective function f. Then, this criterion
terminates the search process as soon as a particle, xi, is found such that
f(xi) ≤ |f(x∗) − ε|; that is, when an acceptable error has been reached. The
value of the threshold, ε, has to be selected with care. If ε is too large, the search
process terminates on a bad, suboptimal solution. On the other hand, if ε is too
small, the search may not terminate at all. This is especially true for the basic
PSO, since it has difficulties in refining solutions [81, 361, 765, 782]. Furthermore,
this stopping condition assumes prior knowledge of what the optimum is – which
is fine for problems such as training neural networks, where the optimum is
usually zero. In general, however, knowledge of the optimum is not available.
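The acceptable-error test above can be sketched directly. The function below assumes a minimization problem with a known optimum f(x∗), as in the neural network training example; the parameter names are illustrative.

```python
def acceptable_solution(f_xi, f_star, eps):
    """Return True when f(x_i) <= |f(x*) - eps|, i.e. when the particle's
    fitness f_xi is within the error threshold eps of the known
    optimum fitness f_star (minimization assumed)."""
    return f_xi <= abs(f_star - eps)
```

For example, with f_star = 0 (a perfectly fit network) and eps = 0.01, a particle with fitness 0.005 triggers termination, while one with fitness 0.05 does not.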
• Terminate when no improvement is observed over a number of itera-
tions. There are different ways in which improvement can be measured. For
example, if the average change in particle positions is small, the swarm can be
considered to have converged. Alternatively, if the average particle velocity over
a number of iterations is approximately zero, only small position updates are
made, and the search can be terminated. The search can also be terminated
if there is no significant improvement over a number of iterations. Unfortu-
nately, these stopping conditions introduce two parameters for which sensible
values need to be found: (1) the window of iterations (or function evaluations)
for which the performance is monitored, and (2) a threshold to indicate what
constitutes unacceptable performance.
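The two parameters this criterion introduces can be made concrete with a small Python sketch. It monitors the best fitness over a sliding window of iterations and reports stagnation when the improvement across that window falls below a threshold; `window` and `threshold` are exactly the two values for which sensible settings must be found, and the closure-based structure is just one illustrative way to hold the history.

```python
from collections import deque

def make_stagnation_test(window, threshold):
    """Build a stopping test that fires when the best fitness improves
    by less than `threshold` over the last `window` iterations
    (minimization assumed)."""
    history = deque(maxlen=window)  # best-fitness values, oldest first

    def stagnated(best_f):
        history.append(best_f)
        if len(history) < window:
            return False  # not enough iterations observed yet
        # Improvement across the window: oldest value minus newest.
        return history[0] - history[-1] < threshold

    return stagnated
```

Calling the returned test once per iteration with the swarm's current best fitness yields False while the swarm is still improving and True once progress over the window drops below the threshold.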
• Terminate when the normalized swarm radius is close to zero. When