
2.6.7 Drawbacks of PSO
PSO and other stochastic search algorithms have two major drawbacks [Løvberg
2002]. The first is that the swarm may prematurely converge (as discussed in
Section 2.5.6). According to Angeline [1998], although PSO finds good solutions
much faster than other evolutionary algorithms, it usually cannot improve the
quality of the solutions as the number of iterations is increased. PSO typically
suffers from premature convergence when strongly multi-modal problems are being
optimized. The rationale
behind this problem is that, for the gbest PSO, particles converge to a single point,
which lies on the line between the global best and the personal best positions. This
point is not even guaranteed to be a local optimum; proofs can be found in Van den
Bergh [2002]. Another reason for this problem is the fast rate of information flow
between particles, which produces similar particles (with a corresponding loss of
diversity) and thereby increases the possibility of being trapped in local optima
[Riget and Vesterstrøm
2002]. Several modifications of the PSO have been proposed to address this problem.
Two of these modifications have already been discussed, namely, the inertia weight
and the lbest model. Other modifications are discussed in the next section.
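To make the diversity argument concrete, the following small sketch (not part of the cited works) runs a plain gbest PSO on the strongly multi-modal Rastrigin function and reports a simple diversity measure, taken here as the mean distance of the particles from the swarm centroid. The parameter values w = 0.72 and c1 = c2 = 1.49, the swarm size, and the choice of diversity measure are illustrative assumptions, not prescriptions from the sources above. As the printed diversity collapses toward zero, improvements in the best value typically stall, which is the stagnation behaviour described in this section.

    import numpy as np

    def rastrigin(x):
        # Strongly multi-modal benchmark; global minimum 0 at the origin.
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    rng = np.random.default_rng(1)
    dim, n_particles, iters = 10, 20, 500
    w, c1, c2 = 0.72, 1.49, 1.49          # commonly used inertia/acceleration settings

    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest_pos = pos.copy()
    pbest_val = np.array([rastrigin(p) for p in pos])
    gbest_pos = pbest_pos[np.argmin(pbest_val)].copy()

    for t in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Each particle is pulled toward a point between its personal best
        # and the global best position (the gbest attraction discussed above).
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
        pos = pos + vel
        vals = np.array([rastrigin(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest_pos = pbest_pos[np.argmin(pbest_val)].copy()
        # Diversity: mean distance of the particles from the swarm centroid.
        diversity = np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1))
        if t % 100 == 0:
            print(f"iter {t:4d}  best f = {pbest_val.min():.4f}  diversity = {diversity:.4f}")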
The second drawback is that stochastic approaches have problem-dependent
performance. This dependency usually results from the parameter settings of each
algorithm. Thus, using different parameter settings for the same stochastic search
algorithm results in large variations in performance. In general, no single parameter setting exists
which can be applied to all problems. This problem is magnified in PSO, where
modifying the value of a single parameter may have a proportionally large effect [Løvberg
2002]. For example, increasing the value of the inertia weight, w, increases the
speed of the particles, resulting in more exploration (global search) and less