
348 16. Particle Swarm Optimization
proposed to monitor changes in the fitness of the global best position. By monitoring
the fitness of the global best particle, change detection is based on globally provided
information. To increase the accuracy of change detection, Hu and Eberhart [386]
later also monitored the global second-best position. Monitoring both the global
best and second-best positions limits the occurrence of false alarms. Monitoring of these global
best positions is based on the assumption that if the optimum location changes, then
the optimum value of the current location also changes.
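This detection test can be sketched as follows; the sphere objective and the function names are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Hypothetical objective (sphere function); any fitness function works here.
def fitness(x):
    return float(np.sum(x ** 2))

def environment_changed(gbest, second_best, stored_f_gbest, stored_f_second):
    """Detect an environment change by re-evaluating the stored global best
    and second-best positions. A change is flagged only if BOTH re-evaluated
    fitness values differ from their stored values, which limits false alarms
    caused by noise at a single position."""
    return (fitness(gbest) != stored_f_gbest and
            fitness(second_best) != stored_f_second)

# Usage sketch: positions and fitness values stored at the previous iteration.
gbest = np.array([0.1, -0.2])
second = np.array([0.3, 0.0])
f_g, f_s = fitness(gbest), fitness(second)

# Static environment: re-evaluation returns the stored values, no alarm.
assert not environment_changed(gbest, second, f_g, f_s)
```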
One of the first studies in the application of PSO to dynamic environments came from
Carlisle and Dozier [107], where the efficiency of different velocity models (refer to
Section 16.3.5) has been evaluated. Carlisle and Dozier observed that the social-only
model is faster in tracking changing objectives than the full model. However, the
reliability of the social-only model deteriorates faster than the full model for larger
update frequencies. The selfless and cognition-only models do not perform well in
changing environments. Keep in mind that these observations were made without changing
the original velocity models, and should be viewed under the assumption that the
swarm had not yet reached an equilibrium state. Since this study, a number of other
studies have been done to investigate how the PSO should be changed to track dynamic
optima. These studies are summarized in this section.
From these studies in dynamic environments, it became clear that diversity loss is the
major reason for the failure of PSO to achieve more efficient tracking.
Eberhart and Shi [228] proposed using the standard PSO, but with a dynamic, ran-
domly selected inertia coefficient. For this purpose, equation (16.24) is used to select
a new w(t) for each time step. In their implementation, c1 = c2 = 1.494, and velocity
clamping was not done. As motivation for this change, recall from Section 16.3.1 that
velocity clamping restricts the exploration abilities of the swarm. Removing velocity
clamping therefore facilitates greater exploration, which is highly beneficial. Furthermore,
from Section 16.3.2, the inertia coefficient controls the exploration–exploitation
trade-off. Since it cannot be predicted in dynamically changing environments whether
exploration or exploitation is preferred, the randomly changing w(t) ensures a good
mix of both.
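A minimal sketch of this velocity update follows. It assumes equation (16.24) has the commonly cited form w(t) = 0.5 + r(t)/2 with r(t) ∼ U(0, 1), so that w(t) is uniform on (0.5, 1.0); the function name and test values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = c2 = 1.494  # acceleration coefficients used by Eberhart and Shi

def velocity_update(v, x, pbest, gbest):
    """One PSO velocity update with a randomly selected inertia coefficient.
    Assumes w(t) = 0.5 + r(t)/2, r(t) ~ U(0,1), i.e. w(t) uniform on
    (0.5, 1.0). No velocity clamping is applied, matching the cited
    implementation."""
    w = 0.5 + rng.random() / 2                      # new w(t) each time step
    r1 = rng.random(v.shape)
    r2 = rng.random(v.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Usage sketch: a stationary particle pulled toward bests at the origin.
v = np.zeros(2)
x = np.array([1.0, -1.0])
new_v = velocity_update(v, x, pbest=np.zeros(2), gbest=np.zeros(2))
```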
While this PSO implementation presented promising results, the efficiency is limited
to type I environments with a low severity and type II environments where the value of
the optimum is better after the change in the environment. This restriction is mainly
due to the memory of particles (i.e. the personal best positions and the global best
selected from the personal best positions), and that changing the inertia will not be
able to kick the swarm out of the current optimum when vi(t) ≈ 0, ∀i = 1, . . . , ns.
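The stagnation argument can be verified numerically. The sketch below is illustrative, not code from the text: once a particle has converged, its position coincides with its personal best and the global best, so both attraction terms vanish and no choice of inertia can produce movement:

```python
import numpy as np

x = np.array([2.0, 3.0])      # converged particle position
v = np.zeros(2)               # v_i(t) ~ 0 at equilibrium
pbest = gbest = x.copy()      # memory points at the (now outdated) optimum

for w in (0.1, 0.5, 0.9):     # no inertia value helps
    r1 = np.random.rand(2)
    r2 = np.random.rand(2)
    new_v = w * v + 1.494 * r1 * (pbest - x) + 1.494 * r2 * (gbest - x)
    assert np.allclose(new_v, 0.0)   # the particle cannot move
```

Every term in the update is zero: the inertia term because v ≈ 0, and the cognitive and social terms because pbest − x = gbest − x = 0, which is why diversity must be restored by other means.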
A very simple approach to increase diversity is to reinitialize the swarm, which means
that all particle positions are set to new random positions, and the personal best and
neighborhood best positions are recalculated. Eberhart and Shi [228] suggested the
following approaches:
• Do not reinitialize the swarm, and just continue to search from the current
position. This approach only works for small changes, and when the swarm has
not yet reached an equilibrium state.