
Particle Swarm Optimization
problems found in engineering and computer science. PSO was inspired by insect swarms
and has since proven to be a powerful competitor to other evolutionary algorithms such as
genetic algorithms [41].
Analytical comparisons of the performance of PSO and the standard genetic algorithm (another kind of evolutionary algorithm) are reported in [41]. Compared to genetic algorithms, PSO tends to converge more quickly to the best solution. The PSO algorithm simulates social behavior by sharing information about the best solutions found so far. Particles are attracted toward these “better” solutions and use them to improve their own best solutions until all converge to a single “best” solution. Each particle represents a single intersection of all search dimensions.
The particles evaluate their positions using a fitness function, a mathematical measure defined over the solution dimensions. Particles in a local neighborhood share memories of their “best” positions and use those memories to adjust their own velocities and, thus, positions. The original PSO formulae developed by Kennedy and Eberhart were modified by Shi and Eberhart [68] with the introduction of an inertia parameter, ω, that was shown empirically to improve the overall performance of PSO.
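As a sketch of the inertia-weighted update described above, written here in Python rather than the chapter's MATLAB, the velocity along one dimension combines the old velocity (scaled by ω) with pulls toward the personal and neighborhood bests. The function name and the constants (common choices from the literature, not values prescribed by this chapter) are illustrative:

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """Inertia-weighted velocity update for one dimension.

    w is the inertia parameter; c1 and c2 weight the cognition and
    social pulls. The constants here are commonly used illustrative values.
    """
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

When the particle already sits at both bests, the pulls vanish and only the inertia term ω·v remains, which is what lets ω damp or sustain exploration.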
The number of successful applications of swarm optimization algorithms is growing rapidly. Recent uses include cluster head identification in wireless sensor networks [69], finding the shortest communication route in sensor networks [9], and identifying the optimal hierarchy in decentralized sensor networks.
In the next section the particle swarm for continuous search spaces is presented. The continuous particle swarm optimization algorithm has been a research topic for more than a decade; the effect of its parameters on the convergence of the swarm has been well studied, and its neighborhood topologies and many variations are presented extensively in the literature. In this chapter we focus on the binary and the discrete versions of the algorithm. The binary version is presented in section 2.2 and the discrete version in section 2.3. Both versions make the particles search in a probabilistic space: the infinite range of the velocities is transformed into a bounded probabilistic space. The transformations and the algorithms are detailed in this chapter, the effect of their parameters is briefly discussed, and the performance of these algorithms is demonstrated on simple functions. The chapter is accompanied by code written in MATLAB that can be used by the readers.
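As a brief sketch of the velocity-to-probability transformation mentioned above (the binary algorithm itself is detailed in section 2.2), the standard binary PSO squashes each unbounded velocity through a sigmoid and uses the result as the probability of setting a bit. This Python fragment (the chapter's own code is in MATLAB, and these names are illustrative) shows the mapping:

```python
import math
import random

def sigmoid(v):
    """Map an unbounded velocity to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def sample_bit(v):
    """Set the bit to 1 with probability sigmoid(v), as in binary PSO."""
    return 1 if random.random() < sigmoid(v) else 0
```

A velocity of 0 gives probability 0.5 (an unbiased coin flip), while large positive or negative velocities drive the bit toward 1 or 0 respectively, which is how the bounded probabilistic search space arises.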
2. Particle Swarms for Continuous Spaces
The PSO formulae define each particle as a potential solution to a problem in a D-dimensional space, with the ith particle represented as X_i = (x_i1, x_i2, x_i3, ..., x_iD). Each particle also maintains a memory of its previous best position, designated as pbest, P_i = (p_i1, p_i2, p_i3, ..., p_iD), and a velocity along each dimension, represented as V_i = (v_i1, v_i2, v_i3, ..., v_iD). In each generation, the previous best, pbest, of the particle combines with the best fitness in the local neighborhood, designated gbest. A velocity is computed using these values along each dimension, moving the particle to a new solution. The portion of the adjustment to the velocity influenced by the individual’s previous best position is considered the cognition component, and the portion influenced by the best in the neighborhood is the social component.
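The mechanics just described can be sketched end to end. The following Python fragment (a minimal illustration, not the chapter's MATLAB code; the parameter values are common choices from the literature and the global best is used as the neighborhood best) minimizes a fitness function over a continuous space:

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.729, c1=1.49445, c2=1.49445, lo=-5.0, hi=5.0):
    """Minimal gbest particle swarm for continuous spaces (illustrative)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # pbest positions
    pf = [fitness(x) for x in X]               # pbest fitness values
    g = min(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                     # gbest position and fitness
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognition component
                           + c2 * r2 * (G[d] - X[i][d]))     # social component
                X[i][d] += V[i][d]
            f = fitness(X[i])
            if f < pf[i]:                      # update personal best
                P[i], pf[i] = X[i][:], f
                if f < gf:                     # update global best
                    G, gf = X[i][:], f
    return G, gf
```

For example, `pso(lambda x: sum(t * t for t in x), dim=2)` drives the swarm toward the sphere function's minimum at the origin, illustrating how the cognition and social pulls steer every particle toward the best known solutions.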
In early versions of the algorithm, these formulae are