Global Best PSO (GBestPSO)

The GBestPSO is the canonical version of the PSO. It is popular not only because it is the original version of the algorithm (and is therefore cited often within the literature), but also because it is a simple algorithm to implement.

As with all algorithms modelled as functions, the type of the GBestPSO is simply defined as:

List[Particle[S,A]] => RVar[List[Particle[S,A]]]

where a given collection of entities is transformed into a new collection of entities, with randomness applied. This process is then repeatedly applied until a stopping condition is reached.
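To make this shape concrete, here is a minimal, library-free sketch of an "algorithm as a function" of exactly this type. The names (`RVar`, `step`, `iterate`) mirror the idea only; this is not CILib's implementation:

```scala
import scala.util.Random

object ShapeSketch {
  // A stand-in for RVar: a computation that needs a random source to run.
  type RVar[A] = Random => A

  // An "algorithm" in the same shape as the GBestPSO type:
  // it maps a collection to a randomised new collection.
  val step: List[Double] => RVar[List[Double]] =
    xs => rng => xs.map(x => x + rng.nextGaussian() * 0.1)

  // Repeatedly reapply the step until a stopping condition
  // (here simply an iteration count) is reached.
  def iterate(xs: List[Double], n: Int)(rng: Random): List[Double] =
    if (n <= 0) xs else iterate(step(xs)(rng), n - 1)(rng)

  def main(args: Array[String]): Unit = {
    val result = iterate(List(1.0, 2.0, 3.0), 100)(new Random(42))
    println(result.length) // the collection size is preserved
  }
}
```

Note that the randomness is kept abstract: nothing runs until a random source is supplied, which is the same delayed-evaluation idea behind CILib's `RVar`.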

We’re going to exclude the import statements simply for brevity, but the reader is encouraged to examine the example algorithm definition in the examples sub-module of the project source.

Getting things ready

In order to define an experiment, there are a couple of things we need to get ready first. The most obvious should be that there needs to be some kind of problem, upon which we will be executing the GBestPSO.

As the very first step, we need to get the needed imports in scope:

import cilib._
import cilib.pso._
import cilib.pso.Defaults._

import scalaz.NonEmptyList
import scalaz.effect._
import scalaz.effect.IO.putStrLn
import scalaz.std.list._
import spire.implicits._
import spire.math.Interval

import cilib.syntax.algorithm._

import scalaz._
import Scalaz._

Let’s define a simple problem, borrowing the problem definition from the benchmarks sister project.

scala> val spherical = Eval.unconstrained(cilib.benchmarks.Benchmarks.spherical[NonEmptyList, Double]).eval
spherical: cilib.RVar[scalaz.NonEmptyList[Double] => cilib.Objective[Double]] = cilib.RVar$$anon$3@3a2925ce

Here we define a value called spherical, which is an unconstrained Eval instance using the spherical function definition from the benchmarks project. We explicitly provide the needed type parameters to keep the compiler happy, namely that the Position is a NonEmptyList[Double].
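For reference, the spherical benchmark simply sums the squares of the coordinates, with its minimum of 0.0 at the origin. A standalone sketch (not the benchmarks project's actual code):

```scala
object SphericalSketch {
  // f(x) = sum of x_i squared — minimised at the origin with value 0.0
  def spherical(xs: List[Double]): Double =
    xs.map(x => x * x).sum

  def main(args: Array[String]): Unit = {
    println(spherical(List(1.0, 2.0, 3.0))) // 14.0
  }
}
```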

Next, we define the GBestPSO itself. The GBestPSO is defined to use a velocity update equation that uses the personal best of the current particle and then the collection’s current best particle to determine the new velocity vector for the current particle within the algorithm.
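The standard gbest velocity update can be sketched without CILib's abstractions as follows. This is a simplified per-dimension model; the parameter names (`w`, `c1`, `c2`) and the sampling of one random number per dimension are illustrative, not CILib's exact formulation:

```scala
import scala.util.Random

object VelocitySketch {
  // Velocity update using the personal best (cognitive attractor)
  // and the collection's best (social attractor).
  def updateVelocity(
      x: Vector[Double],     // current position
      v: Vector[Double],     // current velocity
      pbest: Vector[Double], // personal best position
      gbest: Vector[Double], // best position in the collection
      w: Double, c1: Double, c2: Double,
      rng: Random): Vector[Double] =
    (0 until x.length).toVector.map { i =>
      w * v(i) +
        c1 * rng.nextDouble() * (pbest(i) - x(i)) +
        c2 * rng.nextDouble() * (gbest(i) - x(i))
    }

  def main(args: Array[String]): Unit = {
    val rng = new Random(0)
    val v1 = updateVelocity(
      Vector(0.0), Vector(0.0), Vector(1.0), Vector(1.0),
      0.729844, 1.496180, 1.496180, rng)
    println(v1.length)
  }
}
```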

Let’s define the two “particle attractors” which we need in the velocity update equation. Because these two values will attract or guide the particle in the search space, we refer to them as Guide instances:

scala> val cognitive = Guide.pbest[Mem[Double],Double]
cognitive: cilib.pso.Guide[cilib.Mem[Double],Double] = cilib.pso.Guide$$$Lambda$10368/705768389@51ea2db6

scala> val social    = Guide.gbest[Mem[Double]]
social: cilib.pso.Guide[cilib.Mem[Double],Double] = cilib.pso.Guide$$$Lambda$10370/118927025@16d5289a

Again, we need to provide some type parameters to keep the compiler happy. In this case we provide the type Mem[Double], which tracks the memory of a particle and, at the same time, fulfills a constraint of the PSO algorithm itself: the algorithm participants must provide a HasMemory instance, which exists for the Mem[Double] type.
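To illustrate how such a typeclass constraint works, here is a simplified standalone sketch. The definitions of `Mem` and `HasMemory` below are deliberately stripped down and do not match CILib's actual types:

```scala
object MemorySketch {
  // A simplified particle memory: best position seen plus current velocity.
  final case class Mem[A](b: Vector[A], v: Vector[A])

  // A typeclass witnessing that a state type S carries a particle memory.
  trait HasMemory[S, A] {
    def mem(s: S): Vector[A]
  }

  // The instance the algorithm requires: Mem[Double] does carry a memory.
  implicit val memHasMemory: HasMemory[Mem[Double], Double] =
    (s: Mem[Double]) => s.b

  // An algorithm component can now demand the capability as a constraint,
  // without knowing the concrete state type.
  def personalBest[S](state: S)(implicit H: HasMemory[S, Double]): Vector[Double] =
    H.mem(state)

  def main(args: Array[String]): Unit =
    println(personalBest(Mem(Vector(1.0, 2.0), Vector(0.0, 0.0))))
}
```

The point is that the algorithm is written against the constraint, not against `Mem[Double]` directly, so any state type with a `HasMemory` instance would do.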

Now we can define the algorithm itself, providing some constants that are known to provide convergent behaviour within the PSO:

scala> val gbestPSO = gbest(0.729844, 1.496180, 1.496180, cognitive, social)
gbestPSO: List[cilib.pso.Particle[cilib.Mem[Double],Double]] => (cilib.pso.Particle[cilib.Mem[Double],Double] => cilib.Step[Double,cilib.pso.Particle[cilib.Mem[Double],Double]]) = cilib.pso.Defaults$$$Lambda$10371/97135235@1b499f69

scala> val iter = Iteration.sync(gbestPSO)
iter: scalaz.Kleisli[[β$0$]cilib.Step[Double,β$0$],List[cilib.pso.Particle[cilib.Mem[Double],Double]],List[cilib.pso.Particle[cilib.Mem[Double],Double]]] = Kleisli(cilib.Iteration$$$Lambda$10372/935735127@2b80a4f)

Now that the algorithm is defined, let's define the entity collection that we need to give to the algorithm instance. The collection defines the bounds of our problem space and also defines how the entity instances will be initialized, once random positions are generated within the given problem space.

scala> val swarm = Position.createCollection(PSO.createParticle(x => Entity(Mem(x, x.zeroed), x)))(Interval(-5.12,5.12)^30, 20)
swarm: cilib.RVar[List[cilib.pso.Particle[cilib.Mem[Double],Double]]] = cilib.RVar$$anon$2@61465ee1

The last two requirements are to define how we want to perform the optimization and to provide the RNG instance that will be used within the algorithm. We define these values and then repeatedly run the algorithm on the entity collection, stopping after 1000 iterations of the algorithm have been performed.
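Conceptually, repeating the algorithm threads the randomised step through itself a fixed number of times, feeding each result into the next application. A simplified sketch of such a `repeat` combinator (not CILib's `Runner.repeat` implementation):

```scala
import scala.util.Random

object RepeatSketch {
  // A stand-in for RVar: a computation awaiting a random source.
  type RVar[A] = Random => A

  // Apply a randomised step n times, feeding each result into the next step.
  def repeat[A](n: Int, step: A => RVar[A], init: RVar[A]): RVar[A] =
    rng => {
      var a = init(rng)
      var i = 0
      while (i < n) { a = step(a)(rng); i += 1 }
      a
    }

  def main(args: Array[String]): Unit = {
    // A trivial "algorithm" that ignores the randomness and increments.
    val inc: Int => RVar[Int] = a => _ => a + 1
    println(repeat(1000, inc, _ => 0)(new Random(123))) // 1000
  }
}
```

As in the real API, nothing executes until the random source is finally supplied, which is why `result` below is still an `RVar` value.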

scala> val opt = Comparison.dominance(Min)
opt: cilib.Comparison = cilib.Comparison$$anon$3@3f91e764

scala> val rng = RNG.fromTime // Seed the RNG with the current time of the computer
rng: cilib.RNG = cilib.CMWC@5fef54cf

scala> val result = Runner.repeat(1000, iter, swarm).run(opt)(spherical)
result: cilib.RVar[List[cilib.pso.Particle[cilib.Mem[Double],Double]]] = cilib.RVar$$anon$2@2a67156c

scala> val positions = => Lenses._position.get(x)))
positions: cilib.RVar[List[cilib.Position[Double]]] = cilib.RVar$$anon$2@2083fbda

Running the positions computation with the RNG then yields the final particle positions:

res3: List[cilib.Position[Double]] = List(Solution(NonEmpty[3.04872811743936E-5,3.2367515030995357E-6,-5.200222372216609E-6,3.15930302309867E-6,1.824478364008228E-6,-4.748900904956893E-5,9.277244803594419E-7,1.099723812252234E-5,1.3660975866124812E-5,-1.7686827786872617E-5,2.319996044799699E-5,-5.570804432924791E-6,-5.514459803812203E-6,-2.9461726020671733E-6,-1.8903166511988658E-5,9.787449127510854E-7,9.113260609281509E-8,1.8445511576621388E-6,2.0048516387934896E-5,-5.303563217143877E-6,-1.6991655400919382E-5,1.2558261176078112E-5,1.8035293795076567E-5,-1.0965152442065831E-5,4.492872912723067E-6,-1.549644540106627E-5,-8.758354889928781E-6,-7.424320575184469E-6,-1.423397298943015E-5,-5.936456637742266E-4],NonEmpty[[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5...