# Global Best PSO (GBestPSO)

The `GBestPSO` is the canonical version of the PSO. It is popular not only because it is the original version of the algorithm (and is frequently cited within the literature), but also because it is a simple algorithm to implement.

As with all algorithms modelled as a function, the type of the `GBestPSO` is simply defined as:

```
List[Particle[S,A]] => RVar[List[Particle[S,A]]]
```

where a collection of entities is transformed into a new collection of entities, with randomness applied. This process is then repeatedly applied until a stopping condition is reached.
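To make this shape concrete, here is a toy, self-contained sketch in plain Scala. Note that `Rng`, `Step`, `perturb` and `repeat` are illustrative stand-ins, not cilib's actual `RVar` or `Particle` types: the random state is threaded explicitly through each application, and the collection-to-collection function is reapplied until a simple stopping condition (a fixed iteration count) is met.

```scala
// Toy stand-in for RVar's threaded randomness: a simple splitmix-style
// generator producing a value in [0, 1) together with the next state.
final case class Rng(seed: Long) {
  def next: (Rng, Double) = {
    val s = seed * 6364136223846793005L + 1442695040888963407L
    (Rng(s), (s >>> 11).toDouble / (1L << 53).toDouble)
  }
}

object ToyIteration {
  // The algorithm as a function: a collection goes in, a new collection
  // comes out, with the random state threaded alongside.
  type Step = (Rng, List[Double]) => (Rng, List[Double])

  // A trivial "algorithm": randomly perturb every element.
  val perturb: Step = (rng0, xs) =>
    xs.foldLeft((rng0, List.empty[Double])) { case ((r, acc), x) =>
      val (r2, d) = r.next
      (r2, acc :+ (x + d - 0.5))
    }

  // Repeated reapplication until the stopping condition (n iterations).
  def repeat(n: Int, step: Step, rng: Rng, xs: List[Double]): (Rng, List[Double]) =
    (1 to n).foldLeft((rng, xs)) { case ((r, s), _) => step(r, s) }
}
```

Because the random state is a value that is passed along rather than mutated, running the same seed twice yields the same result, which is the property cilib's `RVar` provides.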

We’re going to exclude the import statements simply for brevity, but the reader is encouraged to examine the example algorithm definition in the `examples` sub-module of the project source.

## Getting things ready

In order to define an experiment, there are a couple of things we need to get ready first. The most obvious is that there needs to be some kind of problem upon which we will be executing the `GBestPSO`.

As the very first step, we need to get the needed imports in scope:

```
import cilib._
import cilib.pso._
import eu.timepit.refined.auto._
import scalaz.effect._
import scalaz.effect.IO.putStrLn
import spire.implicits._
import spire.math.Interval
import cilib.syntax.algorithm._
import scalaz._
import Scalaz._
```

Next, we define the `GBestPSO` itself. The `GBestPSO` uses a velocity update equation that combines the personal best of the current particle with the collection’s current best particle to determine the new velocity vector for the current particle within the algorithm.
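The velocity update just described is the standard gbest PSO equation. As a hedged, plain-Scala sketch (not cilib's implementation), where `w` is the inertia weight, `c1`/`c2` the cognitive and social acceleration constants, and `r1`/`r2` uniform random values in [0, 1):

```scala
object VelocityUpdate {
  // v'_i = w*v_i + c1*r1*(pbest_i - x_i) + c2*r2*(gbest_i - x_i)
  def gbestVelocity(
      w: Double, c1: Double, c2: Double,
      r1: Double, r2: Double,
      velocity: Vector[Double],
      position: Vector[Double],
      pbest: Vector[Double],
      gbest: Vector[Double]
  ): Vector[Double] =
    velocity.indices.map { i =>
      w * velocity(i) +
        c1 * r1 * (pbest(i) - position(i)) + // cognitive component
        c2 * r2 * (gbest(i) - position(i))   // social component
    }.toVector
}
```

The two difference terms are exactly the "attractors" that the `Guide` instances below supply: the particle's own best position and the collection's best position.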

Let’s define the two “particle attractors” which we need in the velocity update equation. Because these two values will attract or guide the particle in the search space, we refer to them as `Guide` instances:

```
scala> val cognitive = Guide.pbest[Mem[Double],Double]
cognitive: cilib.pso.Guide[cilib.Mem[Double],Double] = cilib.pso.Guide$$$Lambda$9959/1717398686@27226d20
scala> val social = Guide.gbest[Mem[Double]]
social: cilib.pso.Guide[cilib.Mem[Double],Double] = cilib.pso.Guide$$$Lambda$9961/1164569932@4450c45a
```

Again, we need to provide some type parameters to keep the compiler happy. In this case we provide a type called `Mem[Double]`, which tracks the memory of a particle and, at the same time, fulfills the constraints of the PSO algorithm itself: the algorithm participants must cater for a `HasMemory` instance, which exists for the `Mem[Double]` type.
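The `HasMemory` constraint follows the usual typeclass pattern. As a hedged sketch (these are illustrative definitions, not cilib's actual `HasMemory` or `Mem`), the idea is that the algorithm never inspects the particle state directly; it only requires evidence that a "personal best" can be extracted from it:

```scala
// Toy typeclass: evidence that a state type S remembers a best position of A's.
trait HasMemory[S, A] {
  def memory(s: S): List[A]
}

// Toy stand-in for cilib's Mem: a state that carries a best position.
final case class ToyMem[A](best: List[A])

object HasMemory {
  // The instance that "exists for" the toy memory type.
  implicit def toyMemHasMemory[A]: HasMemory[ToyMem[A], A] =
    new HasMemory[ToyMem[A], A] {
      def memory(s: ToyMem[A]): List[A] = s.best
    }

  // An algorithm component (like a pbest Guide) only needs the constraint.
  def pbest[S, A](s: S)(implicit ev: HasMemory[S, A]): List[A] = ev.memory(s)
}
```

This is why `Guide.pbest` requires the `Mem[Double]` type parameter: the compiler must be able to find the corresponding `HasMemory` instance.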

Now we can define the algorithm itself, providing some constants that are known to provide convergent behaviour within the PSO:

```
scala> val gbestPSO = pso.Defaults.gbest(0.729844, 1.496180, 1.496180, cognitive, social)
gbestPSO: scalaz.NonEmptyList[cilib.pso.Particle[cilib.Mem[Double],Double]] => (cilib.pso.Particle[cilib.Mem[Double],Double] => cilib.Step[Double,cilib.pso.Particle[cilib.Mem[Double],Double]]) = cilib.pso.Defaults$$$Lambda$9962/1671090718@2767560b
scala> val iter = Iteration.sync(gbestPSO)
iter: scalaz.Kleisli[[β$0$]cilib.Step[Double,β$0$],scalaz.NonEmptyList[cilib.pso.Particle[cilib.Mem[Double],Double]],scalaz.NonEmptyList[cilib.pso.Particle[cilib.Mem[Double],Double]]] = Kleisli(cilib.Iteration$$$Lambda$9963/1706879187@1c99d080)
```

Now that the algorithm is defined, we need to define an “environment” within which this algorithm will execute. The environment is simply a collection of values that defines the comparison and evaluator for the algorithm, such as minimizing a benchmark problem.

Let’s define such an environment using a simple problem, borrowing the problem definition from the benchmarks sister project. We will also be minimizing this problem and defining the bounds of the problem space.

```
scala> val env =
| Environment(
| cmp = Comparison.dominance(Min),
| eval = Eval.unconstrained(cilib.benchmarks.Benchmarks.spherical[NonEmptyList, Double]).eval,
| bounds = Interval(-5.12,5.12)^30)
env: cilib.Environment[Double] = Environment(cilib.Comparison$$anon$3@630be640,cilib.RVar$$anon$3@196c6bb1,NonEmpty[[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12]])
```

Here we define the evaluator, which is an unconstrained `Eval` instance using the `spherical` function definition from the benchmarks project. We explicitly provide the needed type parameters to keep the compiler happy, namely that the `Position` is a `NonEmptyList[Double]`. Additionally, the `cmp` value defines *how* the optimization will be driven, which in this example is to minimize the evaluator.
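The `spherical` benchmark being minimized here is the well-known sum-of-squares function, whose minimum of 0 lies at the origin. A plain-Scala equivalent (for illustration only; the benchmarks project defines it generically over container and numeric types) is:

```scala
object Spherical {
  // f(x) = sum_i x_i^2; minimized at x = (0, ..., 0) with f = 0.
  def spherical(xs: List[Double]): Double = xs.map(x => x * x).sum
}
```

Keeping this in mind makes the final result below easy to check: a converged swarm should report positions with every component very close to zero.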

Let’s now define the entity collection that we need to give to the algorithm instance. The collection requires the problem bounds and also defines how the entity instances will be initialized once random positions are generated for the given problem space:

```
scala> val swarm = Position.createCollection(PSO.createParticle(x => Entity(Mem(x, x.zeroed), x)))(env.bounds, 20)
swarm: cilib.RVar[scalaz.NonEmptyList[cilib.pso.Particle[cilib.Mem[Double],Double]]] = cilib.RVar$$anon$2@231ecc75
```

The last requirement is to provide the RNG instance that will be used within the algorithm. We define this value and then repeatedly run the algorithm on the entity collection, stopping after 1000 iterations of the algorithm have been performed:

```
scala> val rng = RNG.fromTime // Seed the RNG with the current time of the computer
rng: cilib.RNG = cilib.CMWC@14640625
scala> val result = Runner.repeat(1000, iter, swarm).run(env)
result: cilib.RVar[scalaz.NonEmptyList[cilib.pso.Particle[cilib.Mem[Double],Double]]] = cilib.RVar$$anon$2@780df7cd
scala> val positions = result.map(_.map(x => Lenses._position.get(x)))
positions: cilib.RVar[scalaz.NonEmptyList[cilib.Position[Double]]] = cilib.RVar$$anon$2@744b1c42
scala> positions.run(rng)._2
res4: scalaz.NonEmptyList[cilib.Position[Double]] = NonEmpty[Solution(NonEmpty[-2.9048798834916276E-6,-1.882505308908995E-5,-3.900962757527723E-7,-2.432528152692061E-7,-1.0129363018454894E-6,7.274249173909196E-6,-2.5202627240450576E-6,1.6839614717355645E-6,4.3253296665864507E-7,-8.066280912267861E-7,2.5758948281671857E-6,2.0960919836368514E-6,5.124707433043215E-6,-1.472147889097426E-6,-1.6534647187108953E-6,1.0282096835985707E-6,-7.1180319562336055E-6,7.815127386282414E-8,-2.388483873090207E-6,3.730946612784019E-6,-1.402950107473946E-6,1.3098597697132314E-6,-2.1468459069211574E-6,-4.750084923076191E-6,2.3448518224502303E-6,-8.247236047096162E-7,-1.1158131890030593E-7,-1.292638266391916E-6,4.133395086834839E-7,-3.7016026488499885E-7],NonEmpty[[-5.12, 5.12],[-5.12, 5.12],[-5.12, 5.12],[-5...
```