    Restarting Particle Swarm Optimisation for deceptive problems

    Particle Swarm Optimisation (PSO) has the advantage of finding, if not the optimum of a continuous problem space, then at least a very good position, and doing so at modest computational cost. However, as the number of possible optima increases, PSO will only explore a subset of these positions. Techniques such as niching can allow a small number of positions to be explored in parallel, but by the time a problem has become truly deceptive, with very many optima, there is little choice but to explore optima sequentially. PSO, once it has converged, has no way of dispersing its particles so as to allow a further convergence, hopefully to a new optimum. Random restarts are one way of providing this divergence; this paper suggests another, inspired by Extremal Optimisation (EO). The proposed technique allows the particles to disperse by way of positions that are fitter than average. After a while dispersion ceases and PSO takes over again, but since it starts from better-than-average fitnesses, the point it converges to is also better than average. This alternation of algorithms can carry on indefinitely. This paper examines the performance of sequential PSO exploration on a range of problems, some deceptive, some non-deceptive. As predicted, performance on deceptive problems tends to improve significantly with time, while performance on non-deceptive problems, which do not have multiple positions of comparable fitness to spread through, does not.
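
    The following is a minimal sketch of the alternation described above, assuming a standard global-best PSO and interpreting the EO-inspired dispersion phase as random steps that are accepted only when they land on a position fitter than the current swarm average. The Rastrigin test function, parameter values and phase lengths are illustrative assumptions, not the paper's settings.

    import numpy as np

    def rastrigin(x):
        # Multimodal test function (minimisation); an assumed example, not from the paper.
        return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    def pso_with_dispersive_restarts(dim=10, n_particles=30, cycles=5,
                                     pso_iters=200, disperse_iters=50, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = -5.12, 5.12
        pos = rng.uniform(lo, hi, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest = pos.copy()
        pbest_f = np.array([rastrigin(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        gbest_f = pbest_f.min()

        for cycle in range(cycles):
            # Standard global-best PSO phase: converge on (hopefully) a good optimum.
            for _ in range(pso_iters):
                r1, r2 = rng.random((2, n_particles, dim))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                f = np.array([rastrigin(p) for p in pos])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = pos[improved], f[improved]
                if pbest_f.min() < gbest_f:
                    gbest_f = pbest_f.min()
                    gbest = pbest[pbest_f.argmin()].copy()

            # Dispersion phase: random steps accepted only if the new point is fitter
            # than the current swarm average (an assumed reading of "dispersing by way
            # of positions that are fitter than average").
            for _ in range(disperse_iters):
                f = np.array([rastrigin(p) for p in pos])
                avg_f = f.mean()
                trial = np.clip(pos + rng.normal(0, 0.5, pos.shape), lo, hi)
                trial_f = np.array([rastrigin(p) for p in trial])
                accept = trial_f < avg_f      # fitter than average -> take the step
                pos[accept] = trial[accept]
            vel[:] = 0.0                      # let the next PSO phase re-converge afresh

        return gbest, gbest_f

    best, best_f = pso_with_dispersive_restarts()
    print("best fitness found:", best_f)

    Because the dispersion step never accepts positions worse than the swarm average, each PSO phase in this sketch restarts from a better-than-average region, which is the mechanism the abstract credits for the improvement over time on deceptive problems.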