Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use, but is inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO as orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions, and are compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
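To make the idea of an orthogonally constructed exemplar concrete, the following is a minimal sketch (not the paper's full algorithm, which also performs factor analysis over the trial results) of how an L4(2^3) orthogonal array can combine a particle's personal best and its neighborhood best dimension by dimension. The objective function `sphere`, the vectors `pbest` and `gbest`, and the three-dimensional toy setting are all illustrative assumptions:

```python
import numpy as np

def sphere(x):
    # Toy minimization objective; stands in for any fitness function.
    return float(np.sum(x ** 2))

# L4(2^3) orthogonal array: 4 trial rows covering 3 two-level factors.
# Level 0 -> take that dimension from pbest; level 1 -> from gbest.
OA = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def orthogonal_exemplar(pbest, gbest, f):
    """Build candidate exemplars from the orthogonal-array trials and
    return the best one found. (OLPSO additionally derives a predicted
    combination via factor analysis; this sketch stops at the best
    tested trial.)"""
    trials = np.where(OA == 0, pbest, gbest)  # shape (4, 3)
    scores = [f(t) for t in trials]
    return trials[int(np.argmin(scores))]

# Hypothetical experiences of one particle.
pbest = np.array([0.1, 2.0, 0.3])
gbest = np.array([1.5, 0.2, 0.4])
exemplar = orthogonal_exemplar(pbest, gbest, sphere)
```

The exemplar found this way mixes the good dimensions of both experiences (here, the first dimension of `pbest` with the last two of `gbest`), and would then replace the separate `pbest`/`gbest` attraction terms in the particle's velocity update.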