
    A self-learning particle swarm optimizer for global optimization problems

    Copyright © 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund. Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, meaning that every particle in the swarm follows the same strategy. This monotonic learning pattern can leave an individual particle without the flexibility to cope with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies for coping with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO performs better than several peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/E060722/1 and EP/E060722/2.
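    The adaptive, per-particle strategy selection described in this abstract can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' SLPSO implementation: the four concrete strategies, the improvement-based reward, and the probability-update rule below are simplified placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Simple test objective (assumed here; SLPSO is evaluated on a much larger suite).
    return float(np.sum(x ** 2))

dim, n_particles, n_strategies = 10, 20, 4
w, c = 0.72, 1.49

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

# Each particle keeps its own selection probabilities over the strategies
# and adapts them from the reward (fitness improvement) each strategy yields.
probs = np.full((n_particles, n_strategies), 1.0 / n_strategies)

def move(i, s):
    # Four illustrative strategies: exploit the global best, exploit the own
    # personal best, learn from a random peer's personal best, or jump randomly.
    if s == 0:
        target = gbest
    elif s == 1:
        target = pbest[i]
    elif s == 2:
        target = pbest[rng.integers(n_particles)]
    else:
        return rng.uniform(-5, 5, dim), np.zeros(dim)
    v = w * vel[i] + c * rng.random(dim) * (target - pos[i])
    return pos[i] + v, v

for _ in range(200):
    for i in range(n_particles):
        s = rng.choice(n_strategies, p=probs[i])
        new_pos, new_vel = move(i, s)
        new_val = sphere(new_pos)
        reward = max(pbest_val[i] - new_val, 0.0)
        # Reinforce the chosen strategy in proportion to the improvement it produced.
        probs[i, s] += 0.1 * reward / (abs(pbest_val[i]) + 1e-12)
        probs[i] /= probs[i].sum()
        pos[i], vel[i] = new_pos, new_vel
        if new_val < pbest_val[i]:
            pbest_val[i], pbest[i] = new_val, new_pos
            if new_val < sphere(gbest):
                gbest = new_pos.copy()

print("best value:", pbest_val.min())
```

    The point mirrored from the abstract is that each particle maintains its own selection probabilities, so different particles can settle on different strategies depending on their local fitness landscape.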

    Optimization techniques in respiratory control system models

    The respiratory control system is one of the most complex physiological systems, and its modeling remains an open problem; different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system that set the breathing pattern by quantifying the respiratory work, and to assess the influence of using direct-search or evolutionary optimization algorithms on the adjustment of model parameters. The study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation; these data were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. The breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory durations, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. The algorithms were used for a double optimization: first to minimize the WOB, and second to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy, and precision. Results showed strong differences in the performance of the optimization algorithms depending on the constraints and topological features of the function to be optimized. In breathing-pattern optimization, sequential quadratic programming (SQP) showed the best performance and convergence speed when the respiratory work was low; SQP also made it easiest to implement multiple non-linear constraints expressed mathematically. Regarding adjustment of the model parameters to the experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best-quality solutions, with fast convergence and the best accuracy and precision in both models. CMA-ES reached the best fit because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, the results showed that an alternative model has a cost function that is more appropriate for minimizing WOB from a physiological viewpoint according to the experimental data.
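    The inner, constrained step of the double optimization described above (minimizing WOB subject to a ventilation demand) can be sketched with SciPy's SLSQP, a sequential quadratic programming method. The WOB cost, the constraint, the bounds, and the choice of decision variables below are simplified assumptions for illustration, not the models compared in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical, simplified work-of-breathing (WOB) rate: the study's actual model
# equations are not reproduced here. Decision variables are the tidal volume
# VT (L) and the inspiratory/expiratory durations TI, TE (s).
def wob(x):
    vt, ti, te = x
    elastic = 0.5 * 10.0 * vt ** 2                 # elastic work per breath (assumed)
    resistive = 2.0 * (vt / ti) ** 2 * ti          # resistive work per breath (assumed)
    return (elastic + resistive) * 60.0 / (ti + te)  # work rate per minute

# Non-linear constraint: minute ventilation VT * 60 / (TI + TE) must meet a
# prescribed demand (e.g., a demand raised by CO2 inhalation).
def ventilation_gap(x, demand=8.0):
    vt, ti, te = x
    return vt * 60.0 / (ti + te) - demand

x0 = np.array([0.5, 1.5, 2.5])                     # initial guess: VT, TI, TE
res = minimize(
    wob, x0, method="SLSQP",
    bounds=[(0.2, 2.0), (0.5, 4.0), (0.5, 6.0)],
    constraints=[{"type": "eq", "fun": ventilation_gap}],
)
print("optimal breathing pattern (VT, TI, TE):", res.x, "WOB rate:", res.fun)
```

    The outer stage, fitting the model parameters to the experimental breathing-pattern data (for which the abstract reports CMA-ES worked best), would wrap an objective like this inside a comparison against the measured tidal volume, timing, and ventilation; that layer is omitted here.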

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work combines the two in a mechanistic, parallel superposition, while research has shown that constructing superior exemplars for PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for "learning." This leads to a generalized "learning PSO" paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates, as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm is proposed, termed genetic learning PSO (GL-PSO). In particular, genetic operators are used to generate the exemplars from which particles learn, and, in turn, the historical search information of the particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the particles' historical information, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
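    A rough sketch of the two-layer idea follows: genetic operators act on the particles' historical best positions to breed an exemplar for each particle, and the particle then performs a standard velocity/position update toward its exemplar. The operator details here (uniform crossover between personal and global bests, random-reset mutation, pairwise selection against the previous exemplar) are simplified assumptions for illustration, not the paper's exact GL-PSO formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def rastrigin(x):
    # A benchmark-style objective (assumed here; the paper uses 42 test functions).
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

dim, n, iters = 10, 30, 300
w, c = 0.72, 1.49
lo, hi = -5.12, 5.12

pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rastrigin(p) for p in pos])
exemplar = pbest.copy()
exemplar_val = pbest_val.copy()

for _ in range(iters):
    g = pbest[pbest_val.argmin()]
    for i in range(n):
        # Crossover: mix this particle's personal best with the global best, gene by gene.
        mask = rng.random(dim) < 0.5
        child = np.where(mask, pbest[i], g)
        # Mutation: occasionally reset a dimension within the search bounds.
        mut = rng.random(dim) < 0.1
        child[mut] = rng.uniform(lo, hi, mut.sum())
        # Selection: keep the child as the exemplar only if it is fitter than the old one.
        child_val = rastrigin(child)
        if child_val < exemplar_val[i]:
            exemplar[i], exemplar_val[i] = child, child_val
        # Standard PSO update, learning from the constructed exemplar.
        vel[i] = w * vel[i] + c * rng.random(dim) * (exemplar[i] - pos[i])
        pos[i] = np.clip(pos[i] + vel[i], lo, hi)
        val = rastrigin(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), val

print("best value found:", pbest_val.min())
```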