
    A memetic particle swarm optimisation algorithm for dynamic multi-modal optimisation problems

    Copyright © 2011 Taylor & Francis. Many real-world optimisation problems are both dynamic and multi-modal, requiring an optimisation algorithm not only to find as many optima as possible under a given environment, but also to track their moving trajectories across dynamic environments. To address this requirement, this article investigates a memetic computing approach based on particle swarm optimisation for dynamic multi-modal optimisation problems (DMMOPs). Within the framework of the proposed algorithm, a new speciation method is employed to locate and track multiple peaks, and an adaptive local search method is hybridised to accelerate the exploitation of the species generated by the speciation method. In addition, a memory-based re-initialisation scheme is introduced to further enhance performance in dynamic multi-modal environments. Experiments based on the moving peaks benchmark problems are carried out to investigate the performance of the proposed algorithm in comparison with several state-of-the-art algorithms from the literature. The experimental results show the efficiency of the proposed algorithm for DMMOPs. This work was supported by the Key Program of the National Natural Science Foundation (NNSF) of China under Grant no. 70931001, the Funds for Creative Research Groups of China under Grant no. 71021061, the National Natural Science Foundation (NNSF) of China under Grants no. 71001018, 61004121 and 70801012, the Fundamental Research Funds for the Central Universities under Grant no. N090404020, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grants EP/E060722/01 and EP/E060722/02, and the Hong Kong Polytechnic University under Grant G-YH60.
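
    The speciation idea can be illustrated with a generic distance-based scheme (not necessarily the paper's own method): particles sorted by fitness become species seeds, and any particle within a species radius of an existing seed joins that seed's species. The function name and the radius parameter r_s below are illustrative assumptions.

```python
# A minimal sketch of distance-based speciation, assuming a radius parameter
# r_s; this is a generic scheme for illustration, not the paper's method.
import numpy as np

def speciate(positions, fitness, r_s):
    """Group particles into species around the fittest unassigned particles.

    positions : (n, d) array of particle positions
    fitness   : (n,) array, larger is better
    r_s       : species radius (assumed parameter)
    Returns a list of (seed_index, member_indices) pairs.
    """
    order = np.argsort(fitness)[::-1]          # best particles first
    seeds, species = [], []
    for i in order:
        for k, s in enumerate(seeds):
            if np.linalg.norm(positions[i] - positions[s]) <= r_s:
                species[k].append(i)           # join an existing species
                break
        else:
            seeds.append(i)                    # i becomes a new species seed
            species.append([i])
    return list(zip(seeds, species))
```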

    Adaptive particle swarm optimization

    An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with a faster convergence speed. APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify which of four defined evolutionary states, namely exploration, exploitation, convergence, and jumping out, the swarm is in at each generation. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it adds no significant design or implementation complexity.
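
    As a rough illustration of the run-time parameter control described above, the sketch below computes a distance-based evolutionary factor f and maps it to an inertia weight with the sigmoid w = 1/(1 + 1.5·e^(-2.6 f)) commonly associated with APSO; the constants and function names are assumptions of this sketch, not a verified re-implementation.

```python
# Evolutionary-state-driven inertia adaptation: f is derived from mean
# pairwise distances between particles; small f (converged swarm) gives a
# small inertia weight, large f (spread-out swarm) a large one.
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """f in [0, 1] from the mean distance of the global best to the swarm."""
    n = len(positions)
    mean_dist = np.array([
        np.mean([np.linalg.norm(positions[i] - positions[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    d_g, d_min, d_max = mean_dist[gbest_index], mean_dist.min(), mean_dist.max()
    return (d_g - d_min) / (d_max - d_min + 1e-12)

def adaptive_inertia(f):
    """Maps f = 0 to roughly 0.4 and f = 1 to roughly 0.9."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```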

    Density as the Segregation Mechanism in Fish School Search for Multimodal Optimization Problems

    Methods for Multimodal Optimization Problems (MMOPs) can be classified into three main approaches, according to the number and type of desired solutions. In general, methods can be applied to find: (1) only one global solution; (2) all global solutions; or (3) all local solutions of a given MMOP. The simultaneous capture of several solutions of an MMOP without parameter adjustment is still an open question. In this article, we discuss a density-based segregation mechanism for Fish School Search that enables the simultaneous capture of multiple optimal solutions of MMOPs with a single parameter. The new proposal is based on the vanilla version of the Fish School Search (FSS) algorithm, which is inspired by the behaviour of real fish schools. The performance of the new algorithm is evaluated and compared with that of other methods, namely NichePSO and Glowworm Swarm Optimization (GSO), on seven well-known two-dimensional benchmark functions. According to the results presented in this article, the new approach outperforms NichePSO and GSO on all benchmark functions.

    Orthogonal learning particle swarm optimization

    Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in the above two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
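
    To make the orthogonal experimental design idea concrete, the toy sketch below combines a particle's personal best and the neighbourhood best dimension by dimension using the L4(2^3) orthogonal array for a 3-dimensional problem and keeps the best trial as the exemplar. OLPSO additionally applies a factor-analysis step that is omitted here, and the function names are illustrative.

```python
# Toy orthogonal-design combination of pbest and gbest for 3 dimensions.
# Each row of the L4(2^3) array picks, per dimension, either the pbest value
# (0) or the gbest value (1); the best-scoring mix becomes the exemplar.
import numpy as np

L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def orthogonal_exemplar(pbest, gbest, objective):
    """Return the best of the four orthogonally designed pbest/gbest mixes."""
    trials = np.where(L4 == 0, pbest, gbest)     # (4, 3) candidate exemplars
    scores = [objective(t) for t in trials]
    return trials[int(np.argmin(scores))]        # assuming minimisation

# Example with the sphere function:
# exemplar = orthogonal_exemplar(np.array([1.0, 2.0, 3.0]),
#                                np.array([0.5, 2.5, 2.0]),
#                                lambda x: float(np.sum(x ** 2)))
```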

    A novel intelligent fault diagnosis method of rotating machinery based on deep learning and PSO-SVM

    A novel intelligent fault diagnosis method based on deep learning and a particle swarm optimization support vector machine (PSO-SVM) is proposed. The method uses a deep neural network (DNN) to extract fault features automatically and then uses a support vector machine to classify and diagnose faults based on the extracted features. The DNN consists of a stack of denoising autoencoders; through pre-training and fine-tuning of the DNN, features of the input parameters can be extracted automatically. A particle swarm optimization algorithm is used to select the best parameters for the SVM, and the features extracted from multiple hidden layers of the DNN are used as the input of the PSO-SVM. Experimental data are derived from the rolling bearing test platform of West University. The results demonstrate that deep learning can automatically extract fault features, which removes the need for manual feature selection, various signal processing technologies, and diagnosis experience, and improves the efficiency of fault feature extraction. Under small-sample conditions, combining the features of multiple hidden layers as the input to the PSO-SVM can significantly increase the accuracy of fault diagnosis.
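
    A minimal sketch of the PSO-SVM stage only, assuming the DNN has already produced a feature matrix X with labels y: a plain PSO searches log-scaled (C, gamma) values for an RBF support vector machine scored by cross-validation accuracy with scikit-learn. Swarm size, bounds, and coefficients are illustrative, not the paper's settings.

```python
# PSO over SVM hyperparameters (C, gamma), scored by 3-fold CV accuracy.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pso_svm(X, y, n_particles=10, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-3.0, -4.0]), np.array([3.0, 1.0])   # log10(C), log10(gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def score(p):
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1], kernel="rbf")
        return cross_val_score(clf, X, y, cv=3).mean()

    fit = np.array([score(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = int(np.argmax(fit))                                   # index of global best
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (pbest[g] - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([score(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        g = int(np.argmax(pbest_fit))
    return 10 ** pbest[g][0], 10 ** pbest[g][1]               # best C, gamma found
```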

    A Conjunction Method of Wavelet Transform-Particle Swarm Optimization-Support Vector Machine for Streamflow Forecasting

    Streamflow forecasting plays an important role in water resource management and reservoir operation. The support vector machine (SVM) is an appropriate method for streamflow prediction due to its versatility, robustness, and effectiveness. In this study, a wavelet transform particle swarm optimization support vector machine (WT-PSO-SVM) model is proposed and applied to streamflow time series prediction. Firstly, the streamflow time series is decomposed into several detail components (Ds) and an approximation (A3) at three resolution levels (2¹, 2², 2³) using the Daubechies (db3) discrete wavelet. Correlation coefficients between each D sub-series and the original monthly streamflow time series are calculated, and the D components with high correlation coefficients (D3) are added to the approximation (A3) to form the input values of the SVM model. Secondly, PSO is employed to select the optimal parameters C, ε, and σ of the SVM model. Finally, the WT-PSO-SVM models are trained and tested on the monthly streamflow time series of Tangnaihai Station, located on the upper Yellow River, from January 1956 to December 2008. The test results indicate that the WT-PSO-SVM approach provides a superior alternative to the single SVM model for forecasting monthly streamflow in situations where models of the internal structure of the watershed cannot be formulated.
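
    The decomposition-and-selection stage might look like the following sketch, assuming PyWavelets: the series is decomposed with a db3 wavelet at three levels, each detail is reconstructed to full length, and details whose correlation with the raw series exceeds a threshold are added to the approximation to form the SVM input series. The threshold and function name are illustrative.

```python
# Wavelet decomposition of a streamflow series and correlation-based
# selection of detail components (sketch under the assumptions above).
import numpy as np
import pywt

def wavelet_inputs(flow, wavelet="db3", level=3, corr_threshold=0.2):
    flow = np.asarray(flow, dtype=float)
    coeffs = pywt.wavedec(flow, wavelet, level=level)        # [A3, D3, D2, D1]

    def reconstruct(keep):
        parts = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(parts, wavelet)[: len(flow)]

    series = reconstruct(0)                                   # approximation A3
    for i in range(1, len(coeffs)):                           # details D3, D2, D1
        detail = reconstruct(i)
        if abs(np.corrcoef(detail, flow)[0, 1]) >= corr_threshold:
            series = series + detail                          # add selected detail
    return series                                             # combined SVM input series
```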

    An analysis of the inertia weight parameter for binary particle swarm optimization

    In particle swarm optimization, the inertia weight is an important parameter for controlling its search capability. There have been intensive studies of the inertia weight in continuous optimization, but little attention has been paid to the binary case. This study comprehensively investigates the effect of the inertia weight on the performance of binary particle swarm optimization, from both theoretical and empirical perspectives. A mathematical model is proposed to analyze the behavior of binary particle swarm optimization, based on which several lemmas and theorems on the effect of the inertia weight are derived. Our research findings suggest that in the binary case, a smaller inertia weight enhances the exploration capability while a larger inertia weight encourages exploitation. Consequently, this paper proposes a new adaptive inertia weight scheme for binary particle swarm optimization. This scheme allows the search process to start with exploration and gradually move towards exploitation by linearly increasing the inertia weight. The experimental results on 0/1 knapsack problems show that binary particle swarm optimization with the new increasing inertia weight scheme performs significantly better than with the conventional decreasing and constant inertia weight schemes. This study verifies the efficacy of an increasing inertia weight in binary particle swarm optimization.
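
    A minimal sketch of binary PSO with the linearly increasing inertia weight advocated above, applied to a 0/1 knapsack instance; the sigmoid transfer function is the conventional one, and the schedule endpoints and acceleration coefficients are illustrative values rather than the paper's settings.

```python
# Binary PSO for 0/1 knapsack with an inertia weight that rises linearly
# from w_start to w_end (exploration first, exploitation later).
import numpy as np

def binary_pso_knapsack(values, weights, capacity,
                        n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    values, weights = np.asarray(values), np.asarray(weights)
    d = len(values)
    x = rng.integers(0, 2, size=(n_particles, d))
    v = np.zeros((n_particles, d))

    def fitness(bits):
        return bits @ values if bits @ weights <= capacity else 0.0

    fit = np.array([fitness(p) for p in x])
    pbest, pbest_fit = x.copy(), fit.copy()
    g = int(np.argmax(fit))
    w_start, w_end = 0.4, 0.9                                 # increasing schedule
    for t in range(n_iter):
        w = w_start + (w_end - w_start) * t / max(n_iter - 1, 1)
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (pbest[g] - x)
        prob = 1.0 / (1.0 + np.exp(-v))                       # sigmoid transfer
        x = (rng.random((n_particles, d)) < prob).astype(int)
        fit = np.array([fitness(p) for p in x])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = int(np.argmax(pbest_fit))
    return pbest[g], pbest_fit[g]
```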

    Bees algorithm for multimodal function optimisation

    The aim of multimodal optimisation is to find the significant optima of a multimodal objective function, including its global optimum. Many real-world applications are multimodal optimisation problems requiring multiple optimal solutions. The Bees Algorithm is a global optimisation procedure inspired by the foraging behaviour of honeybees. In this paper, several procedures are introduced to enhance the algorithm's capability to find multiple optima in multimodal optimisation problems. In the proposed Bees Algorithm for multimodal optimisation, a dynamic colony size is permitted, to automatically adapt the search effort to different objective functions. A local search approach called the balanced search technique is also proposed to speed up the algorithm. In addition, two procedures, radius estimation and optima elitism, are added to enhance the Bees Algorithm's ability to locate unevenly distributed optima and to eliminate insignificant local optima, respectively. The performance of the modified Bees Algorithm is evaluated on well-known benchmark problems, and the results are compared with those obtained by several other state-of-the-art algorithms. The results indicate that the proposed algorithm inherits excellent properties from the standard Bees Algorithm and, thanks to the introduced modifications, achieves notable efficiency in solving multimodal optimisation problems.
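
    For orientation, the sketch below shows the core loop of the standard Bees Algorithm, without the multimodal extensions introduced in the paper: scout bees sample the space, the best sites recruit forager bees for a neighbourhood search, and the remaining scouts are re-initialised randomly. All parameter values and names are illustrative.

```python
# Core loop of the standard Bees Algorithm (minimisation), as a sketch.
import numpy as np

def bees_algorithm(objective, bounds, n_scouts=50, n_sites=10, n_elite=3,
                   nep=20, nsp=10, ngh=0.1, n_iter=100, seed=0):
    """bounds is a (d, 2) array of per-dimension [lower, upper] limits."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    scouts = rng.uniform(lo, hi, size=(n_scouts, len(lo)))
    for _ in range(n_iter):
        order = np.argsort([objective(s) for s in scouts])    # best sites first
        scouts = scouts[order]
        for i in range(n_sites):
            recruits = nep if i < n_elite else nsp            # more bees for elite sites
            patch = scouts[i] + rng.uniform(-ngh, ngh,
                                            size=(recruits, len(lo))) * (hi - lo)
            patch = np.clip(patch, lo, hi)
            best = min(patch, key=objective)
            if objective(best) < objective(scouts[i]):
                scouts[i] = best                              # keep the improved site
        # abandon the remaining sites and send those scouts out at random
        scouts[n_sites:] = rng.uniform(lo, hi, size=(n_scouts - n_sites, len(lo)))
    return min(scouts, key=objective)
```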