
    Feedback learning particle swarm optimization

    This is the author's version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below. Copyright © 2011 Elsevier.

    In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. Firstly, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Secondly, acceleration coefficients are determined not only by the generation number but also by the search environment described by each particle’s history best fitness information. Thirdly, the feedback fitness information of each particle is used to automatically design the learning probabilities. Fourthly, an elite stochastic learning (ELS) method is used to refine the solution. The FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal and composite benchmark functions with or without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity and the proposed mechanism are discussed in detail.

    This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
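
    The abstract only states that the inertia weight follows a designed quadratic function of the generation number; the exact form is not given there. The sketch below is therefore a minimal illustration, assuming a quadratic decay from w_max to w_min plugged into the standard PSO velocity update; the function names and parameter values are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def quadratic_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
        """Illustrative quadratic schedule: decays from w_max to w_min as a
        quadratic function of the generation t (the paper's exact form is
        not given in the abstract)."""
        return (w_max - w_min) * (1.0 - t / t_max) ** 2 + w_min

    def pso_step(x, v, pbest, gbest, t, t_max, c1=2.0, c2=2.0):
        """One standard PSO velocity/position update using the inertia
        weight above; x, v, pbest are (n_particles, dim) arrays and gbest
        is a (dim,) array."""
        w = quadratic_inertia_weight(t, t_max)
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v, v
    ```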

    Algorithms Inspired by Nature: A Survey

    Nature is known to be the best optimizer: natural processes more often than not reach an optimal equilibrium, and scientists have always strived to understand and model such processes. Thus, many algorithms exist today that are inspired by nature. Many of these algorithms and heuristics can be used to solve problems for which no polynomial-time algorithms exist, such as Job Shop Scheduling and many other combinatorial optimization problems. We discuss some of these algorithms and heuristics and how they help us solve complex problems of practical importance.

    Improving PSO Global Method for Feature Selection According to Iterations Global Search and Chaotic Theory

    Building a simple model by choosing a limited number of features, with the purpose of reducing the computational complexity of the algorithms involved in classification, is one of the main issues in machine learning and data mining. The aim of Feature Selection (FS) is to reduce the number of redundant and irrelevant features and to improve the accuracy of classification on a data set. We propose an efficient ISPSO-GLOBAL (Improved Seeding Particle Swarm Optimization GLOBAL) method which inspects specified iterations to identify prominent features and stores them in a storage list. The goal is to find informative features, based on their iteration frequency and favorable fitness, for the next generation while maintaining high exploration. Our method exploits a new initialization strategy in PSO that improves the search of the space and utilizes chaos theory to enhance the population initialization; we also offer a new formula to determine the feature-subset size used in the proposed method. Our experiments with real-world data sets show that the performance of ISPSO-GLOBAL is superior to state-of-the-art methods on most of the data sets.
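
    The abstract says chaos theory is used to enhance the population initialization but does not specify the map. The sketch below assumes the commonly used logistic map and rescales its trajectory into the search bounds; the function name and constants are hypothetical, not the paper's actual procedure.

    ```python
    import numpy as np

    def logistic_map_init(n_particles, dim, lower, upper, r=4.0, warmup=100):
        """Chaotic initialization via the logistic map z <- r*z*(1-z), a
        common choice (the paper's exact map is not stated). The chaotic
        sequence in (0, 1) is rescaled to [lower, upper]."""
        rng = np.random.default_rng(42)
        z = rng.uniform(0.1, 0.9, size=dim)   # random seed for the map
        for _ in range(warmup):               # discard the transient
            z = r * z * (1.0 - z)
        pop = np.empty((n_particles, dim))
        for i in range(n_particles):
            z = r * z * (1.0 - z)             # one chaotic step per particle
            pop[i] = lower + z * (upper - lower)
        return pop
    ```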

    Controller design for synchronization of an array of delayed neural networks using a controllable probabilistic PSO

    This is the post-print version of the article. Copyright © 2011 Elsevier.

    In this paper, a controllable probabilistic particle swarm optimization (CPPSO) algorithm is introduced based on Bernoulli stochastic variables and a competitive penalized method. The CPPSO algorithm is proposed to solve optimization problems and is then applied to design the memoryless feedback controller, which is used in the synchronization of an array of delayed neural networks (DNNs). The learning strategies occur in a random way governed by Bernoulli stochastic variables. The expectations of the Bernoulli stochastic variables are automatically updated by the search environment. The proposed method not only keeps the diversity of the swarm, but also maintains the rapid convergence of the CPPSO algorithm according to the competitive penalized mechanism. In addition, the convergence rate is improved because the inertia weight of each particle is automatically computed according to the feedback of the fitness value. The efficiency of the proposed CPPSO algorithm is demonstrated by comparing it with some well-known PSO algorithms on benchmark test functions with and without rotations. In the end, the proposed CPPSO algorithm is used to design the controller for the synchronization of an array of continuous-time delayed neural networks.

    This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant No. GR/S27658/01, an International Joint Project sponsored by the Royal Society of the U.K., and the Alexander von Humboldt Foundation of Germany.
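
    The abstract describes learning strategies switched by Bernoulli stochastic variables whose expectations are updated by the search environment, but it does not give the update rules. The sketch below only illustrates the gating idea under assumed rules: each particle draws a Bernoulli variable to choose between its personal best and the global best as the learning exemplar, and the success probability is nudged by simple fitness feedback. Every name and the probability update are assumptions, not the CPPSO equations.

    ```python
    import numpy as np

    def bernoulli_gated_step(x, v, pbest, gbest, p, w=0.7, c=1.5, lr=0.05,
                             improved=None):
        """Bernoulli-gated learning (illustrative only, not the CPPSO rules).
        p[i] is the expectation of particle i's Bernoulli variable; a draw of
        1 means 'learn from gbest', a draw of 0 means 'learn from pbest'."""
        n, d = x.shape
        b = (np.random.rand(n) < p).astype(float)[:, None]  # Bernoulli draws
        r = np.random.rand(n, d)
        exemplar = b * gbest + (1.0 - b) * pbest
        v = w * v + c * r * (exemplar - x)
        x = x + v
        if improved is not None:
            # assumed feedback rule: raise p for particles that improved,
            # lower it otherwise (the paper's actual update is not given)
            p = np.clip(p + lr * (2 * improved.astype(float) - 1), 0.05, 0.95)
        return x, v, p
    ```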

    Particle Swarm Optimization: A survey of historical and recent developments with hybridization perspectives

    Particle Swarm Optimization (PSO) is a metaheuristic global optimization paradigm that has gained prominence in the last two decades due to its ease of application to unsupervised, complex multidimensional problems which cannot be solved using traditional deterministic algorithms. The canonical particle swarm optimizer is based on the flocking behavior and social co-operation of birds and fish schools and draws heavily from the evolutionary behavior of these organisms. This paper provides a thorough survey of the PSO algorithm with special emphasis on the development, deployment and improvements of its most basic as well as some of the state-of-the-art implementations. Concepts and directions on choosing the inertia weight, constriction factor, cognition and social weights, and perspectives on convergence, parallelization, elitism, niching and discrete optimization, as well as neighborhood topologies, are outlined. Hybridization attempts with other evolutionary and swarm paradigms in selected applications are covered, and an up-to-date review is put forward for the interested reader.

    Comment: 34 pages, 7 tables
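
    For reference, the constriction factor mentioned here is usually the Clerc–Kennedy formulation; the following is the standard textbook form (not code from this survey):

    ```python
    import math
    import numpy as np

    def constriction_coefficient(c1=2.05, c2=2.05):
        """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4.
        For c1 = c2 = 2.05 it evaluates to roughly 0.7298."""
        phi = c1 + c2
        return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

    def constricted_update(x, v, pbest, gbest, c1=2.05, c2=2.05):
        """Canonical constricted update:
        v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x)); x <- x + v."""
        chi = constriction_coefficient(c1, c2)
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        return x + v, v
    ```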

    A new SSO-based Algorithm for the Bi-Objective Time-constrained task Scheduling Problem in Cloud Computing Services

    Cloud computing distributes computing tasks across numerous distributed resources for large-scale computation. Task scheduling is a long-standing problem in cloud-computing services that determines the quality, availability, reliability, and capability of cloud computing. This paper is an extension of, and a correction to, our previous conference paper entitled "Multi Objective Scheduling in Cloud Computing Using MOSSO", published in the 2018 IEEE Congress on Evolutionary Computation. Additional algorithms, tests, and comparisons have been implemented to solve the bi-objective time-constrained task scheduling problem more efficiently. Furthermore, this paper develops a new SSO-based algorithm, the bi-objective simplified swarm optimization (BSSO), which fixes the error in the previous SSO-based algorithm for the task-scheduling problem. The results of the new experiments show that the proposed BSSO outperforms well-known existing algorithms, e.g., NSGA-II, MOPSO, and MOSSO, in terms of convergence, diversity, the number of temporary nondominated solutions obtained, and the number of true nondominated solutions obtained. These results indicate that the proposed BSSO successfully achieves the aim of this work.
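
    The comparison above is phrased in terms of nondominated solutions. As a small, generic illustration (not the paper's evaluation code), the dominance test for a bi-objective minimization problem such as time-constrained task scheduling can be sketched as:

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimization):
        a is no worse in every objective and strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def nondominated(front):
        """Filter a list of objective vectors down to the nondominated ones."""
        return [a for a in front
                if not any(dominates(b, a) for b in front if b is not a)]
    ```

    For example, nondominated([(3, 5), (2, 7), (4, 4), (3, 4)]) returns [(2, 7), (3, 4)], dropping the two schedules dominated by (3, 4).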

    Variable neighbourhood search for the minimum labelling Steiner tree problem

    We present a study of heuristic solution approaches to the minimum labelling Steiner tree problem, an NP-hard graph problem related to the minimum labelling spanning tree problem. Given an undirected labelled connected graph, the aim is to find a spanning tree covering a given subset of nodes of the graph whose edges have the smallest number of distinct labels. Such a model may be used to represent many real-world problems in telecommunications and multimodal transportation networks. Several metaheuristics are proposed and evaluated. The approaches are compared to the widely adopted Pilot Method, and it is shown that the Variable Neighbourhood Search metaheuristic is the most effective approach to the problem, obtaining high-quality solutions in short computational running times.
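
    As a rough sketch of the metaheuristic that performed best here, a generic Variable Neighbourhood Search loop looks like the following; shake, local_search and cost are problem-specific placeholders (for this problem they would operate on label sets), and none of this reproduces the paper's actual neighbourhood structures.

    ```python
    import time

    def vns(initial, shake, local_search, cost, k_max=3, time_limit=10.0):
        """Generic VNS skeleton: shake within neighbourhood k, improve with
        local search, restart from k = 1 on improvement, else enlarge k.
        shake(s, k), local_search(s) and cost(s) are user-supplied callables."""
        best = initial
        start = time.time()
        while time.time() - start < time_limit:
            k = 1
            while k <= k_max:
                candidate = local_search(shake(best, k))
                if cost(candidate) < cost(best):
                    best, k = candidate, 1   # improvement: recentre the search
                else:
                    k += 1                   # no improvement: larger shake
        return best
    ```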

    Comparison of Evolutionary Optimization Algorithms for FM-TV Broadcasting Antenna Array Null Filling

    Broadcasting antenna array null filling is a very challenging problem in antenna design optimization. This paper compares five antenna design optimization algorithms (Differential Evolution, Particle Swarm Optimization, Taguchi, Invasive Weed, and Adaptive Invasive Weed) as solutions to the antenna array null filling problem. The algorithms compared are evolutionary algorithms, which use mechanisms inspired by biological evolution such as reproduction, mutation, recombination, and selection. The focus of the comparison is on the algorithm with the best results; nevertheless, it becomes clear that the algorithm which produces the best fitness (Invasive Weed Optimization) requires very substantial computational resources due to its random-search nature.
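
    As an example of the evolutionary mechanisms mentioned (mutation, recombination, selection), a minimal DE/rand/1/bin generation is sketched below; it is a generic textbook form, not the antenna-specific setup or parameter choices used in the paper.

    ```python
    import numpy as np

    def de_rand_1_bin(pop, fitness, f=0.5, cr=0.9):
        """One generation of DE/rand/1/bin (illustrative only). pop has shape
        (n, d) with n >= 4; fitness(x) is the objective to minimize."""
        n, d = pop.shape
        new_pop = pop.copy()
        for i in range(n):
            r1, r2, r3 = np.random.choice([j for j in range(n) if j != i],
                                          3, replace=False)
            mutant = pop[r1] + f * (pop[r2] - pop[r3])       # mutation
            cross = np.random.rand(d) < cr
            cross[np.random.randint(d)] = True               # keep one gene
            trial = np.where(cross, mutant, pop[i])          # recombination
            if fitness(trial) <= fitness(pop[i]):            # selection
                new_pop[i] = trial
        return new_pop
    ```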

    Enhanced Estimation of Autoregressive Wind Power Prediction Model Using Constriction Factor Particle Swarm Optimization

    Accurate forecasting is important for cost-effective and efficient monitoring and control of renewable-energy-based power generation. Wind-based power is one of the most difficult energy sources to predict accurately, due to the widely varying and unpredictable nature of wind energy. Although Autoregressive (AR) techniques have been widely used to create wind power models, they have shown limited accuracy in forecasting, as well as difficulty in determining the correct parameters for an optimized AR model. In this paper, Constriction Factor Particle Swarm Optimization (CF-PSO) is employed to optimally determine the parameters of an AR model for accurate prediction of the wind power output behaviour. The appropriate lag order of the proposed model is selected based on the Akaike information criterion. The performance of the proposed PSO-based AR model is compared with four well-established approaches widely used for error minimization of AR models: the forward-backward, geometric lattice, least-squares and Yule-Walker approaches. To validate the proposed approach, real-life wind power data of the Capital Wind Farm were obtained from the Australian Energy Market Operator. Experimental evaluation based on a number of different datasets demonstrates that the performance of the AR model is significantly improved compared with the benchmark methods.

    Comment: The 9th IEEE Conference on Industrial Electronics and Applications (ICIEA) 201
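
    To make the pipeline concrete, the sketch below shows the one-step-ahead mean squared error of an AR(p) model, which is the kind of fitness a constriction-factor PSO would minimize over the coefficient vector, together with a Gaussian AIC variant for comparing lag orders. The formulation, variable names and the exact AIC variant are assumptions for illustration; the paper's data handling and CF-PSO settings are not reproduced.

    ```python
    import numpy as np

    def ar_mse(coeffs, series, p):
        """Mean squared one-step-ahead error of an AR(p) model
        x_t ~ c + a_1*x_{t-1} + ... + a_p*x_{t-p}, coeffs = [c, a_1, ..., a_p]."""
        coeffs = np.asarray(coeffs, dtype=float)
        series = np.asarray(series, dtype=float)
        c, a = coeffs[0], coeffs[1:]
        preds = [c + a @ series[t - p:t][::-1] for t in range(p, len(series))]
        resid = series[p:] - np.array(preds)
        return float(np.mean(resid ** 2))

    def aic(series, coeffs, p):
        """Gaussian AIC variant for comparing candidate lag orders:
        n * ln(residual variance) + 2 * (p + 1) free parameters."""
        n = len(series) - p
        return n * np.log(ar_mse(coeffs, series, p)) + 2 * (p + 1)
    ```

    In a CF-PSO setting, each particle would encode the vector [c, a_1, ..., a_p] and ar_mse would serve as its fitness; the lag order p with the lowest AIC would then be retained.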

    LibOPT: An Open-Source Platform for Fast Prototyping Soft Optimization Techniques

    Optimization techniques play an important role in several scientific and real-world applications, and are therefore of great interest to the community. As a consequence, a number of open-source libraries are available in the literature, which ends up fostering the research and development of new techniques and applications. In this work, we present LibOPT, a new library for the implementation and fast prototyping of nature-inspired techniques. Currently, the library implements 15 techniques and 112 benchmarking functions, and it also supports 11 hypercomplex-based optimization approaches, which makes it one of the first of its kind. We show how one can easily use and also implement new techniques in LibOPT under the C paradigm. Examples with source-code samples using benchmarking functions are provided.
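
    As an illustration of the kind of benchmarking function such libraries expose, here is the standard Rastrigin test function; this is a plain Python sketch for illustration only and does not reproduce LibOPT's actual C API.

    ```python
    import math

    def rastrigin(x):
        """Rastrigin benchmark: f(x) = 10*d + sum(x_i^2 - 10*cos(2*pi*x_i));
        the global minimum is f(0, ..., 0) = 0."""
        return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi)
                                   for xi in x)
    ```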