
    Feedback learning particle swarm optimization

    This is the author’s version of a work that was accepted for publication in Applied Soft Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published and is available at the link below. Copyright © Elsevier 2011.

    In this paper, a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) is developed to solve optimization problems. The proposed FLPSO-QIW consists of four steps. Firstly, the inertia weight is calculated by a designed quadratic function instead of the conventional linearly decreasing function. Secondly, the acceleration coefficients are determined not only by the generation number but also by the search environment described by each particle’s historical best fitness information. Thirdly, the feedback fitness information of each particle is used to automatically design the learning probabilities. Fourthly, an elite stochastic learning (ELS) method is used to refine the solution. FLPSO-QIW has been comprehensively evaluated on 18 unimodal, multimodal and composite benchmark functions, with or without rotation. Compared with various state-of-the-art PSO algorithms, the performance of FLPSO-QIW is promising and competitive. The effects of parameter adaptation, parameter sensitivity and the proposed mechanisms are discussed in detail.

    This research was partially supported by the National Natural Science Foundation of PR China (Grant No 60874113), the Research Fund for the Doctoral Program of Higher Education (Grant No 200802550007), the Key Creative Project of Shanghai Education Community (Grant No 09ZZ66), the Key Foundation Project of Shanghai (Grant No 09JC1400700), the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany.
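    As a minimal sketch of the kind of update described above (not the authors' code), the snippet below plugs a quadratic inertia-weight schedule, in place of the conventional linearly decreasing one, into a standard PSO velocity/position update; the start/end weights, acceleration coefficients and the exact quadratic shape are illustrative assumptions rather than the FLPSO-QIW formulas.

        # Sketch only: quadratic inertia-weight schedule inside a plain PSO update.
        # Coefficients are assumed example values, not those of FLPSO-QIW.
        import random

        def quadratic_inertia(t, t_max, w_start=0.9, w_end=0.4):
            # Decay w from w_start to w_end along a quadratic curve in t.
            r = t / t_max
            return w_end + (w_start - w_end) * (1.0 - r) ** 2

        def pso_step(x, v, pbest, gbest, t, t_max, c1=2.0, c2=2.0):
            w = quadratic_inertia(t, t_max)
            new_x, new_v = [], []
            for d in range(len(x)):
                r1, r2 = random.random(), random.random()
                vd = (w * v[d]
                      + c1 * r1 * (pbest[d] - x[d])
                      + c2 * r2 * (gbest[d] - x[d]))
                new_v.append(vd)
                new_x.append(x[d] + vd)
            return new_x, new_v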

    Force-imitated particle swarm optimization using the near-neighbor effect for locating multiple optima

    Copyright © Elsevier Inc. All rights reserved.

    Multimodal optimization problems pose a great challenge to the particle swarm optimization (PSO) community: locating multiple optima simultaneously in the search space. In this paper, the motion principle of particles in PSO is extended by using the near-neighbor effect from mechanical theory, a universal phenomenon in nature and society. In the proposed near-neighbor effect based force-imitated PSO (NN-FPSO) algorithm, each particle explores the promising region where it resides under the composite forces produced by the “near-neighbor attractor” and the “near-neighbor repeller”, which are selected from the set of memorized personal best positions and the current swarm based on the principles of “superior-and-nearer” and “inferior-and-nearer”, respectively. These two forces pull and push a particle to search for the nearby optimum. Hence, particles can simultaneously locate multiple optima quickly and precisely. Experiments are carried out to investigate the performance of NN-FPSO in comparison with a number of state-of-the-art PSO algorithms for locating multiple optima over a series of multimodal benchmark test functions. The experimental results indicate that the proposed NN-FPSO algorithm can efficiently locate multiple optima in multimodal fitness landscapes.

    This work was supported in part by the Key Program of National Natural Science Foundation (NNSF) of China under Grant 70931001, Grant 70771021, and Grant 70721001, the National Natural Science Foundation (NNSF) of China for Youth under Grant 61004121 and Grant 70771021, the Science Fund for Creative Research Group of NNSF of China under Grant 60821063, the PhD Programs Foundation of Ministry of Education of China under Grant 200801450008, and in part by the Engineering and Physical Sciences Research Council (EPSRC) of UK under Grant EP/E060722/1 and Grant EP/E060722/2.
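    To make the attractor/repeller idea concrete, here is a minimal sketch (not the authors' implementation) of selecting a near-neighbor attractor and a near-neighbor repeller for one particle and combining them into a composite force; the fitness-per-distance score and the force weights k_a and k_r are illustrative assumptions, since the paper defines its own selection and combination rules.

        # Sketch only: near-neighbor attractor/repeller selection and composite force.
        import math

        def dist(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b))) + 1e-12

        def select_neighbors(x, fx, candidates, fitness):
            # Minimization assumed: "better" means lower fitness.
            better = [(c, f) for c, f in zip(candidates, fitness) if f < fx]
            worse = [(c, f) for c, f in zip(candidates, fitness) if f > fx]
            # "Superior-and-nearer": largest fitness improvement per unit distance.
            attractor = max(better, key=lambda cf: (fx - cf[1]) / dist(x, cf[0]),
                            default=(x, fx))[0]
            # "Inferior-and-nearer": largest fitness deterioration per unit distance.
            repeller = max(worse, key=lambda cf: (cf[1] - fx) / dist(x, cf[0]),
                           default=(x, fx))[0]
            return attractor, repeller

        def composite_force(x, attractor, repeller, k_a=1.0, k_r=0.5):
            # Pull toward the attractor and push away from the repeller (assumed weights).
            return [k_a * (a - xi) - k_r * (r - xi)
                    for xi, a, r in zip(x, attractor, repeller)]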

    Seeking multiple solutions:an updated survey on niching methods and their applications

    Multi-Modal Optimization (MMO), which aims to locate multiple optimal (or near-optimal) solutions in a single simulation run, has practical relevance to problem solving across many fields. Population-based meta-heuristics have been shown to be particularly effective in solving MMO problems if equipped with specifically designed diversity-preserving mechanisms, commonly known as niching methods. This paper provides an updated survey on niching methods. The paper first revisits the fundamental concepts of niching and its most representative schemes, then reviews the most recent developments in niching methods, including novel and hybrid methods, performance measures, and benchmarks for their assessment. Furthermore, the paper surveys previous attempts at leveraging the capabilities of niching to facilitate various optimization tasks (e.g., multi-objective and dynamic optimization) and machine learning tasks (e.g., clustering, feature selection, and learning ensembles). A list of successful applications of niching methods to real-world problems is presented to demonstrate the capability of niching methods to provide solutions that are difficult for other optimization methods to offer. The significant practical value of niching methods is clearly exemplified through these applications. Finally, the paper poses challenges and research questions on niching that are yet to be appropriately addressed. Providing answers to these questions is crucial before we can bring more of the fruitful benefits of niching to real-world problem solving.
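    As a concrete example of the diversity-preserving mechanisms such surveys cover, the snippet below sketches one classic niching scheme, fitness sharing, in which an individual's raw fitness is derated by how crowded its niche is; the sharing radius and power are arbitrary example values, and this is only one of many schemes reviewed.

        # Sketch of classic fitness sharing (maximization assumed).
        import math

        def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
            def d(a, b):
                return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
            shared = []
            for i, xi in enumerate(population):
                # Niche count: how crowded the region within sigma_share of xi is
                # (each neighbour contributes more the closer it is; xi itself adds 1).
                niche_count = 0.0
                for xj in population:
                    dij = d(xi, xj)
                    if dij < sigma_share:
                        niche_count += 1.0 - (dij / sigma_share) ** alpha
                shared.append(raw_fitness[i] / niche_count)
            return shared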

    Knowledge Migration Strategies for Optimization of Multi-Population Cultural Algorithm

    Evolutionary Algorithms (EAs) are meta-heuristic algorithms used for the optimization of complex problems. The Cultural Algorithm (CA) is an EA that incorporates knowledge into the optimization process. A CA with multiple population spaces, each combining cultural and genetic evolution to obtain better solutions, is known as a Multi-Population Cultural Algorithm (MPCA). An MPCA makes it possible to introduce a diversity of knowledge in a dynamic and heterogeneous environment. In an MPCA each population represents a solution space, and an individual belonging to a given population can migrate to another population to introduce new knowledge that influences the individuals there. In this thesis, we provide different migration strategies, inspired by game-theoretic models, to improve the quality of solutions. Migration among the different populations in an MPCA can address the problem of knowledge sharing among population spaces. We introduce five migration strategies drawn from the field of economics. The principal idea behind incorporating these strategies is to improve the rate of convergence, increase diversity, explore the search space more thoroughly, avoid premature convergence, and escape from local optima. The strategies are taken from an economics background because they allow an individual and a population to use their knowledge to decide whether to cooperate or defect with other individuals and populations. We tested the proposed algorithms on the CEC 2015 expensive benchmark problems, a set of 15 functions covering varied function categories. The results show that the proposed algorithms lead to better solutions on problems of a complex nature and higher dimensions. For 10-dimensional problems the proposed strategies achieve better results on 7 out of 15 functions, and for 30-dimensional problems on 12 out of 15 functions, when compared to the existing algorithms.
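    As a rough skeleton of where such strategies plug in (not the strategies developed in the thesis), the snippet below shows a generic migration step between sub-populations in which each pair of populations first makes a cooperate/defect decision through a pluggable function; replacing a destination's worst individuals with a source's best individuals, and the minimization assumption, are illustrative choices.

        # Sketch only: generic inter-population migration with a pluggable
        # cooperate/defect decision (minimization assumed).
        def migrate(populations, fitness, decide_cooperate, n_migrants=1):
            for src in range(len(populations)):
                for dst in range(len(populations)):
                    if src == dst:
                        continue
                    if decide_cooperate(src, dst, populations, fitness):
                        # Best individuals of src (lowest fitness) replace the
                        # worst individuals of dst (highest fitness).
                        best_src = sorted(range(len(populations[src])),
                                          key=lambda i: fitness[src][i])
                        worst_dst = sorted(range(len(populations[dst])),
                                           key=lambda i: fitness[dst][i], reverse=True)
                        for k in range(min(n_migrants, len(best_src), len(worst_dst))):
                            i, j = best_src[k], worst_dst[k]
                            populations[dst][j] = list(populations[src][i])
                            fitness[dst][j] = fitness[src][i]
            return populations, fitness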

    Transitional Particle Swarm Optimization

    A new variation of particle swarm optimization (PSO), termed transitional PSO (T-PSO), is proposed here. T-PSO attempts to improve PSO via its iteration strategy. Traditionally, PSO adopts either the synchronous or the asynchronous iteration strategy. Both of these iteration strategies have their own strengths and weaknesses: the synchronous strategy has a reputation for better exploitation, while the asynchronous strategy is stronger in exploration. The particles of T-PSO start with asynchronous updates to encourage more exploration at the start of the search. If no better solution is found for a number of iterations, the iteration strategy is changed to synchronous updates to allow fine tuning by the particles. The results show that T-PSO is ranked better than the traditional PSOs.
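    A minimal sketch of this transition (illustrative, not the authors' code) is given below: the loop starts with asynchronous updates, where the global best is refreshed after every particle, and switches to synchronous updates, where it is refreshed once per iteration, after stall_limit iterations without improvement; the stall threshold and the evaluate/update_particle callables are assumptions standing in for the usual PSO machinery.

        # Sketch only: asynchronous-to-synchronous transition (minimization assumed).
        def t_pso_loop(particles, evaluate, update_particle, max_iter, stall_limit=20):
            gbest = min(particles, key=evaluate)
            gbest_fit = evaluate(gbest)
            synchronous, stall = False, 0
            for _ in range(max_iter):
                improved = False
                for i, p in enumerate(particles):
                    particles[i] = update_particle(p, gbest)
                    if not synchronous:
                        # Asynchronous phase: gbest is updated immediately.
                        f = evaluate(particles[i])
                        if f < gbest_fit:
                            gbest, gbest_fit, improved = particles[i], f, True
                if synchronous:
                    # Synchronous phase: gbest is updated once per iteration.
                    best = min(particles, key=evaluate)
                    f = evaluate(best)
                    if f < gbest_fit:
                        gbest, gbest_fit, improved = best, f, True
                stall = 0 if improved else stall + 1
                if stall >= stall_limit:
                    synchronous = True  # switch to the fine-tuning mode
            return gbest, gbest_fit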

    Multi self-adapting particle swarm optimization algorithm (MSAPSO).

    The performance and stability of the Particle Swarm Optimization algorithm depend on parameters that are typically tuned manually or adapted based on knowledge from empirical parameter studies. Such parameter selection is ineffectual when faced with a broad range of problem types, which often hinders the adoption of PSO for real-world problems. This dissertation develops a dynamic self-optimization approach for the respective parameters (inertia weight, social and cognitive coefficients). The effects of self-adaptation on the optimal balance between superior performance (convergence) and robustness (divergence) of the algorithm are investigated on both simple and complex benchmark functions. This work creates a swarm variant that is parameter-less, meaning that it is virtually independent of the underlying problem type. Since PSO variants can become stuck in local optima, the MSAPSO algorithm additionally embeds a highly flexible escape strategy (escape-lmin-strategy) that works independently of the problem dimension. With these two major algorithmic elements (the parameter-less approach and the dimension-less escape-lmin-strategy), MSAPSO outperforms other PSO variants as well as other swarm-inspired approaches such as the Memetic Firefly algorithm. The average performance increase in two dimensions is at least fifteen percent with regard to the compared swarm variants; in higher dimensions (≥ 250) the performance gain accumulates to about fifty percent on average. At the same time, the error-proneness of MSAPSO is on average similar to, or even significantly better than, that of the compared variants when converging to the respective global optima.
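    The dissertation's own adaptation rules are not reproduced here; purely as a generic illustration of parameter self-adaptation of the kind described, the snippet below shrinks the inertia weight while the swarm keeps improving and relaxes it, re-scattering a few particles, when the search stagnates, with all thresholds being assumed example values.

        # Generic illustration of self-adaptive inertia plus a simple escape move
        # on stagnation; not the MSAPSO rules.
        import random

        def adapt_and_escape(w, stall, particles, bounds, w_min=0.3, w_max=0.9,
                             stall_limit=15, n_restart=2):
            if stall == 0:
                w = max(w_min, w * 0.98)   # improving: favour exploitation
            else:
                w = min(w_max, w * 1.02)   # stagnating: favour exploration
            if stall >= stall_limit:
                # Escape move: re-scatter a few particles uniformly in the bounds.
                for _ in range(n_restart):
                    k = random.randrange(len(particles))
                    particles[k] = [random.uniform(lo, hi) for lo, hi in bounds]
                stall = 0
            return w, stall, particles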

    Artificial bee colony algorithm with time-varying strategy

    The artificial bee colony (ABC) algorithm is one of the newest additions to the class of swarm intelligence algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still suffers from a shortcoming: it is good at exploration but poor at exploitation. To strike a proper balance between these two conflicting factors, this paper proposes a novel ABC variant with a time-varying strategy in which the ratio between the number of employed bees and the number of onlooker bees varies with time. Linear and nonlinear time-varying strategies can be incorporated into the basic ABC algorithm, yielding the ABC-LTVS and ABC-NTVS algorithms, respectively. The effects of the added parameters in the two new ABC algorithms are also studied by solving some representative benchmark functions. The proposed approach is a simple and easy modification to the structure of the basic ABC algorithm; moreover, it is general and can be incorporated into other ABC variants. A set of 21 benchmark functions in 30 and 50 dimensions is used in the experimental studies. The experimental results show the effectiveness of the proposed time-varying strategy.
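    The exact schedules used in the paper are not reproduced here; as an illustrative sketch of a time-varying split of a fixed colony, the function below shrinks the employed-bee share over time (power = 1 giving a linear schedule, other powers a nonlinear one), with the start and end ratios being assumed example values.

        # Sketch only: time-varying employed/onlooker split for a fixed colony size.
        def employed_count(t, t_max, colony_size,
                           start_ratio=0.9, end_ratio=0.1, power=1.0):
            r = (t / t_max) ** power            # power = 1.0 -> linear schedule
            ratio = start_ratio + (end_ratio - start_ratio) * r
            n_employed = max(1, min(colony_size - 1, round(ratio * colony_size)))
            n_onlooker = colony_size - n_employed
            return n_employed, n_onlooker

    For example, with a colony of 40 bees this assumed linear schedule moves from 36 employed and 4 onlooker bees at the start of the run to 4 employed and 36 onlooker bees at the end.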

    Segment-based predominant learning swarm optimizer for large-scale optimization

    Large-scale optimization has become a significant yet challenging area in evolutionary computation. To tackle such problems, this paper proposes a novel segment-based predominant learning swarm optimizer (SPLSO) in which several predominant particles guide the learning of each particle. First, a segment-based learning strategy is proposed to randomly divide the whole set of dimensions into segments. During the update, variables in different segments are evolved by learning from different exemplars, while variables in the same segment are evolved by the same exemplar. Second, to accelerate the search and enhance search diversity, a predominant learning strategy is also proposed, which lets several predominant particles guide the update of a particle, with each predominant particle responsible for one segment of dimensions. By combining these two learning strategies, SPLSO evolves all dimensions simultaneously and possesses competitive exploration and exploitation abilities. Extensive experiments are conducted on two large-scale benchmark function sets to investigate the influence of each algorithmic component, and comparisons with several state-of-the-art meta-heuristic algorithms for large-scale problems demonstrate the competitive efficiency and effectiveness of the proposed optimizer. Furthermore, the scalability of the optimizer to problems with dimensionality up to 2000 is also verified.
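    A minimal sketch of the segment-based learning idea (illustrative, not the SPLSO implementation) is given below: the dimensions are shuffled and split into segments, and each segment of a particle learns from a different exemplar; how the predominant exemplars are chosen is defined in the paper, so here they are simply passed in, and the update coefficients are assumed example values.

        # Sketch only: random dimension segmentation with one exemplar per segment.
        import random

        def segment_dimensions(n_dims, n_segments):
            dims = list(range(n_dims))
            random.shuffle(dims)
            size = -(-n_dims // n_segments)          # ceiling division
            return [dims[i:i + size] for i in range(0, n_dims, size)]

        def segment_learning_update(x, v, exemplars, w=0.7, c=1.5):
            # exemplars: one (predominant) exemplar position per segment.
            segments = segment_dimensions(len(x), len(exemplars))
            for seg, exemplar in zip(segments, exemplars):
                for d in seg:
                    r = random.random()
                    v[d] = w * v[d] + c * r * (exemplar[d] - x[d])
                    x[d] = x[d] + v[d]
            return x, v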