1,765 research outputs found

    Process Knowledge-guided Autonomous Evolutionary Optimization for Constrained Multiobjective Problems

    Various real-world problems can be formulated as constrained multi-objective optimization problems. Although various solution methods exist, automatically selecting efficient solving strategies for constrained multi-objective optimization problems remains very challenging. Given this, a process knowledge-guided constrained multi-objective autonomous evolutionary optimization method is proposed. First, the effects of different solving strategies on population states are evaluated in the early evolutionary stage. Then, a mapping model between population states and solving strategies is established. Finally, the model recommends subsequent solving strategies based on the current population state. The method can be embedded into existing evolutionary algorithms and improves their performance to different degrees. The proposed method is applied to 41 benchmarks and 30 dispatch optimization problems of the integrated coal mine energy system. Experimental results verify the effectiveness and superiority of the proposed method in solving constrained multi-objective optimization problems. Funding: the National Key R&D Program of China, the National Natural Science Foundation of China, Shandong Provincial Natural Science Foundation, the Fundamental Research Funds for the Central Universities, and the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing. http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=4235 (Electrical, Electronic and Computer Engineering)
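    The recommendation step can be pictured as a small state-to-strategy mapping. The sketch below is only illustrative: the strategy pool, the population-state features, and the use of a k-nearest-neighbour classifier are assumptions for the sketch, not the authors' implementation.

```python
# Illustrative sketch of state-to-strategy recommendation (assumed names and features).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier  # any simple classifier would do

STRATEGIES = ["penalty", "epsilon_constraint", "feasibility_rule"]  # assumed strategy pool

def population_state(pop_objs, pop_cv):
    """Summarize a population by simple features: mean constraint violation,
    feasibility ratio, and a rough objective-space spread."""
    feasible = pop_cv <= 0
    return np.array([
        pop_cv.clip(min=0).mean(),    # mean constraint violation
        feasible.mean(),              # feasibility ratio
        pop_objs.std(axis=0).mean(),  # diversity measure
    ])

def fit_mapping_model(states, best_strategy_ids):
    """Early stage: fit the mapping from recorded states to the strategy that worked best."""
    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(np.asarray(states), np.asarray(best_strategy_ids))
    return model

def recommend(model, pop_objs, pop_cv):
    """Later stage: recommend a strategy from the current population state."""
    state = population_state(pop_objs, pop_cv).reshape(1, -1)
    return STRATEGIES[int(model.predict(state)[0])]
```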

    Adaptive Double Chain Quantum Genetic Algorithm for Constrained Optimization Problems

    Optimization problems are often highly constrained, and evolutionary algorithms (EAs) are effective methods for tackling this kind of problem. To further improve the search efficiency and convergence rate of EAs, this paper presents an adaptive double chain quantum genetic algorithm (ADCQGA) for solving constrained optimization problems. ADCQGA uses double individuals to represent solutions, which are classified as feasible and infeasible, and fitness (or evaluation) functions are defined for both types of solutions. Based on the fitness function, three types of step evolution (SE) are defined and used for judging evolutionary individuals, and an adaptive rotation is proposed to facilitate updating individuals in the different solution classes. To further improve the search capability and convergence rate, ADCQGA also employs an adaptive evolution process (AEP), adaptive mutation, and replacement techniques. ADCQGA was first tested on a widely used benchmark function to illustrate the relationship between initial parameter values and the convergence rate/search capability. The proposed ADCQGA was then successfully applied to twelve further benchmark functions and five well-known constrained engineering design problems. The multi-aircraft cooperative target allocation problem is a typical constrained optimization problem that requires efficient solution methods, and ADCQGA is finally applied successfully to this target allocation problem.
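    To make the double-chain idea concrete, the sketch below shows one common quantum-inspired encoding (cos/sin amplitude chains) and a simple adaptive rotation-angle schedule. The decoding formula, step-size bounds, and linear schedule are assumptions for illustration, not the paper's exact operators.

```python
# Illustrative double-chain quantum encoding and adaptive rotation (assumed formulation).
import numpy as np

def decode(theta, lower, upper):
    """Each angle vector theta yields two chained candidate solutions
    via its cosine and sine amplitudes."""
    x_cos = lower + (upper - lower) * (1 + np.cos(theta)) / 2
    x_sin = lower + (upper - lower) * (1 + np.sin(theta)) / 2
    return x_cos, x_sin

def adaptive_rotation(theta, theta_best, gen, max_gen,
                      delta_max=0.05 * np.pi, delta_min=0.001 * np.pi):
    """Rotate angles toward the best individual; the rotation step shrinks
    linearly as evolution proceeds (illustrative schedule)."""
    delta = delta_max - (delta_max - delta_min) * gen / max_gen
    return theta + delta * np.sign(theta_best - theta)
```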

    Differential evolution with an evolution path: a DEEP evolutionary algorithm

    Utilizing cumulative correlation information already present in an evolutionary process, this paper proposes a predictive approach to the reproduction of new individuals in differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst an evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature of the CM in a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Rather than mechanically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suites and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
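    The ingredient borrowed from CMA-ES, the cumulatively learned evolution path, can be sketched as follows. The smoothing constant, the DE/rand/1 base, and the way the path is blended into the difference vector are illustrative assumptions, not the DEEP variants defined in the paper.

```python
# Illustrative evolution-path update and path-guided DE mutation (assumed blending rule).
import numpy as np

def update_evolution_path(path, old_mean, new_mean, c=0.2):
    """Exponentially smoothed record of successive population-mean shifts,
    in the spirit of CMA-ES evolution-path learning."""
    return (1.0 - c) * path + np.sqrt(c * (2.0 - c)) * (new_mean - old_mean)

def deep_mutation(pop, i, path, F=0.5, w=0.5, rng=None):
    """DE/rand/1 difference vector blended with the learned evolution path."""
    rng = rng or np.random.default_rng()
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    diff = pop[r2] - pop[r3]
    return pop[r1] + F * ((1.0 - w) * diff + w * path)
```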

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of optimization problems. Also, ensuring the appropriate selection of algorithms and operators may be inefficient, since their designs are undertaken mainly through trial and error. This research proposes an improved optimization framework that exploits the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy. In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30 and 50 dimensions are solved. Further, the proposed algorithm has been tested on 100-dimensional optimization problems taken from the CEC2014 and CEC2017 benchmarks, as well as a set of real-world application problems. Several experiments are designed to analyze the effects of the different components of the proposed framework, with the best variant compared against a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm is able to outperform all the others considered.
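    The reinforcement-learning-based operator choice can be illustrated with a simple value-learning (bandit-style) selector. The operator pool, reward definition, and epsilon-greedy rule below are assumptions made for the sketch, not the framework's exact design.

```python
# Illustrative RL-style selection of DE mutation operators (assumed bandit/Q-value scheme).
import numpy as np

OPERATORS = ["rand/1", "current-to-pbest/1", "rand-to-best/2"]  # assumed operator pool

class OperatorSelector:
    def __init__(self, n_ops, alpha=0.1, epsilon=0.1, rng=None):
        self.q = np.zeros(n_ops)          # estimated value of each operator
        self.alpha, self.epsilon = alpha, epsilon
        self.rng = rng or np.random.default_rng()

    def select(self):
        """Epsilon-greedy choice over operator indices."""
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.q)))   # explore
        return int(np.argmax(self.q))                    # exploit

    def update(self, op, reward):
        """Reward could be, e.g., the relative fitness improvement of the offspring."""
        self.q[op] += self.alpha * (reward - self.q[op])
```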

    Ranking-Based Differential Evolution for Large-Scale Continuous Optimization

    Large-scale continuous optimization has gained considerable attention in recent years. Differential evolution (DE) is a simple yet efficient global numerical optimization algorithm that has been successfully used in diverse fields. Generally, the vectors in the DE mutation operators are chosen randomly from the population. In this paper, we employ ranking-based mutation operators in the DE algorithm to improve its performance. In the ranking-based mutation operators, the vectors are selected according to their rankings in the current population. The ranking-based mutation operators are general, and they are integrated into the original DE algorithm, GODE, and GaDE to verify the enhanced performance. Experiments have been conducted on large-scale continuous optimization problems. The results indicate that the ranking-based mutation operators are able to enhance the overall performance of DE, GODE, and GaDE on large-scale continuous optimization problems.
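    A minimal sketch of rank-biased vector selection in a DE/rand/1 mutation is given below; the specific selection-probability formula and which vectors are rank-biased are assumptions and may differ from the paper.

```python
# Illustrative ranking-based vector selection for DE mutation (assumed probability rule).
import numpy as np

def ranking_selection(fitness, rng=None):
    """Return one population index, with selection probability proportional to rank
    (the best individual gets the largest probability)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness)                 # ascending: best (lowest) fitness first
    n = len(fitness)
    ranks = np.empty(n)
    ranks[order] = np.arange(n, 0, -1)          # best -> n, worst -> 1
    probs = ranks / ranks.sum()
    return int(rng.choice(n, p=probs))

def rank_based_rand1(pop, fitness, F=0.5, rng=None):
    """DE/rand/1 where the base and one difference vector are rank-biased."""
    rng = rng or np.random.default_rng()
    r1 = ranking_selection(fitness, rng)        # rank-biased base vector
    r2 = ranking_selection(fitness, rng)        # rank-biased difference vector
    r3 = int(rng.integers(len(pop)))            # terminal vector chosen uniformly
    return pop[r1] + F * (pop[r2] - pop[r3])
```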