
    A Survey on Adaptation Strategies for Mutation and Crossover Rates of Differential Evolution Algorithm

    Differential Evolution (DE) is a well-known optimization algorithm under the umbrella of Evolutionary Algorithms (EAs) for solving non-linear and non-differentiable optimization problems. DE owes much of its popularity to its simplicity: it solves a given problem with only a few control parameters, namely the population size (NP), the mutation rate (F) and the crossover rate (Cr). To avoid the difficulty of setting suitable values for NP, F and Cr, many parameter adaptation strategies have been proposed in the literature. This paper presents the working principles of the parameter adaptation strategies for F and Cr. The strategies are categorized according to the logic used by their authors, and clear insights into each category are presented.
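    For reference, the three control parameters named above act in the classic DE/rand/1/bin scheme roughly as sketched below. This is a minimal illustration with fixed F and Cr (function and variable names are ours), not any particular adaptation strategy surveyed in the paper.

```python
import numpy as np

def de_rand_1_bin(fitness, bounds, NP=30, F=0.5, Cr=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: F scales the mutation difference vector,
    Cr is the per-dimension crossover probability. Both are kept fixed here;
    the surveyed strategies adapt them during the run."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(NP, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(generations):
        for i in range(NP):
            # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # binomial crossover controlled by Cr, with one guaranteed dimension
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < Cr
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # greedy selection: keep the trial vector if it is at least as good
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Usage: minimise the 10-dimensional sphere function
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    lo, hi = np.full(10, -5.0), np.full(10, 5.0)
    x_best, f_best = de_rand_1_bin(sphere, (lo, hi))
    print(x_best, f_best)
```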

    Large-Scale Evolutionary Optimization Using Multi-Layer Strategy Differential Evolution

    Differential evolution (DE) has been used extensively in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based meta-heuristic technique that evolves numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on its control parameters, and in practice fine-tuning these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms and propose the Multi-Layer Strategies Differential Evolution (MLSDE) algorithm, which finds optimal solutions for large-scale problems. To solve large-scale problems, different strategies are grouped together and applied to selected vectors to strengthen the exploration ability of the algorithm. Extensive computational analysis was carried out to evaluate the performance of the proposed algorithm on the set of well-known CEC 2015 benchmark functions, which was used for its assessment and performance evaluation.
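    The abstract describes the layering only at a high level. As a rough, hedged illustration of the underlying idea of drawing mutant vectors from a pool of DE strategies, the sketch below switches between a few common schemes; the pool and the way strategies would be assigned to layers or vectors are assumptions of ours, not MLSDE's actual rules.

```python
import numpy as np

def strategy_pool_mutant(pop, fit, i, strategy, F=0.5, rng=None):
    """Generate a mutant vector for individual i using one strategy from a small
    pool of common DE schemes. `pop` is an (NP, dim) array, `fit` the matching
    fitness values. The grouping logic of MLSDE is not reproduced here."""
    if rng is None:
        rng = np.random.default_rng()
    NP = len(pop)
    r1, r2, r3, r4, r5 = pop[rng.choice([j for j in range(NP) if j != i], 5, replace=False)]
    best = pop[int(np.argmin(fit))]
    if strategy == "DE/rand/1":
        return r1 + F * (r2 - r3)
    if strategy == "DE/best/1":
        return best + F * (r1 - r2)
    if strategy == "DE/rand/2":
        return r1 + F * (r2 - r3) + F * (r4 - r5)
    if strategy == "DE/current-to-best/1":
        return pop[i] + F * (best - pop[i]) + F * (r1 - r2)
    raise ValueError(f"unknown strategy: {strategy}")
```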

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in genetic algorithms (GAs) facilitates global effectiveness. This observation has recently led to the hybridization of PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that the construction of superior exemplars in PSO is more effective. Hence, this paper first develops a new framework to organically hybridize PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers: the first for exemplar generation and the second for particle updates as in a standard PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in this paper. In particular, genetic operators are used to generate exemplars from which particles learn and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of the GL-PSO.
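    A heavily simplified, hedged sketch of the two-layer idea follows: layer one breeds an exemplar for each particle from personal and global best positions via crossover, mutation and selection, and layer two performs a PSO-style update toward that exemplar. The constants and the exemplar-selection rule below are simplifications of ours, not the published GL-PSO pseudocode.

```python
import numpy as np

def learning_pso_sketch(fitness, lo, hi, n_particles=40, dim=10, iters=500,
                        w=0.7, c=1.5, pm=0.01, seed=0):
    """Two-layer 'learning PSO' sketch with scalar bounds lo, hi."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in pbest])
    g = int(np.argmin(pbest_f))
    exemplar = pbest.copy()
    for _ in range(iters):
        for i in range(n_particles):
            # layer 1a, crossover: blend pbest_i with gbest, or copy a random
            # particle's pbest when that pbest is better than particle i's
            k = rng.integers(n_particles)
            r = rng.random(dim)
            blended = r * pbest[i] + (1 - r) * pbest[g]
            cand = blended if pbest_f[i] < pbest_f[k] else pbest[k].copy()
            # layer 1b, mutation: occasionally reset a dimension at random
            mut = rng.random(dim) < pm
            cand[mut] = rng.uniform(lo, hi, mut.sum())
            # layer 1c, selection: keep the better of the old and new exemplar
            if fitness(cand) < fitness(exemplar[i]):
                exemplar[i] = cand
            # layer 2: PSO-style velocity update toward the exemplar
            v[i] = w * v[i] + c * rng.random(dim) * (exemplar[i] - x[i])
            x[i] = np.clip(x[i] + v[i], lo, hi)
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i].copy(), f
        g = int(np.argmin(pbest_f))
    return pbest[g], pbest_f[g]

# Usage: learning_pso_sketch(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)
```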

    Evolutionary framework with reinforcement learning-based mutation adaptation

    Although several multi-operator and multi-method approaches for solving optimization problems have been proposed, their performance is not consistent across a wide range of optimization problems. Also, ensuring the appropriate selection of algorithms and operators may be inefficient, since their designs are undertaken mainly through trial and error. This research proposes an improved optimization framework that uses the benefits of multiple algorithms, namely a multi-operator differential evolution algorithm and a covariance matrix adaptation evolution strategy. In the former, reinforcement learning is used to automatically choose the best differential evolution operator. To judge the performance of the proposed framework, three benchmark sets of bound-constrained optimization problems (73 problems) with 10, 30 and 50 dimensions are solved. Further, the proposed algorithm has been tested on optimization problems with 100 dimensions taken from the CEC2014 and CEC2017 benchmark problems. A real-world application data set has also been solved. Several experiments are designed to analyze the effects of different components of the proposed framework, with the best variant compared against a number of state-of-the-art algorithms. The experimental results show that the proposed algorithm is able to outperform all the others considered.
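    The reinforcement-learning component is described only at a high level in the abstract. As a generic, hedged illustration of learning which DE mutation operator to apply, the sketch below uses an epsilon-greedy bandit over an operator pool; the reward definition and the bandit itself are assumptions of ours, not the paper's actual design.

```python
import numpy as np

class OperatorBandit:
    """Epsilon-greedy selection among DE mutation operators, rewarded by how
    often an operator's trial vectors replace their parents. A generic
    stand-in for an RL-based operator-selection component."""

    def __init__(self, operators, epsilon=0.1, seed=0):
        self.operators = operators
        self.epsilon = epsilon
        self.value = np.zeros(len(operators))   # running mean reward per operator
        self.count = np.zeros(len(operators))
        self.rng = np.random.default_rng(seed)

    def select(self):
        # explore with probability epsilon, otherwise exploit the best estimate
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.operators)))
        return int(np.argmax(self.value))

    def update(self, k, reward):
        # incremental running-mean update of the value estimate for operator k
        self.count[k] += 1
        self.value[k] += (reward - self.value[k]) / self.count[k]

# Usage inside a DE generation (schematic):
#   k = bandit.select()
#   trial = bandit.operators[k](population, i)            # build trial vector
#   reward = 1.0 if f(trial) < f(population[i]) else 0.0  # success-based reward
#   bandit.update(k, reward)
```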

    Population-based algorithms for improved history matching and uncertainty quantification of Petroleum reservoirs

    In modern field management practices, there are two important steps that shed light on a multimillion-dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches, and such diverse models are likely to show different production behaviors in the future. This ties history matching to the second step, uncertainty quantification of predictions. Multiple history-matched models are essential for a realistic uncertainty estimate of future field behavior. These two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies. Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual, and these individuals are deployed with the aim of solving the problem with maximum efficiency. This thesis introduces the application of two novel population-based algorithms for history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the space of parameters to find multiple history-matched models and, using a Bayesian framework, the posterior probabilities of the models are evaluated for prediction of reservoir performance. It is demonstrated that by bringing in the latest developments in computer science, such as ant colony optimization, differential evolution and multi-objective optimization, we can improve the history matching and uncertainty quantification frameworks. This thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also includes a comparative study of these methods against a benchmark technique, the Neighbourhood Algorithm. This comparison reveals the superiority of the proposed methodologies in areas such as computational efficiency and match quality.

    Accelerating ant colony optimization by using local search

    This thesis report is submitted in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering, 2015. Cataloged from the PDF version of the thesis report. Includes bibliographical references (pages 42-45). Optimization is very important for decision making in mathematics, statistics, computer science and real-life problem solving. Many different optimization techniques have been developed for solving such problems, and computer science has introduced evolutionary optimization algorithms and their hybrids to tackle them. In recent years, test functions have been used to validate new optimization algorithms and to compare their performance with existing ones. Many single-objective optimization algorithms have been proposed, for example ACO, PSO and ABC. ACO is a popular optimization technique for solving hard combinatorial optimization problems. In this paper, we run ACO on five benchmark functions and modify its parameters so that it performs SBX crossover and polynomial mutation. The proposed algorithm, SBXACO, is tested on a wide range of benchmark functions, under both static and dynamic conditions, to evaluate its performance, and the results are compared with those of the existing DE algorithm and its hybrid DEahcSPX from the literature.
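    SBX (simulated binary) crossover and polynomial mutation are standard real-coded genetic operators from the GA/NSGA-II literature. A hedged sketch of their usual textbook form is given below, independent of how SBXACO wires them into ACO; the parameter values are illustrative.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=15, rng=None):
    """Simulated binary crossover applied per dimension to real-coded parents
    p1, p2 (numpy arrays of equal shape). Returns two children."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(p1.shape)
    # spread factor beta follows the standard SBX distribution with index eta
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lo, hi, eta=20, pm=0.1, rng=None):
    """Polynomial mutation: each gene is perturbed with probability pm by a
    polynomially distributed step scaled to the variable range [lo, hi]."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(x.shape)
    delta = np.where(u < 0.5,
                     (2.0 * u) ** (1.0 / (eta + 1)) - 1.0,
                     1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1)))
    mutate = rng.random(x.shape) < pm
    return np.clip(np.where(mutate, x + delta * (hi - lo), x), lo, hi)

# Usage on two 5-dimensional parents in [-5, 5]:
#   c1, c2 = sbx_crossover(np.zeros(5), np.ones(5))
#   child = polynomial_mutation(c1, -5.0, 5.0)
```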