
    Proposed shunt rounding technique for large-scale security constrained loss minimization

    The official published version can be obtained from the link below. Copyright © 2010 IEEE. Optimal reactive power flow applications often model large numbers of discrete shunt devices as continuous variables, which are rounded to their nearest discrete value at the final iteration. This rounding can degrade optimality. This paper presents novel methods, based on probabilistic and adaptive threshold approaches, that extend existing security constrained optimal reactive power flow methods to solve large-scale network problems involving discrete shunt devices effectively. Loss reduction solutions from the proposed techniques were compared with solutions from a mixed integer nonlinear programming (MINLP) algorithm on modified IEEE standard networks of up to 118 buses. The proposed techniques were also applied to practical large-scale network models of Great Britain. The results show that the proposed techniques achieve improved loss minimization solutions compared with the standard rounding method. This work was supported in part by the National Grid and in part by the EPSRC. Paper no. TPWRS-00653-2009.
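
    The abstract does not spell out the rounding rules themselves, so the following is only a minimal sketch of what probabilistic and threshold-based rounding of a continuous shunt setting could look like. The rule that rounds up with probability equal to the fractional position between two discrete steps, the fixed-threshold variant, the function names, and the 10 MVAr block size are all illustrative assumptions rather than the paper's actual methods.

```python
import random

def probabilistic_round(value, step):
    """Round a continuous shunt setting to a discrete step.

    Assumption (not from the paper): round up with probability equal to
    the fractional position of the continuous value between its two
    neighbouring discrete steps, so the expected rounded value equals
    the continuous relaxation.
    """
    lower = (value // step) * step
    frac = (value - lower) / step          # position between the two steps, in [0, 1)
    return lower + step if random.random() < frac else lower

def threshold_round(value, step, threshold=0.5):
    """Deterministic rounding with an adjustable threshold.

    threshold = 0.5 reproduces nearest-value rounding; sweeping the
    threshold and re-solving the power flow is one way an 'adaptive
    threshold' scheme could pick the setting with the lowest losses.
    """
    lower = (value // step) * step
    frac = (value - lower) / step
    return lower + step if frac >= threshold else lower

# Example: a shunt bank switched in 10 MVAr blocks, with a relaxed
# ORPF solution of 33.7 MVAr.
print(probabilistic_round(33.7, 10.0))   # 30.0 or 40.0, rounded up ~37% of the time
print(threshold_round(33.7, 10.0, 0.3))  # 40.0
```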

    Ranking-Based Differential Evolution for Large-Scale Continuous Optimization

    Large-scale continuous optimization has gained considerable attention in recent years. Differential evolution (DE) is a simple yet efficient global numerical optimization algorithm that has been used successfully in diverse fields. Generally, the vectors in the DE mutation operators are chosen randomly from the population. In this paper, we employ ranking-based mutation operators in DE to improve its performance: the vectors are selected according to their rankings in the current population. The ranking-based mutation operators are general, and they are integrated into the original DE algorithm, GODE, and GaDE to verify the performance enhancement. Experiments have been conducted on large-scale continuous optimization problems. The results indicate that the ranking-based mutation operators enhance the overall performance of DE, GODE, and GaDE on large-scale continuous optimization problems.
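
    As a rough, hedged illustration of the idea, the sketch below chooses the base vector and one difference vector of a DE/rand/1 mutation with rank-based acceptance probabilities instead of uniformly at random. The probability model p_k = (NP - k)/NP, the choice of which vectors are rank-selected, and the helper names are assumptions for illustration and may differ from the operators used in the paper.

```python
import numpy as np

def ranking_based_rand1(pop, fitness, i, F=0.5):
    """DE/rand/1 mutation where r1 and r2 are chosen by rank.

    Sketch only: individuals are sorted by fitness (minimization), and an
    individual ranked k-th best out of NP is accepted with probability
    (NP - k) / NP, so better individuals are picked more often.
    """
    NP, D = pop.shape
    order = np.argsort(fitness)                    # best first
    rank = np.empty(NP, dtype=int)
    rank[order] = np.arange(NP)                    # rank[j] = position of j (0 = best)
    prob = (NP - rank) / NP                        # better individuals -> higher probability

    def pick(exclude):
        # Rejection sampling: draw an index, accept it with its rank probability.
        while True:
            j = np.random.randint(NP)
            if j not in exclude and np.random.rand() < prob[j]:
                return j

    r1 = pick({i})
    r2 = pick({i, r1})
    r3 = np.random.randint(NP)
    while r3 in (i, r1, r2):
        r3 = np.random.randint(NP)
    return pop[r1] + F * (pop[r2] - pop[r3])

# Usage with a random population on a 10-D sphere function:
pop = np.random.uniform(-5, 5, (20, 10))
fit = np.sum(pop**2, axis=1)
mutant = ranking_based_rand1(pop, fit, i=0)
```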

    An Improved Differential Evolution Algorithm for Numerical Optimization Problems

    The differential evolution algorithm has gained popularity for solving complex optimization problems because of its simplicity and efficiency. However, it has several drawbacks, such as a slow convergence rate, high sensitivity to the values of its control parameters, and a tendency to get trapped in local optima. To overcome these drawbacks, this paper integrates three novel strategies into the original differential evolution. First, a population improvement strategy based on a multi-level sampling mechanism is used to accelerate convergence and increase population diversity. Second, a new self-adaptive mutation strategy balances the exploration and exploitation abilities of the algorithm by dynamically determining appropriate values of the mutation parameters; this improves the search ability and helps the algorithm escape from local optima when it gets stuck. Third, a new selection strategy guides the search away from local optima. Twelve benchmark functions with different characteristics are used to validate the performance of the proposed algorithm. The experimental results show that the proposed algorithm performs significantly better than the original DE in terms of the ability to locate the global optimum, convergence speed, and scalability. In addition, the proposed algorithm finds the global optimal solution on 8 of the 12 benchmark functions, whereas 7 other well-established metaheuristic algorithms, namely NBOLDE, ODE, DE, SaDE, JADE, PSO, and GA, do so on only 6, 2, 1, 1, 1, 1, and 1 of the functions, respectively. DOI: 10.28991/HIJ-2023-04-02-014
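
    The abstract does not publish the formulas behind the three strategies, so the code below is only a stand-in under stated assumptions: a plain DE/rand/1/bin loop whose mutation factor decays over the run to mimic a shift from exploration to exploitation, with standard greedy selection. None of the multi-level sampling, self-adaptive mutation, or modified selection mechanisms of the paper appear here.

```python
import numpy as np

def de_self_adaptive(obj, bounds, NP=30, gens=200):
    """DE/rand/1/bin with a generation-dependent mutation factor.

    Illustrative stand-in only: F decays from 0.9 (exploration) to 0.4
    (exploitation) over the run, CR is fixed, and selection is the
    standard greedy DE replacement rule.
    """
    D = len(bounds)
    lo, hi = np.array(bounds).T
    pop = lo + np.random.rand(NP, D) * (hi - lo)
    fit = np.array([obj(x) for x in pop])
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens                   # decaying mutation factor
        CR = 0.9
        for i in range(NP):
            r1, r2, r3 = np.random.choice([j for j in range(NP) if j != i], 3, replace=False)
            mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            cross = np.random.rand(D) < CR
            cross[np.random.randint(D)] = True     # keep at least one mutant component
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[np.argmin(fit)], fit.min()

best_x, best_f = de_self_adaptive(lambda x: np.sum(x**2), [(-5, 5)] * 10)
```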

    EvoX: A Distributed GPU-accelerated Library towards Scalable Evolutionary Computation

    Over the past decades, evolutionary computation (EC) has demonstrated promising potential in solving various complex optimization problems of relatively small scales. Nowadays, however, ongoing developments in modern science and engineering are posing increasingly serious scalability challenges to the conventional EC paradigm. As problem scales increase, on the one hand, the encoding spaces (i.e., the dimensions of the decision vectors) are intrinsically larger; on the other hand, EC algorithms often require growing numbers of function evaluations (and probably larger population sizes as well) to work properly. Meeting such emerging challenges requires not only delicate algorithm designs but, more importantly, a high-performance computing framework. Hence, we develop a distributed GPU-accelerated algorithm library -- EvoX. First, we propose a generalized workflow for implementing general EC algorithms. Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices. Third, we provide user-friendly interfaces to both researchers and practitioners for benchmark studies as well as extended real-world applications. To comprehensively assess the performance of EvoX, we conduct a series of experiments, including: (i) a scalability test via numerical optimization benchmarks with problem dimensions/population sizes up to millions; (ii) an acceleration test via a neuroevolution task with multiple GPU nodes; and (iii) an extensibility demonstration via the application to reinforcement learning tasks on the OpenAI Gym. The code of EvoX is available at https://github.com/EMI-Group/EvoX
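
    The snippet below is not EvoX's API. It is a generic ask/tell loop, sketched to show the separation of algorithm state, candidate generation, and fitness feedback that a distributed GPU framework can batch across devices; the RandomSearch class, the sphere objective, and all parameter values are placeholder assumptions.

```python
import numpy as np

class RandomSearch:
    """A stand-in 'algorithm' exposing an ask/tell workflow.

    This is NOT the EvoX interface; it only illustrates keeping algorithm
    state separate from candidate generation (ask) and fitness feedback
    (tell), which is what lets a framework batch evaluations on GPUs.
    """
    def __init__(self, dim, pop_size, lb=-5.0, ub=5.0):
        self.dim, self.pop_size, self.lb, self.ub = dim, pop_size, lb, ub
        self.best_x, self.best_f = None, np.inf

    def ask(self):
        # Generate a batch of candidates; a real library would do this on device.
        return np.random.uniform(self.lb, self.ub, (self.pop_size, self.dim))

    def tell(self, xs, fs):
        # Record the best candidate seen so far.
        i = int(np.argmin(fs))
        if fs[i] < self.best_f:
            self.best_x, self.best_f = xs[i], fs[i]

def sphere(xs):
    return np.sum(xs**2, axis=1)            # vectorized fitness over the whole population

algo = RandomSearch(dim=1000, pop_size=4096)  # dimension/population toward the large-scale regime
for step in range(100):
    xs = algo.ask()
    algo.tell(xs, sphere(xs))
print(algo.best_f)
```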

    Genetic learning particle swarm optimization

    Social learning in particle swarm optimization (PSO) helps collective efficiency, whereas individual reproduction in the genetic algorithm (GA) facilitates global effectiveness. This observation has recently led to hybridizing PSO with GA for performance enhancement. However, existing work uses a mechanistic parallel superposition, and research has shown that constructing superior exemplars in PSO is more effective. Hence, this paper first develops a new framework for organically hybridizing PSO with another optimization technique for “learning.” This leads to a generalized “learning PSO” paradigm, the *L-PSO. The paradigm is composed of two cascading layers, the first for exemplar generation and the second for particle updates as in a normal PSO algorithm. Using genetic evolution to breed promising exemplars for PSO, a specific novel *L-PSO algorithm, termed genetic learning PSO (GL-PSO), is proposed in the paper. In particular, genetic operators are used to generate exemplars from which particles learn, and, in turn, the historical search information of particles guides the evolution of the exemplars. By performing crossover, mutation, and selection on the historical information of particles, the constructed exemplars are not only well diversified but also of high quality. Under such guidance, both the global search ability and the search efficiency of PSO are enhanced. The proposed GL-PSO is tested on 42 benchmark functions widely adopted in the literature. Experimental results verify the effectiveness, efficiency, robustness, and scalability of GL-PSO.
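
    As a hedged sketch of the two-layer idea, the code below breeds an exemplar by uniform crossover of a particle's personal best with the global best (plus uniform mutation) and then drives a standard PSO velocity update toward that exemplar. GL-PSO's actual genetic operators, exemplar selection rule, and update constants are more elaborate; every name and parameter value here is illustrative.

```python
import numpy as np

def breed_exemplar(pbest_i, gbest, lb, ub, pm=0.05):
    """Breed a learning exemplar from a particle's personal best and the
    global best via uniform crossover plus uniform mutation.

    Hedged sketch only: it illustrates constructing exemplars genetically
    from particles' historical search information, not GL-PSO's exact rules.
    """
    mask = np.random.rand(pbest_i.size) < 0.5
    exemplar = np.where(mask, pbest_i, gbest)                    # uniform crossover
    mutate = np.random.rand(pbest_i.size) < pm
    exemplar[mutate] = np.random.uniform(lb, ub, mutate.sum())   # uniform mutation
    return exemplar

def update_particle(x, v, exemplar, w=0.7, c=1.5):
    """Standard PSO velocity/position update, learning from the bred
    exemplar instead of separate pbest/gbest terms."""
    r = np.random.rand(x.size)
    v_new = w * v + c * r * (exemplar - x)
    return x + v_new, v_new

# Example on a 5-D problem:
lb, ub = -10.0, 10.0
x = np.random.uniform(lb, ub, 5)
v = np.zeros(5)
pbest = x.copy()
gbest = np.random.uniform(lb, ub, 5)
x, v = update_particle(x, v, breed_exemplar(pbest, gbest, lb, ub))
```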