
    Adaptive dynamic disturbance strategy for differential evolution algorithm

    To overcome slow convergence, premature convergence to local optima, and parameter constraints when solving high-dimensional multi-modal optimization problems, an adaptive dynamic disturbance strategy for the differential evolution algorithm (ADDSDE) is proposed. Firstly, a chaos mapping strategy is used to initialize the population and increase population diversity; secondly, a new weighted mutation operator is designed to weight and combine the mutation strategies of standard differential evolution (DE). The scaling factor and crossover probability are adaptively adjusted to dynamically balance global search ability and local exploration ability. Finally, a Gaussian perturbation operator is introduced to generate random disturbance variations and help premature individuals escape local optima. The algorithm is run independently 20 times on five benchmark functions, and the results show that ADDSDE has better global search ability, faster convergence speed, and higher accuracy and stability than other optimization algorithms, which can assist in solving high-dimensional and complex problems in engineering and information science.
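    As a rough illustration of the components named above (adaptive scaling factor and crossover rate plus a Gaussian perturbation step), the following minimal DE sketch may help. It is not the authors' ADDSDE implementation: the adaptation schedule and perturbation scale are assumptions, and the chaos-map initialization and weighted mutation operator are omitted for brevity.

```python
# Minimal DE sketch with adaptive F/CR and a Gaussian perturbation step.
# Illustrative only; schedules and constants are assumptions, not ADDSDE.
import numpy as np

def de_sketch(f, lo, hi, pop_size=50, max_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))        # random init (chaos map omitted)
    fit = np.array([f(x) for x in pop])
    for gen in range(max_gen):
        # assumed linear schedules: shrink F, grow CR over the run
        F = 0.9 - 0.5 * gen / max_gen
        CR = 0.5 + 0.4 * gen / max_gen
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True            # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                            # greedy selection
                pop[i], fit[i] = trial, ft
        # Gaussian perturbation of the current best to help escape local optima
        best = np.argmin(fit)
        jittered = np.clip(pop[best] + rng.normal(0, 0.1 * (hi - lo)), lo, hi)
        fj = f(jittered)
        if fj < fit[best]:
            pop[best], fit[best] = jittered, fj
    return pop[np.argmin(fit)], float(fit.min())

# usage on a sphere function in 10 dimensions
best, val = de_sketch(lambda x: float(np.sum(x ** 2)), np.full(10, -5.0), np.full(10, 5.0))
```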

    Enhanced parallel Differential Evolution algorithm for problems in computational systems biology

    [Abstract] Many key problems in computational systems biology and bioinformatics can be formulated and solved using a global optimization framework. The complexity of the underlying mathematical models requires the use of efficient solvers in order to obtain satisfactory results in reasonable computation times. Metaheuristics are gaining recognition in this context, with Differential Evolution (DE) as one of the most popular methods. However, for most realistic applications, like those considering parameter estimation in dynamic models, DE still requires excessive computation times. Here we consider this latter class of problems and present several enhancements to DE based on the introduction of additional algorithmic steps and the exploitation of parallelism. In particular, we propose an asynchronous parallel implementation of DE which has been extended with improved heuristics to exploit the specific structure of parameter estimation problems in computational systems biology. The proposed method is evaluated with different types of benchmark problems: (i) black-box global optimization problems and (ii) calibration of non-linear dynamic models of biological systems, obtaining excellent results both in terms of quality of the solution and regarding speedup and scalability. Ministerio de Economía y Competitividad; DPI2011-28112-C04-03. Consejo Superior de Investigaciones Científicas; PIE-201170E018. Ministerio de Ciencia e Innovación; TIN2013-42148-P. Galicia. Consellería de Cultura, Educación e Ordenación Universitaria; GRC2013/05.
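    The asynchronous idea mentioned in the abstract can be sketched as follows: trial vectors are dispatched to a worker pool and the population is updated greedily as each evaluation returns, rather than synchronizing at generation boundaries. This is an illustrative sketch only, not the authors' enhanced DE or its systems-biology heuristics; a thread pool is used for brevity, whereas expensive model simulations would typically be farmed out to processes or nodes.

```python
# Asynchronous DE sketch: no generation barrier, results are consumed as they arrive.
# Illustrative assumptions: thread pool, random target selection, DE/rand/1 trials.
import numpy as np
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def async_de(f, lo, hi, pop_size=40, budget=4000, F=0.8, CR=0.9, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])

    def make_trial(i):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True
        return np.where(mask, np.clip(a + F * (b - c), lo, hi), pop[i])

    evals = 0
    with ThreadPoolExecutor(max_workers=workers) as ex:
        pending = {}
        for _ in range(workers):                       # saturate the pool
            i = int(rng.integers(pop_size))
            t = make_trial(i)
            pending[ex.submit(f, t)] = (i, t)
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                i, t = pending.pop(fut)
                ft = fut.result()
                evals += 1
                if ft < fit[i]:                        # greedy replacement as soon as the result arrives
                    pop[i], fit[i] = t, ft
                if evals + len(pending) < budget:      # keep submitting until the budget is spent
                    j = int(rng.integers(pop_size))
                    tj = make_trial(j)
                    pending[ex.submit(f, tj)] = (j, tj)
    return pop[np.argmin(fit)], float(fit.min())

best, val = async_de(lambda x: float(np.sum(x ** 2)), np.full(20, -5.0), np.full(20, 5.0))
```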

    EvoX: A Distributed GPU-accelerated Library towards Scalable Evolutionary Computation

    During the past decades, evolutionary computation (EC) has demonstrated promising potential in solving various complex optimization problems of relatively small scales. Nowadays, however, ongoing developments in modern science and engineering are bringing increasingly grave challenges to the conventional EC paradigm in terms of scalability. As problem scales increase, on the one hand, the encoding spaces (i.e., dimensions of the decision vectors) are intrinsically larger; on the other hand, EC algorithms often require growing numbers of function evaluations (and probably larger population sizes as well) to work properly. Meeting such emerging challenges requires not only careful algorithm design but, more importantly, a high-performance computing framework. Hence, we develop a distributed GPU-accelerated algorithm library -- EvoX. First, we propose a generalized workflow for implementing general EC algorithms. Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices. Third, we provide user-friendly interfaces to both researchers and practitioners for benchmark studies as well as extended real-world applications. To comprehensively assess the performance of EvoX, we conduct a series of experiments, including: (i) a scalability test via numerical optimization benchmarks with problem dimensions/population sizes up to millions; (ii) an acceleration test via a neuroevolution task with multiple GPU nodes; (iii) an extensibility demonstration via the application to reinforcement learning tasks on the OpenAI Gym. The code of EvoX is available at https://github.com/EMI-Group/EvoX.
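    Libraries of this kind typically build on a batched ask-evaluate-tell workflow, where a whole population is sampled and evaluated as one device-level operation. The sketch below illustrates that workflow in plain NumPy; the class and method names are assumptions for exposition and do not reflect the actual EvoX API.

```python
# Generic ask-evaluate-tell loop; a toy (mu, lambda)-style ES stands in for a real algorithm.
# On a GPU library, ask/evaluate/tell would each be a batched device operation.
import numpy as np

class SimpleES:
    def __init__(self, dim, pop_size=128, sigma=0.3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.mean = np.zeros(dim)
        self.sigma = sigma
        self.pop_size = pop_size

    def ask(self):
        # sample the whole population at once (one batched kernel on a GPU)
        return self.mean + self.sigma * self.rng.standard_normal((self.pop_size, self.mean.size))

    def tell(self, pop, fitness):
        # move the mean toward the best quarter of the population
        elite = pop[np.argsort(fitness)[: self.pop_size // 4]]
        self.mean = elite.mean(axis=0)

def sphere_batch(pop):
    return np.sum(pop ** 2, axis=1)        # batched evaluation of all candidates

es = SimpleES(dim=1000)
for step in range(100):
    population = es.ask()
    fitness = sphere_batch(population)
    es.tell(population, fitness)
```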

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method, originally proposed by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO itself, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
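    For reference, the canonical global-best PSO update surveyed here is v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x) followed by x <- x + v. A minimal sketch with commonly used (but assumed, not survey-specified) parameter values:

```python
# Canonical global-best PSO; a minimal reference sketch of the standard method.
import numpy as np

def pso(f, dim, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm, dim))              # positions
    v = np.zeros((swarm, dim))                         # velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                      # global best position
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                # position update
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]   # update personal bests
        g = pbest[np.argmin(pbest_f)]                             # update global best
    return g, float(pbest_f.min())

best, val = pso(lambda p: float(np.sum(p ** 2)), dim=10)
```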

    Opposition-based learning for self-adaptive control parameters in differential evolution for optimal mechanism design

    In recent decades, new optimization algorithms have attracted much attention from researchers working on both gradient-based and evolution-based optimal methods. Many strategies are employed to enhance the effectiveness of these methods. One of the newest is opposition-based learning (OBL), which has shown considerable power in enhancing various optimization methods. This research presents a new variant of the Differential Evolution (DE) algorithm in which the OBL technique is applied to investigate the opposite point of each candidate of the self-adaptive control parameters. The proposed method is used to solve benchmark optimization problems and applied to real-world optimization tasks, and is compared with conventional optimal methods. Simulation results show its effectiveness and improvement over several reference methodologies in terms of convergence speed and stability of the optimal results. © 2019 The Japan Society of Mechanical Engineers
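    The core of OBL is simple: for a variable x bounded by [a, b], its opposite is a + b - x, and the better of the pair is kept. The sketch below applies this to population initialization as an illustration; note that the paper applies OBL to the self-adaptive control parameters, so this is an assumed, simplified use of the idea.

```python
# Opposition-based learning applied to initialization: evaluate each candidate and its
# opposite (lo + hi - x), then keep the better half. Illustrative use of OBL only.
import numpy as np

def obl_init(f, lo, hi, pop_size, rng):
    pop = rng.uniform(lo, hi, (pop_size, lo.size))
    opp = lo + hi - pop                         # opposite population
    both = np.vstack([pop, opp])
    fit = np.array([f(x) for x in both])
    keep = np.argsort(fit)[:pop_size]           # keep the better half of {candidates, opposites}
    return both[keep], fit[keep]

rng = np.random.default_rng(0)
lo, hi = np.full(10, -5.0), np.full(10, 5.0)
pop, fit = obl_init(lambda x: float(np.sum(x ** 2)), lo, hi, 30, rng)
```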

    Speeding up Multiple Instance Learning Classification Rules on GPUs

    Multiple instance learning is a challenging task in supervised learning and data mining. However, algorithm performance becomes slow when learning from large-scale and high-dimensional data sets. Graphics processing units (GPUs) are being used to reduce the computing time of algorithms. This paper presents an implementation of the G3P-MI algorithm on GPUs for solving multiple instance problems using classification rules. The proposed GPU model is distributable to multiple GPUs, seeking scalability across large-scale and high-dimensional data sets. The proposal is compared to the multi-threaded CPU algorithm with SSE parallelism over a series of data sets. Experimental results report that the computation time can be significantly reduced and its scalability improved. Specifically, a speedup of up to 149× can be achieved over the multi-threaded CPU algorithm when using four GPUs, and the rules interpreter achieves great efficiency, running over 108 billion Genetic Programming operations per second.
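    The runtime bottleneck being parallelized is the rule interpreter: under the standard multiple-instance assumption, a bag is classified as positive if any of its instances is covered by a rule. A minimal sketch of that evaluation follows, with a hypothetical threshold rule standing in for an evolved G3P-MI rule and NumPy vectorization standing in for the GPU kernels.

```python
# Bag-level rule evaluation under the standard MI assumption: a bag is positive if ANY
# of its instances satisfies the rule. The rule below is hypothetical, for illustration.
import numpy as np

def covers(instances):
    # hypothetical rule: attribute 0 > 0.5 AND attribute 2 < 0.2
    return (instances[:, 0] > 0.5) & (instances[:, 2] < 0.2)

def classify_bags(bags):
    # one boolean per bag: True if any instance in the bag is covered by the rule
    return np.array([covers(b).any() for b in bags])

rng = np.random.default_rng(0)
bags = [rng.random((rng.integers(3, 10), 5)) for _ in range(1000)]   # variable-size bags
labels = classify_bags(bags)
```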

    A review of population-based metaheuristics for large-scale black-box global optimization: Part B

    This paper is the second part of a two-part survey series on large-scale global optimization. The first part covered two major algorithmic approaches to large-scale optimization, namely decomposition methods and hybridization methods such as memetic algorithms and local search. In this part we focus on sampling and variation operators, approximation and surrogate modeling, initialization methods, and parallelization. We also cover a range of problem areas related to large-scale global optimization, such as multi-objective optimization, constraint handling, overlapping components, the component imbalance issue, benchmarks, and applications. The paper also includes a discussion of the pitfalls and challenges of current research and identifies several potential areas for future research.