
    Experimental study on population-based incremental learning algorithms for dynamic optimization problems

    Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique of combining the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. This paper also formalizes a new dynamic problem generator that can create required dynamics from any binary-encoded stationary problem. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analysed the strengths and weaknesses of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems. This work was supported by UK EPSRC under Grant GR/S79718/01.
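
    The core mechanism of the Dual PBIL can be illustrated in a few lines. Below is a minimal sketch, assuming a standard PBIL learning-rate update and a simple "learn whichever vector produced the better sample" policy; the function name, parameters and selection policy are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dual_pbil(fitness, n, pop_size=50, lr=0.1, generations=100, seed=0):
    """Sketch of a dual PBIL: the main probability vector p and its dual
    1 - p (its mirror image about the central point 0.5) each sample half of
    the population; p is then learned towards whichever half produced the
    better sample, preserving the duality between the two vectors."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                          # start at the central point
    half = pop_size // 2
    for _ in range(generations):
        dual = 1.0 - p                           # dual probability vector
        pop_p = (rng.random((half, n)) < p).astype(int)
        pop_d = (rng.random((half, n)) < dual).astype(int)
        fit_p = np.array([fitness(x) for x in pop_p])
        fit_d = np.array([fitness(x) for x in pop_d])
        if fit_p.max() >= fit_d.max():           # learn p towards its best sample
            p = (1 - lr) * p + lr * pop_p[fit_p.argmax()]
        else:                                    # learn the dual towards its best sample
            p = (1 - lr) * p + lr * (1.0 - pop_d[fit_d.argmax()])
    return p

# Toy usage: OneMax on 20 bits
print(dual_pbil(lambda x: x.sum(), n=20))
```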

    Runtime Analysis for Self-adaptive Mutation Rates

    We propose and analyze a self-adaptive version of the (1,λ) evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time (number of fitness evaluations) of O(nλ/log λ + n log n) when λ is at least C ln n for some constant C > 0. For all values of λ ≥ C ln n, this performance is asymptotically best possible among all λ-parallel mutation-based unbiased black-box algorithms. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate proposed by Doerr, Gießen, Witt, and Yang (GECCO 2017) can be replaced by our simple endogenous scheme. On the technical side, the paper contributes new tools for the analysis of two-dimensional drift processes arising in the analysis of dynamic parameter choices in EAs, including bounds on occupation probabilities in processes with non-constant drift.
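
    The self-adaptive mechanism analysed here is simple to state: the mutation rate travels with the individual and is itself mutated before the bits are. The sketch below illustrates this on OneMax, assuming a halve-or-double local rate mutation and comma selection; the concrete constants, clamping bounds and budget are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def self_adaptive_one_lambda(n=100, lam=20, max_evals=50_000, seed=0):
    """Sketch of a self-adaptive (1,lambda) EA on OneMax. The mutation rate is
    encoded in the individual: each offspring first halves or doubles the
    parent's rate (a simple local mutation of the rate), then flips every bit
    independently with the new rate; the fittest offspring, together with its
    rate, becomes the next parent (comma selection)."""
    rng = np.random.default_rng(seed)
    parent = rng.integers(0, 2, n)
    rate = 2.0 / n
    evals = 0
    while parent.sum() < n and evals < max_evals:
        offspring, rates, fits = [], [], []
        for _ in range(lam):
            r = min(max(rate * rng.choice([0.5, 2.0]), 1.0 / n), 0.5)
            child = np.where(rng.random(n) < r, 1 - parent, parent)
            offspring.append(child); rates.append(r); fits.append(child.sum())
            evals += 1
        best = int(np.argmax(fits))
        parent, rate = offspring[best], rates[best]
    return parent, evals

bits, used = self_adaptive_one_lambda()
print(bits.sum(), used)   # fitness reached and fitness evaluations spent
```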

    Automatically Modeling Hybrid Evolutionary Algorithms from Past Executions

    Selection of the most appropriate Evolutionary Algorithm for a given optimization problem is a difficult task. Hybrid Evolutionary Algorithms are a promising alternative to deal with this problem. By means of the combination of different heuristic optimization approaches, it is possible to profit from the benefits of the best approach while avoiding the limitations of the others. Nowadays, there is active research into the design of dynamic or adaptive hybrid algorithms. However, little research has been done on the automatic learning of the best hybridization strategy. This paper proposes a mechanism to learn a strategy based on the analysis of the results from past executions. The proposed algorithm has been evaluated on a well-known benchmark for continuous optimization. The obtained results suggest that the proposed approach is able to learn very promising hybridization strategies.
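
    The idea of learning a hybridization strategy from past executions can be sketched very simply. The snippet below is a toy illustration under assumed inputs: a hypothetical log of (phase, algorithm, improvement) records from earlier runs, from which the learned strategy picks, for each phase of the search, the algorithm with the best average past improvement. It is not the paper's actual learning mechanism.

```python
from collections import defaultdict

def learn_hybridization(past_runs):
    """Toy strategy learner: past_runs is a list of (phase, algorithm,
    improvement) records from old executions (hypothetical log format);
    the returned strategy maps each phase to the algorithm with the best
    average past improvement in that phase."""
    scores = defaultdict(list)
    for phase, algo, improvement in past_runs:
        scores[(phase, algo)].append(improvement)
    strategy = {}
    for phase in sorted({p for p, _ in scores}):
        candidates = {a: sum(v) / len(v)
                      for (p, a), v in scores.items() if p == phase}
        strategy[phase] = max(candidates, key=candidates.get)
    return strategy

# Hypothetical log: DE explores well early, CMA-ES refines better later
log = [(0, "DE", 0.8), (0, "CMA-ES", 0.5), (1, "DE", 0.1), (1, "CMA-ES", 0.4)]
print(learn_hybridization(log))   # {0: 'DE', 1: 'CMA-ES'}
```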

    Toward Intelligent Biped-Humanoids Gaits Generation

    In this chapter we highlight our experimental studies on natural human walking analysis and introduce a biologically inspired design for a simple bipedal locomotion system for humanoid robots. Inspiration comes directly from human walking analysis and from the mechanism and control of human muscles. A hybrid algorithm for walking gait generation is then proposed as an innovative alternative to the classically used solving of kinematic and dynamic equations; the gaits include knee, ankle and hip trajectories. The proposed algorithm is an intelligent evolutionary approach based on the particle swarm optimization paradigm. This proposal can be used for small-size humanoid robots with a knee, an ankle and a hip, and at least six Degrees of Freedom (DOF).
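
    The gait generator is described as an evolutionary approach based on particle swarm optimization (PSO). As a rough sketch of that ingredient, the generic PSO below optimizes a parameter vector that could, for example, encode joint-trajectory coefficients for hip, knee and ankle; the encoding and the toy objective are assumptions for illustration, not the chapter's model.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm optimizer: each particle is a candidate
    parameter vector; velocities are pulled towards the particle's own best
    position and the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy objective standing in for a gait-quality measure (e.g. tracking error)
best, val = pso(lambda p: np.sum((p - 0.3) ** 2), dim=6)
print(best, val)
```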

    An analysis of the XOR dynamic problem generator based on the dynamical system

    This is the post-print version of the article. Copyright @ 2010 Springer-Verlag. In this paper, we use the exact model (or dynamical system approach) to describe the standard evolutionary algorithm (EA) as a discrete dynamical system for dynamic optimization problems (DOPs). Based on this dynamical system model, we analyse the properties of the XOR DOP Generator, which has been widely used by researchers to create DOPs from any binary-encoded problem. DOPs generated by this generator are described as DOPs with permutation, where the fitness vector is changed according to a permutation matrix. Some properties of DOPs with permutation are analysed, which allows explaining some behaviours observed in experimental results. The analysis of the properties of problems created by the XOR DOP Generator is important to understand the results obtained in experiments with this generator and to analyse the similarity of such problems to real-world DOPs. This work was supported by Brazil FAPESP under Grant 04/04289-6 and by UK EPSRC under Grant EP/E060722/2.
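
    The XOR DOP Generator referred to here evaluates a candidate against the stationary function after XORing it with a time-varying binary mask, which corresponds to the permutation of the fitness vector described above. A minimal sketch follows; the change period and severity parameter rho are illustrative choices.

```python
import numpy as np

def make_xor_dop(f, n, rho=0.1, change_every=100, seed=0):
    """Sketch of the XOR DOP generator: the dynamic fitness of x is the
    stationary fitness of x XOR m, where the mask m is itself XORed with a
    random template containing rho*n ones every `change_every` evaluations,
    shifting the optimum in genotype space."""
    rng = np.random.default_rng(seed)
    state = {"mask": np.zeros(n, dtype=int), "evals": 0}

    def dynamic_f(x):
        if state["evals"] and state["evals"] % change_every == 0:
            template = np.zeros(n, dtype=int)
            template[rng.choice(n, int(rho * n), replace=False)] = 1
            state["mask"] ^= template           # environment change
        state["evals"] += 1
        return f(np.asarray(x) ^ state["mask"])

    return dynamic_f

# Example: a dynamic OneMax on 20 bits, changing every 50 evaluations
g = make_xor_dop(lambda x: x.sum(), n=20, rho=0.2, change_every=50)
print(g(np.ones(20, dtype=int)))
```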

    Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case

    One of the most challenging problems in evolutionary computation is to select, from its family of diverse solvers, one that performs well on a given problem. This algorithm selection problem is complicated by the fact that different phases of the optimization process require different search behavior. While this can partly be controlled by the algorithm itself, large performance differences between algorithms remain. It can therefore be beneficial to swap the configuration or even the entire algorithm during the run. Although long deemed impractical, recent advances in Machine Learning and in exploratory landscape analysis give hope that this dynamic algorithm configuration (dynAC) can eventually be solved by automatically trained configuration schedules. With this work we aim at promoting research on dynAC by introducing a simpler variant that focuses only on switching between different algorithms, not configurations. Using the rich data from the Black Box Optimization Benchmark (BBOB) platform, we show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains. We also discuss key challenges in dynAS, and argue that the BBOB framework can become a useful tool in overcoming these.
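
    Single-switch dynamic algorithm selection can be illustrated with a toy pair of solvers: spend the first part of the budget with one algorithm, then warm-start a second one from its best-so-far point. The solvers, budget split and search domain below are assumptions for illustration, not the paper's BBOB setup.

```python
import numpy as np

def random_search(f, budget, dim, x0=None, rng=None):
    """Toy global explorer: uniform random sampling in [-5, 5]^dim."""
    rng = rng or np.random.default_rng(0)
    best_x = x0 if x0 is not None else rng.uniform(-5, 5, dim)
    best_f = f(best_x)
    for _ in range(budget):
        x = rng.uniform(-5, 5, dim)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def local_search(f, budget, dim, x0=None, rng=None, step=0.1):
    """Toy local refiner: Gaussian perturbations around the incumbent."""
    rng = rng or np.random.default_rng(1)
    best_x = x0 if x0 is not None else rng.uniform(-5, 5, dim)
    best_f = f(best_x)
    for _ in range(budget):
        x = best_x + rng.normal(0, step, dim)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def single_switch(f, dim, total_budget=2000, switch_at=0.5):
    """Single-switch dynAS sketch: the explorer spends the first part of the
    budget, then the refiner is warm-started from its best point."""
    b1 = int(total_budget * switch_at)
    x, _ = random_search(f, b1, dim)
    return local_search(f, total_budget - b1, dim, x0=x)

print(single_switch(lambda x: np.sum(x ** 2), dim=5))
```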