
    Explicit Building-Block Multiobjective Genetic Algorithms: Theory, Analysis, and Development

    This dissertation research emphasizes the performance of explicit Building Block (BB) based multiobjective evolutionary algorithms (MOEAs) and detailed symbolic representation. An explicit BB-based MOEA for solving constrained and real-world multiobjective optimization problems (MOPs) is developed: the Multiobjective Messy Genetic Algorithm II (MOMGA-II), which is designed to validate symbolic BB concepts. The MOMGA-II demonstrates that explicit BB-based MOEAs provide insight into solving difficult MOPs that is generally not realized through implicit BB-based MOEA approaches; this insight is necessary to increase the effectiveness of all MOEA approaches. To increase MOEA computational efficiency, parallelization of MOEAs is addressed. Communication between processors in a parallel MOEA implementation is extremely important, hence innovative migration and replacement schemes for use in parallel MOEAs are detailed and tested. These parallel concepts support the development of the first explicit BB-based parallel MOEA, the pMOMGA-II. MOEA theory is also advanced through the derivation of the first MOEA population sizing theory, which derives the MOEA population size necessary to achieve good results within a specified level of confidence. Just as in the single-objective case, the MOEA population sizing theory yields a very conservative sizing estimate. Validated results illustrate insight into building-block phenomena, good efficiency, excellent effectiveness, and motivation for future research in the area of explicit BB-based MOEAs. Thus the generic results of this research effort have applicability that aids in solving many different MOPs.
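
    As a point of reference for the kind of conservative estimate such sizing theory produces, below is a minimal sketch of the classic single-objective gambler's-ruin sizing formula that building-block sizing arguments typically start from; the exact multiobjective derivation is in the dissertation, and all parameter values in the example are illustrative assumptions, not values taken from it.

        # Minimal sketch: single-objective gambler's-ruin style population sizing,
        # shown only to illustrate the conservative-estimate idea that the
        # multiobjective theory generalizes. All numbers below are assumptions.
        import math

        def population_size(k, m, d, sigma_bb, alpha):
            """Conservative population size for a given failure probability alpha.

            k        -- building-block (BB) order (bits per BB)
            m        -- number of building blocks in the problem
            d        -- signal: fitness difference between best and second-best BB
            sigma_bb -- standard deviation of BB fitness (collateral noise)
            alpha    -- acceptable probability of deciding a BB incorrectly
            """
            m_prime = m - 1  # competing building blocks contributing noise
            return -(2 ** (k - 1)) * math.log(alpha) * sigma_bb * math.sqrt(math.pi * m_prime) / d

        # Example with hypothetical numbers: 4-bit BBs, 20 BBs, unit signal-to-noise, 1% failure risk
        print(round(population_size(k=4, m=20, d=1.0, sigma_bb=1.0, alpha=0.01)))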

    A self-learning particle swarm optimizer for global optimization problems

    Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm use the same strategy. This monotonic learning pattern may cause a lack of intelligence for a particular particle, making it unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO has superior performance in comparison with several peer algorithms. This work was supported by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/E060722/1 and EP/E060722/2.
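
    The central mechanism is per-particle adaptive selection among several velocity-update strategies. The sketch below illustrates that idea in a minimal form; the four strategies and the reward-based probability update are simplified illustrations, not the paper's exact operators or parameter settings.

        # Minimal sketch of per-particle adaptive strategy selection in a PSO.
        # The strategies and the probability update are illustrative
        # simplifications of the SLPSO idea, not the published operators.
        import numpy as np

        def sphere(x):                      # toy objective (minimization)
            return float(np.sum(x ** 2))

        DIM, SWARM, ITERS, N_STRAT = 10, 20, 200, 4

        rng = np.random.default_rng(0)
        pos = rng.uniform(-5, 5, (SWARM, DIM))
        vel = np.zeros((SWARM, DIM))
        pbest = pos.copy()
        pbest_f = np.array([sphere(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        probs = np.full((SWARM, N_STRAT), 1.0 / N_STRAT)   # per-particle strategy probabilities

        def step(i, s):
            """Apply strategy s to particle i; strategies differ in which attractor they use."""
            w, c = 0.72, 1.5
            r1, r2 = rng.random(DIM), rng.random(DIM)
            if s == 0:      # exploitation: pull toward personal and global best
                vel[i] = w * vel[i] + c * r1 * (pbest[i] - pos[i]) + c * r2 * (gbest - pos[i])
            elif s == 1:    # cognition only: pull toward own best
                vel[i] = w * vel[i] + c * r1 * (pbest[i] - pos[i])
            elif s == 2:    # social learning from a random peer's best
                peer = pbest[rng.integers(SWARM)]
                vel[i] = w * vel[i] + c * r1 * (peer - pos[i])
            else:           # exploration: random perturbation of the velocity
                vel[i] = w * vel[i] + rng.normal(0, 0.5, DIM)
            pos[i] = pos[i] + vel[i]

        for _ in range(ITERS):
            for i in range(SWARM):
                s = rng.choice(N_STRAT, p=probs[i])
                step(i, s)
                f = sphere(pos[i])
                if f < pbest_f[i]:                       # reward the strategy that improved pbest
                    pbest[i], pbest_f[i] = pos[i].copy(), f
                    probs[i][s] += 0.1
                    probs[i] /= probs[i].sum()
            gbest = pbest[pbest_f.argmin()].copy()

        print("best fitness:", pbest_f.min())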

    Competent Program Evolution, Doctoral Dissertation, December 2006

    Heuristic optimization methods are adaptive when they sample problem solutions based on knowledge of the search space gathered from past sampling. Recently, competent evolutionary optimization methods have been developed that adapt via probabilistic modeling of the search space. However, their effectiveness requires the existence of a compact problem decomposition in terms of prespecified solution parameters. How can we use these techniques to effectively and reliably solve program learning problems, given that program spaces will rarely have compact decompositions? One method is to manually build a problem-specific representation that is more tractable than the general space. But can this process be automated? My thesis is that the properties of programs and program spaces can be leveraged as inductive bias to reduce the burden of manual representation-building, leading to competent program evolution. The central contributions of this dissertation are a synthesis of the requirements for competent program evolution, and the design of a procedure, meta-optimizing semantic evolutionary search (MOSES), that meets these requirements. In support of my thesis, experimental results are provided to analyze and verify the effectiveness of MOSES, demonstrating scalability and real-world applicability
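
    As a toy illustration of the representation-building idea described above (decorating an exemplar program with "knobs" and then searching over knob settings), a minimal sketch follows; the program family (polynomials), the knobs, and the inner hill-climbing optimizer are hypothetical stand-ins chosen for brevity, not the MOSES procedure itself.

        # Toy sketch of representation-building for program evolution: an exemplar
        # "program" (here a polynomial) is decorated with knobs (its coefficients
        # plus one extra term), and a simple inner optimizer searches knob space.
        # This is a hypothetical stand-in illustrating the idea, not MOSES itself.
        import random

        random.seed(1)
        XS = [i / 10 for i in range(-20, 21)]

        def target(x):
            return 2 * x * x - 3 * x + 1          # unknown program to recover

        def error(coeffs):
            """Squared error of the candidate polynomial against the target."""
            return sum((sum(c * x ** i for i, c in enumerate(coeffs)) - target(x)) ** 2 for x in XS)

        def decorate(exemplar):
            """Build a knob-based representation: existing coefficients are knobs,
            plus one extra higher-order term initialized to zero."""
            return list(exemplar) + [0.0]

        def optimize_knobs(knobs, steps=2000):
            """Inner search over knob settings (simple stochastic hill climbing)."""
            best, best_err = list(knobs), error(knobs)
            for _ in range(steps):
                cand = [k + random.gauss(0, 0.1) for k in best]
                e = error(cand)
                if e < best_err:
                    best, best_err = cand, e
            return best, best_err

        # Outer loop: take the best program so far as the exemplar, rebuild the
        # representation around it, and run the inner optimization again.
        exemplar, err = [0.0], error([0.0])
        for _ in range(3):
            exemplar, err = optimize_knobs(decorate(exemplar))
        print("recovered coefficients:", [round(c, 2) for c in exemplar], "error:", round(err, 3))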

    Evolutionary Reinforcement Learning: A Survey

    Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments. The integration of RL with deep learning has recently resulted in impressive achievements in a wide range of challenging tasks, including board games, arcade games, and robot control. Despite these successes, there remain several crucial challenges, including brittle convergence properties caused by sensitive hyperparameters, difficulties in temporal credit assignment with long time horizons and sparse rewards, a lack of diverse exploration, especially in continuous search space scenarios, difficulties in credit assignment in multi-agent reinforcement learning, and conflicting objectives for rewards. Evolutionary computation (EC), which maintains a population of learning agents, has demonstrated promising performance in addressing these limitations. This article presents a comprehensive survey of state-of-the-art methods for integrating EC into RL, referred to as evolutionary reinforcement learning (EvoRL). We categorize EvoRL methods according to key research fields in RL, including hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL. We then discuss future research directions in terms of efficient methods, benchmarks, and scalable platforms. This survey serves as a resource for researchers and practitioners interested in the field of EvoRL, highlighting the important challenges and opportunities for future research. With the help of this survey, researchers and practitioners can develop more efficient methods and tailored benchmarks for EvoRL, further advancing this promising cross-disciplinary research field
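
    To make the population-based policy-search category concrete, here is a minimal evolution-strategy sketch over the weights of a linear policy; the toy environment, reward, and hyperparameters are illustrative assumptions, not examples taken from the survey.

        # Minimal sketch of evolutionary policy search: a simple evolution strategy
        # over the weights of a linear policy on a toy 1-D tracking task.
        # The environment and all hyperparameters are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)

        def rollout(w, steps=50):
            """Episodic return of the linear policy a = w @ s on the toy task."""
            s, total = np.array([1.0, 0.0]), 0.0          # state: [position, velocity]
            for _ in range(steps):
                a = float(np.clip(w @ s, -1.0, 1.0))      # clipped linear action
                s = np.array([s[0] + 0.1 * s[1], s[1] + 0.1 * a])
                total += -abs(s[0])                       # reward: stay near the origin
            return total

        # Elitist evolution strategy on the policy weights
        theta, sigma, pop, elite = np.zeros(2), 0.5, 40, 10
        for _ in range(100):
            noise = rng.normal(0.0, 1.0, (pop, theta.size))
            candidates = theta + sigma * noise
            returns = np.array([rollout(w) for w in candidates])
            best = candidates[np.argsort(returns)[-elite:]]   # keep the top performers
            theta = best.mean(axis=0)                         # recombine by averaging
        print("learned weights:", np.round(theta, 2), "return:", round(rollout(theta), 2))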

    Novelty-Assisted Interactive Evolution of Control Behaviors

    The field of evolutionary computation is inspired by the achievements of natural evolution, in which there is no final objective. Yet the pursuit of objectives is ubiquitous in simulated evolution, because evolutionary algorithms that can consistently achieve established benchmarks are lauded as successful, thus reinforcing this paradigm. A significant problem is that such objective approaches assume that intermediate stepping stones will increasingly resemble the final objective, when in fact they often do not. The consequence is that while solutions may exist, searching for such objectives may not discover them. This problem with objectives is demonstrated through an experiment in this dissertation showing that images discovered serendipitously during interactive evolution in an online system called Picbreeder cannot be rediscovered when they become the final objective of the very same algorithm that originally evolved them. This negative result demonstrates that pursuing an objective limits evolution by selecting offspring only based on the final objective. Furthermore, even when high fitness is achieved, the experimental results suggest that the resulting solutions are typically brittle, piecewise representations that only perform well by exploiting idiosyncratic features of the target. In response to this problem, the dissertation next highlights the importance of leveraging human insight during search as an alternative to articulating explicit objectives. In particular, a new approach called novelty-assisted interactive evolutionary computation (NA-IEC) combines human intuition with a method called novelty search for the first time to facilitate the serendipitous discovery of agent behaviors. In this approach, the human user directs evolution by selecting what is interesting from the on-screen population of behaviors. However, unlike in typical IEC, the user can then request that the next generation be filled with novel descendants rather than only direct descendants. The result of such an approach, unconstrained by a priori objectives, is that it traverses key stepping stones that ultimately accumulate meaningful domain knowledge. To establish this new evolutionary approach based on the serendipitous discovery of key stepping stones during evolution, this dissertation makes four key contributions: (1) it establishes the deleterious effects of a priori objectives on evolution; (2) it introduces the NA-IEC approach as an alternative to traditional objective-based approaches; (3) it provides a proof of concept demonstrating that combining human insight with novelty search finds solutions significantly faster and at lower genomic complexities than fully automated processes, including pure novelty search, suggesting an important role for human users in the search for solutions; and (4) it applies the NA-IEC approach in a challenge domain wherein leveraging human intuition and domain knowledge accelerates the evolution of solutions for the nontrivial octopus-arm control task. The culmination of these contributions demonstrates the importance of incorporating human insights into simulated evolution as a means of discovering better solutions more rapidly than traditional approaches.
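
    The novelty-search component rewards behavioral novelty rather than progress toward an objective; a minimal sketch of the standard novelty metric (mean distance to the k nearest neighbors in behavior space, with an archive) is shown below. The one-dimensional behavior descriptors and the user-selection stub are illustrative assumptions standing in for the interactive step.

        # Minimal sketch of the novelty metric used by novelty search:
        # novelty(b) = mean distance to the k nearest neighbors among the current
        # population's behaviors plus an archive of past behaviors. The behavior
        # descriptors and the interactive-selection stub are illustrative assumptions.
        import random

        random.seed(0)

        def novelty(behavior, population, archive, k=5):
            """Mean distance to the k nearest neighbors in behavior space."""
            others = [b for b in population if b is not behavior] + archive
            dists = sorted(abs(behavior - b) for b in others)   # 1-D behaviors for simplicity
            return sum(dists[:k]) / max(1, len(dists[:k]))

        def user_selects(population):
            """Stand-in for the interactive step: the human picks 'interesting' behaviors.
            Here we simply sample a few at random."""
            return random.sample(population, 3)

        population = [random.uniform(0.0, 1.0) for _ in range(20)]   # genomes == behaviors in this toy
        archive = []

        for generation in range(30):
            scored = sorted(population, key=lambda b: novelty(b, population, archive), reverse=True)
            archive.extend(scored[:2])                                # archive the most novel behaviors
            parents = user_selects(scored[:10])                       # NA-IEC-style human filtering
            population = [p + random.gauss(0.0, 0.05) for p in parents for _ in range(7)][:20]

        print("archive size:", len(archive), "behavior span:", round(min(population), 2), "to", round(max(population), 2))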

    Evolutionary Computation

    This book presents several recent advances in evolutionary computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which draws on philosophy, specifically the philosophy of praxis and dialectics. Interesting applications in bioinformatics are presented as well, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. The book therefore features representative work in the field of evolutionary computation and applied sciences. The intended audience includes graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.

    Multi-objective Estimation of Distribution Algorithm Based on Joint Modeling of Objectives and Variables

    This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, in addition to the dependencies between variables that are learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better than the compared algorithms on many of the problems and for different objective-space dimensions, and achieves comparable results on some of the others.
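
    A minimal sketch of the joint-modeling idea follows, using a multivariate Gaussian over the concatenated (variables, objectives) vectors of the selected solutions as a simplified stand-in for the paper's multi-dimensional Bayesian network; the bi-objective test problem and the plain non-dominated selection are illustrative assumptions.

        # Minimal sketch of joint modeling of variables and objectives in a
        # multi-objective EDA. A multivariate Gaussian over [variables | objectives]
        # stands in for the multi-dimensional Bayesian network of the paper; the
        # test problem and the simple selection rule are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        N_VAR, POP, GENS, SELECT = 5, 100, 40, 30

        def objectives(x):
            """Simple bi-objective problem (both objectives to be minimized)."""
            return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

        def nondominated(F):
            """Indices of solutions not dominated by any other (minimization)."""
            keep = []
            for i, fi in enumerate(F):
                if not any(np.all(fj <= fi) and np.any(fj < fi) for j, fj in enumerate(F) if j != i):
                    keep.append(i)
            return np.array(keep)

        X = rng.uniform(-2.0, 2.0, (POP, N_VAR))
        for _ in range(GENS):
            F = np.array([objectives(x) for x in X])
            idx = nondominated(F)
            if len(idx) < SELECT:                          # pad with the best on the first objective
                chosen = set(idx.tolist())
                extra = [i for i in np.argsort(F[:, 0]) if i not in chosen]
                idx = np.concatenate([idx, extra])[:SELECT].astype(int)
            else:
                idx = idx[:SELECT]
            joint = np.hstack([X[idx], F[idx]])            # joint sample of variables and objectives
            mean = joint.mean(axis=0)
            cov = np.cov(joint, rowvar=False) + 1e-6 * np.eye(joint.shape[1])
            X = rng.multivariate_normal(mean, cov, POP)[:, :N_VAR]   # keep the variable part only

        F = np.array([objectives(x) for x in X])
        print("non-dominated solutions in final population:", len(nondominated(F)))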