
    Automated offspring sizing in evolutionary algorithms

    Evolutionary Algorithms (EAs) are a class of algorithms inspired by biological evolution. EAs are applicable to a wide range of problems; however, a number of parameters must be set in order to use an EA. The performance of an EA is extremely sensitive to these parameter values, and setting them often requires expert knowledge of EAs, which prevents EAs from being more widely adopted by non-experts. Parameter control, the automation of dynamic parameter value selection, has the potential not only to alleviate the burden of parameter tuning, but also to improve the performance of EAs on a variety of problem classes in comparison to employing fixed parameter values. The science of parameter control in EAs is, however, still in its infancy, and most published work in this area has concentrated on just a subset of the standard parameters. In particular, the control of offspring size has so far received very little attention, despite its importance for balancing exploration and exploitation. This thesis introduces three novel methods for controlling offspring size: Self-Adaptive Offspring Sizing (SAOS), Futility-Based Offspring Sizing (FuBOS), and Diversity-Guided Futility-Based Offspring Sizing (DiGFuBOS). EAs employing these methods are compared to each other and to a highly tuned, fixed offspring size EA on a wide range of test problems. It is shown that an EA employing FuBOS or DiGFuBOS performs on par with the highly tuned, fixed offspring size EA on many complex problem instances, while being far more efficient in terms of fitness evaluations. Furthermore, DiGFuBOS does not introduce any new user parameters, thus truly alleviating the burden of tuning the offspring size parameter in EAs. --Abstract, page iii
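    The abstract does not spell out FuBOS's actual futility criterion, but the core idea of generating offspring until further fitness evaluations look futile can be illustrated with a minimal sketch. Everything below is assumed for illustration only: the mutate and fitness callables, and a simple no-improvement patience rule standing in for the thesis's real stopping condition.

        import random

        def fubos_offspring(parents, mutate, fitness, patience=5, max_size=100):
            """Generate offspring one at a time, stopping once further offspring
            look futile. The stopping rule here (no improvement over the best
            offspring for `patience` consecutive tries) is a hypothetical
            stand-in for the thesis's actual futility criterion."""
            offspring, best, stale = [], float("-inf"), 0
            while len(offspring) < max_size and stale < patience:
                child = mutate(random.choice(parents))
                f = fitness(child)
                offspring.append((child, f))
                if f > best:
                    best, stale = f, 0   # progress: reset the futility counter
                else:
                    stale += 1           # another evaluation without improvement
            return offspring

    Because the loop stops as soon as extra offspring stop paying off, the effective offspring size adapts per generation without a user-tuned value, which is the property the thesis highlights.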

    Evolutionary Reinforcement Learning: A Survey

    Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments. The integration of RL with deep learning has recently resulted in impressive achievements in a wide range of challenging tasks, including board games, arcade games, and robot control. Despite these successes, several crucial challenges remain: brittle convergence properties caused by sensitive hyperparameters, difficulties in temporal credit assignment with long time horizons and sparse rewards, a lack of diverse exploration, especially in continuous search space scenarios, difficulties in credit assignment in multi-agent reinforcement learning, and conflicting objectives for rewards. Evolutionary computation (EC), which maintains a population of learning agents, has demonstrated promising performance in addressing these limitations. This article presents a comprehensive survey of state-of-the-art methods for integrating EC into RL, referred to as evolutionary reinforcement learning (EvoRL). We categorize EvoRL methods according to key research fields in RL, including hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL. We then discuss future research directions in terms of efficient methods, benchmarks, and scalable platforms. This survey serves as a resource for researchers and practitioners interested in the field of EvoRL, highlighting the important challenges and opportunities for future research, so that they can develop more efficient methods and tailored benchmarks, further advancing this promising cross-disciplinary research field.
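    As one concrete instance of the policy-search category the survey covers, a population-based evolution strategy can optimize a policy's parameter vector directly from episodic returns, sidestepping gradient-based credit assignment. This is a generic sketch rather than any specific method from the survey; episode_return is an assumed callable that runs one episode with the given parameters and returns the cumulative reward.

        import numpy as np

        def es_policy_search(episode_return, dim, pop_size=50, sigma=0.1,
                             lr=0.02, generations=100):
            """Minimal evolution-strategies policy search: sample Gaussian
            perturbations of the parameter vector, score each by episodic
            return, and step along the return-weighted average perturbation."""
            theta = np.zeros(dim)
            for _ in range(generations):
                eps = np.random.randn(pop_size, dim)
                returns = np.array([episode_return(theta + sigma * e) for e in eps])
                # Rank-normalise returns so the update is robust to reward scale.
                ranks = returns.argsort().argsort() / (pop_size - 1) - 0.5
                theta += lr / (pop_size * sigma) * (eps.T @ ranks)
            return theta

    The population of perturbed policies also gives the diverse exploration the abstract mentions: many distinct behaviours are evaluated per generation rather than one.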

    Exploring Task Mappings on Heterogeneous MPSoCs using a Bias-Elitist Genetic Algorithm

    Exploration of task mappings plays a crucial role in achieving high performance on heterogeneous multi-processor system-on-chip (MPSoC) platforms. The problem of optimally mapping a set of tasks onto a set of given heterogeneous processors for maximal throughput is known, in general, to be NP-complete. The problem is further exacerbated when multiple applications (i.e., bigger task sets) and the communication between tasks are also considered. Previous research has shown that Genetic Algorithms (GAs) are typically a good choice for this problem when the solution space is relatively small. However, as the size of the problem space increases, classic genetic algorithms suffer from long evolution times. To address this problem, this paper proposes a novel bias-elitist genetic algorithm that is guided by domain-specific heuristics to speed up the evolution process. Experimental results reveal that the proposed algorithm is able to handle large-scale task mapping problems and produces high-quality mapping solutions in only a short time.
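    The paper's specific heuristics are not reproduced in this abstract, so the sketch below only illustrates the general shape of a bias-elitist GA for task mapping: elitism preserves the best mappings across generations, and mutation is biased toward a plausible domain heuristic (reassigning a task to the least-loaded processor). The throughput callable and the least-loaded bias are assumptions for illustration, not the paper's actual operators.

        import random

        def bias_elitist_ga(num_tasks, procs, throughput, pop_size=40,
                            gens=200, elite=4, bias=0.5):
            """GA over task-to-processor mappings with elitism and a
            heuristically biased mutation operator."""
            def mutate(mapping):
                m = list(mapping)
                t = random.randrange(num_tasks)
                if random.random() < bias:
                    # Heuristic bias: move the task to the least-loaded processor.
                    load = {p: sum(1 for q in m if q == p) for p in procs}
                    m[t] = min(procs, key=load.get)
                else:
                    m[t] = random.choice(procs)  # unbiased random reassignment
                return m

            pop = [[random.choice(procs) for _ in range(num_tasks)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=throughput, reverse=True)
                elites = pop[:elite]  # elitism: the best mappings survive intact
                pop = elites + [mutate(random.choice(pop[:pop_size // 2]))
                                for _ in range(pop_size - elite)]
            return max(pop, key=throughput)

    The bias parameter is the knob the paper's idea turns on: raising it injects more domain knowledge per mutation, trading blind exploration for faster convergence on large problem spaces.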

    A hybrid EDA for load balancing in multicast with network coding

    Load balancing is one of the most important issues in the practical deployment of multicast with network coding; however, it has received little research attention. This paper studies how the traffic load of network-coding-based multicast (NCM) is disseminated in a communications network, with load balancing considered as an important factor. To this end, a hybrid estimation of distribution algorithm (EDA) is proposed, in which two novel schemes are integrated into the population-based incremental learning (PBIL) framework to strike a balance between exploration and exploitation, thus enhancing the efficiency of the stochastic search. The first scheme is a bi-probability-vector coevolution scheme, where two probability vectors (PVs) evolve independently with periodic individual migration. This scheme diversifies the population and improves global exploration during the search. The second scheme is a local search heuristic based on problem-specific domain knowledge that improves the NCM transmission plan at the expense of additional computational time. The heuristic can be utilized either as a local search operator to enhance local exploitation during the evolutionary process, or as a follow-up operator to improve the best-so-far solutions found after the evolution. Experimental results show the effectiveness of the proposed algorithms against a number of existing evolutionary algorithms.
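    For a binary-encoded problem, the bi-probability-vector scheme can be sketched as two PBIL probability vectors that normally learn from their own best samples but periodically learn from each other's champion (the migration step), with an optional local search hook refining samples as described above. The migration interval, learning rate, and local_search hook below are illustrative assumptions; fitness is an assumed callable mapping a 0/1 vector to a score.

        import numpy as np

        def bi_pv_pbil(fitness, n_bits, pop=50, gens=200, lr=0.1,
                       migrate_every=20, local_search=None):
            """PBIL with two independently evolving probability vectors (PVs)
            and periodic cross-PV migration, loosely following the bi-PV
            coevolution scheme described above."""
            pvs = [np.full(n_bits, 0.5), np.full(n_bits, 0.5)]
            best = None
            for g in range(gens):
                champs = []
                for pv in pvs:
                    samples = (np.random.rand(pop, n_bits) < pv).astype(int)
                    if local_search is not None:
                        # Optional problem-specific refinement of each sample.
                        samples = np.array([local_search(s) for s in samples])
                    champ = max(samples, key=fitness)
                    champs.append(champ)
                    if best is None or fitness(champ) > fitness(best):
                        best = champ
                for i, pv in enumerate(pvs):
                    # Migration: every `migrate_every` generations, learn from
                    # the other PV's champion instead of your own.
                    target = champs[1 - i] if g % migrate_every == 0 else champs[i]
                    pv += lr * (target - pv)
            return best

    Keeping the two PVs independent between migrations is what preserves diversity: each vector drifts toward its own region of the search space, and the occasional exchange keeps neither from stagnating.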