
    Self-adaptation of Genetic Operators Through Genetic Programming Techniques

    Here we propose an evolutionary algorithm that self-modifies its operators while candidate solutions are evolved. This tackles convergence and lack-of-diversity issues, leading to better solutions. Operators are represented as trees and are evolved using genetic programming (GP) techniques. The proposed approach is tested on real-valued benchmark functions, and an analysis of operator evolution is provided. Comment: Presented at GECCO 201
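
    The mechanism this abstract describes can be sketched concretely: mutation operators are encoded as small expression trees applied gene-wise to real-valued solutions, and the operators are themselves selected on the improvement they produce. The grammar, parameters and function names below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of co-evolving tree-encoded mutation operators with
# real-valued solutions; all names and the tiny operator grammar are
# hypothetical, not the paper's implementation.
import random

# Operator trees are nested tuples: ('add', a, b), ('mul', a, b),
# the leaf 'x' (the parent gene) or 'noise' (a fresh Gaussian sample).
def random_tree(depth=2):
    if depth == 0:
        return random.choice(['x', 'noise'])
    return (random.choice(['add', 'mul']),
            random_tree(depth - 1), random_tree(depth - 1))

def apply_tree(tree, x):
    if tree == 'x':
        return x
    if tree == 'noise':
        return random.gauss(0.0, 0.3)
    op, a, b = tree
    va, vb = apply_tree(a, x), apply_tree(b, x)
    return va + vb if op == 'add' else va * vb

def mutate(solution, tree):
    return [apply_tree(tree, gene) for gene in solution]

def sphere(v):  # benchmark function to minimize
    return sum(g * g for g in v)

random.seed(0)
solutions = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
operators = [random_tree() for _ in range(5)]

for gen in range(50):
    # An operator's fitness is the total improvement it yields on the population.
    scored = []
    for tree in operators:
        children = [mutate(s, tree) for s in solutions]
        gain = sum(sphere(s) - sphere(c) for s, c in zip(solutions, children))
        scored.append((gain, tree, children))
    scored.sort(key=lambda t: t[0], reverse=True)
    _, best_tree, children = scored[0]
    # Elitist survivor selection on the solution population.
    solutions = sorted(solutions + children, key=sphere)[:20]
    # Keep the best operators; real GP would recombine subtrees, a random
    # restart keeps this sketch short.
    operators = [t for _, t, _ in scored[:3]] + [random_tree() for _ in range(2)]

print('best objective value:', sphere(solutions[0]))
```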

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    The hypervolume indicator is an increasingly popular set measure for comparing the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with respect to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard, while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1+Δ) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for any given Δ, ÎŽ > 0 it calculates a solution with contribution at most (1+Δ) times the minimal contribution with probability at least (1−ÎŽ). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances that are intractable for exact algorithms (e.g., 10,000 solutions in 100 dimensions) within a few seconds. Comment: 22 pages, to appear in Theoretical Computer Science
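
    For intuition about what is being approximated, a naive Monte Carlo scheme can estimate each point's exclusive dominated volume from uniform samples and pick the apparent least contributor. This plain sketch carries none of the paper's adaptive sampling or (1+Δ) guarantees; all names in it are made up.

```python
# Naive Monte Carlo estimate of per-point hypervolume contributions
# (minimization). Illustrative only; the paper's algorithm samples
# adaptively and comes with probabilistic guarantees.
import random

def dominates(p, y):  # p weakly dominates y
    return all(pi <= yi for pi, yi in zip(p, y))

def estimate_contributions(points, ref, samples=100_000):
    dim = len(ref)
    lo = [min(p[i] for p in points) for i in range(dim)]
    box_vol = 1.0
    for l, r in zip(lo, ref):
        box_vol *= r - l
    counts = [0] * len(points)
    for _ in range(samples):
        y = [random.uniform(l, r) for l, r in zip(lo, ref)]
        dominators = [i for i, p in enumerate(points) if dominates(p, y)]
        if len(dominators) == 1:  # exclusively dominated: counts as contribution
            counts[dominators[0]] += 1
    return [c / samples * box_vol for c in counts]

random.seed(1)
pareto = [(0.1, 0.9), (0.3, 0.5), (0.5, 0.35), (0.9, 0.1)]
contrib = estimate_contributions(pareto, ref=(1.0, 1.0))
least = min(range(len(pareto)), key=contrib.__getitem__)
print('estimated contributions:', contrib)
print('apparent least contributor:', pareto[least])
```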

    Fitting Analysis using Differential Evolution Optimization (FADO): Spectral population synthesis through genetic optimization under self-consistency boundary conditions

    The goal of population spectral synthesis (PSS) is to decipher from the spectrum of a galaxy the mass, age and metallicity of its constituent stellar populations. This technique has been established as a fundamental tool in extragalactic research. It has been extensively applied to large spectroscopic data sets, notably the SDSS, leading to important insights into galaxy assembly history. However, despite significant improvements over the past decade, all current PSS codes suffer from two major deficiencies that inhibit us from gaining sharp insights into the star-formation history (SFH) of galaxies and potentially introduce substantial biases in studies of their physical properties (e.g., stellar mass, mass-weighted stellar age and specific star formation rate). These are i) the neglect of nebular emission in spectral fits and, consequently, ii) the lack of a mechanism that ensures consistency between the best-fitting SFH and the observed nebular emission characteristics of a star-forming (SF) galaxy. In this article, we present FADO (Fitting Analysis using Differential evolution Optimization): a conceptually novel, publicly available PSS tool with the distinctive capability of identifying the SFH that reproduces the observed nebular characteristics of an SF galaxy. This so-far unique self-consistency concept allows us to significantly alleviate degeneracies in current spectral synthesis. The innovative character of FADO is further augmented by its mathematical foundation: FADO is the first PSS code employing genetic differential evolution optimization. This, in conjunction with other unique elements in its mathematical concept (e.g., optimization of the spectral library using artificial intelligence, convergence tests, quasi-parallelization), results in key improvements with respect to computational efficiency and uniqueness of the best-fitting SFHs. Comment: 25 pages, 12 figures, A&A accepted
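
    Since FADO is built on differential evolution, a generic DE/rand/1/bin sketch conveys the optimization principle. The toy two-component 'spectral' fit below is an assumed illustration only, not FADO's code or its self-consistency machinery.

```python
# Generic DE/rand/1/bin, the scheme underlying FADO's optimizer; the toy
# two-component fit below is an assumed illustration, not FADO itself.
import random

def de_minimize(objective, bounds, pop_size=30, F=0.8, CR=0.9, generations=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees one mutated coordinate
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # differential mutation
                    lo, hi = bounds[j]
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])  # inherit from the target vector
            f_trial = objective(trial)
            if f_trial <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy stand-in for a spectral fit: recover the weights of two basis 'spectra'.
random.seed(3)
basis = [[1, 2, 3, 4], [4, 3, 2, 1]]
observed = [0.7 * b1 + 0.3 * b2 for b1, b2 in zip(*basis)]

def chi2(w):
    model = [w[0] * b1 + w[1] * b2 for b1, b2 in zip(*basis)]
    return sum((m - o) ** 2 for m, o in zip(model, observed))

weights, err = de_minimize(chi2, bounds=[(0, 1), (0, 1)])
print('recovered weights:', weights, 'residual:', err)
```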

    Generating Levels That Teach Mechanics

    The automatic generation of game tutorials is a challenging AI problem. While it is possible to generate annotations and instructions that explain to the player how the game is played, this paper focuses on generating a gameplay experience that introduces the player to a game mechanic. It evolves small levels for the Mario AI Framework that can only be beaten by an agent that knows how to perform specific actions in the game. It uses variations of a perfect A* agent that are limited in various ways, such as not being able to jump high or to see enemies, to test how failing to perform certain actions can stop the player from beating the level. Comment: 8 pages, 7 figures, PCG Workshop at FDG 2018, 9th International Workshop on Procedural Content Generation (PCG2018)
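
    The evaluation idea reduces to a simple predicate: a level scores highly when the unconstrained agent finishes it but the mechanically limited agent cannot. The sketch below captures that fitness logic with stand-in agents and a stand-in simulator; it does not use the actual Mario AI Framework API.

```python
# Stand-in fitness logic: a level "teaches" a mechanic when the perfect agent
# beats it but an agent stripped of that mechanic fails. The agents and the
# simulator are toys, not the Mario AI Framework API.
def teaches_mechanic(level, full_agent, limited_agent, simulate):
    full_wins = simulate(level, full_agent)
    limited_wins = simulate(level, limited_agent)
    if full_wins and not limited_wins:
        return 1.0  # the level requires the missing mechanic
    if full_wins and limited_wins:
        return 0.5  # beatable, but the mechanic is not needed
    return 0.0      # unbeatable levels make useless tutorials

# Toy model: a level is a list of gap widths; an agent is its maximum jump.
def simulate(level, agent_max_jump):
    return all(gap <= agent_max_jump for gap in level)

level = [2, 3, 4]  # the 4-wide gap demands the high-jump mechanic
print(teaches_mechanic(level, full_agent=4, limited_agent=2, simulate=simulate))  # 1.0
```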

    Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms

    In the field of multi-objective optimization algorithms, multi-objective Bayesian Global Optimization (MOBGO) is an important branch, in addition to evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes Gaussian Process models learned from previous objective function evaluations to decide the next evaluation site by maximizing or minimizing an infill criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement (EHVI), which shows good performance on a wide range of problems with respect to exploration and exploitation. However, it has so far been a challenge to calculate exact EHVI values efficiently. In this paper, an efficient algorithm for the computation of the exact EHVI for a generic case is proposed. This algorithm is based on partitioning the integration volume into a set of axis-parallel slices. Theoretically, the upper-bound time complexities are improved from the previous O(n^2) and O(n^3), for two- and three-objective problems respectively, to Θ(n log n), which is asymptotically optimal. This article generalizes the scheme to the higher-dimensional case by utilizing a new hyperbox decomposition technique proposed by DĂ€chert et al., EJOR, 2017. It also utilizes a generalization of the multilayered integration scheme that scales linearly in the number of hyperboxes of the decomposition. A speed comparison shows that the algorithm proposed in this paper significantly reduces computation time. Finally, this decomposition technique is applied to the calculation of the Probability of Improvement (PoI).
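
    What EHVI measures can be approximated by brute force: draw objective vectors from the Gaussian Process's predictive distribution and average the resulting hypervolume improvements over the current Pareto front. The Monte Carlo sketch below is for intuition only; the paper's contribution is an exact, asymptotically faster computation via box decomposition.

```python
# Monte Carlo reference for 2-D EHVI (minimization): sample objective vectors
# from the GP's predictive marginals and average the hypervolume improvement.
# Intuition only; the paper computes this exactly via box decomposition.
import random

def pareto_filter(points):
    pts = list(set(points))
    return [p for p in pts
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in pts)]

def hv2d(points, ref):
    front = sorted(pareto_filter(points))  # f1 ascending, hence f2 descending
    hv = 0.0
    for i, (x, y) in enumerate(front):
        next_x = front[i + 1][0] if i + 1 < len(front) else ref[0]
        hv += (next_x - x) * (ref[1] - y)  # slab between consecutive points
    return hv

def ehvi_mc(front, ref, mu, sigma, samples=20_000):
    base = hv2d(front, ref)
    total = 0.0
    for _ in range(samples):
        y = tuple(random.gauss(m, s) for m, s in zip(mu, sigma))
        if y[0] < ref[0] and y[1] < ref[1]:  # samples outside the box add nothing
            total += max(0.0, hv2d(front + [y], ref) - base)
    return total / samples

random.seed(4)
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(ehvi_mc(front, ref=(4.0, 4.0), mu=(1.5, 1.5), sigma=(0.3, 0.3)))
```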

    Analysis of Different Types of Regret in Continuous Noisy Optimization

    The performance measure of an algorithm is a crucial part of its analysis. Performance can be determined by studying the convergence rate of the algorithm in question: one studies some (hopefully convergent) sequence that measures how "good" the approximated optimum is compared to the real optimum. The concept of regret is widely used in the bandit literature for assessing the performance of an algorithm. The same concept is also used in the framework of optimization algorithms, sometimes under other names or without a specific name, and the numerical evaluation of the convergence rate of noisy optimization algorithms often involves approximations of regret. We discuss here two types of approximations of Simple Regret used in practice for the evaluation of algorithms for noisy optimization. We use specific algorithms of different natures and the noisy sphere function to show the following results: the approximation of Simple Regret used in some optimization testbeds, termed here Approximate Simple Regret, fails to estimate the Simple Regret convergence rate. We also discuss a recent new approximation of Simple Regret, which we term Robust Simple Regret, and show its advantages and disadvantages. Comment: Genetic and Evolutionary Computation Conference 2016, Jul 2016, Denver, United States. 2016
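
    The gap between the two estimates is easy to reproduce on the noisy sphere: true Simple Regret uses the noiseless objective, while a testbed-style approximation that reuses a noisy evaluation of the recommended point is dominated by noise once the regret becomes small. The snippet below illustrates that effect under simplified assumed definitions, not the paper's exact formulations.

```python
# Minimal illustration on the noisy sphere: the true Simple Regret is tiny,
# but an approximation that reuses one noisy evaluation of the recommendation
# is dominated by the noise. Definitions here are simplified assumptions.
import random

def sphere(x):  # noiseless objective; the optimum value is 0 at the origin
    return sum(v * v for v in x)

def noisy_sphere(x):  # what the optimizer actually observes
    return sphere(x) + random.gauss(0.0, 1.0)

random.seed(2)
recommended = [0.01, -0.02, 0.005]  # recommendation after an optimization run

simple_regret = sphere(recommended) - 0.0               # needs the true optimum
approx_simple_regret = noisy_sphere(recommended) - 0.0  # one noisy sample
print('Simple Regret:            ', simple_regret)
print('Approximate Simple Regret:', approx_simple_regret)  # noise-dominated
```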
    • 

    corecore