
    Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings

    While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can also be effective in discrete settings. We consider the $(1+(\lambda,\lambda))$ GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on OneMax is linear. This is better than what any static population size $\lambda$ can achieve, and it is asymptotically optimal also among all adaptive parameter choices.
    Comment: This is the full version of a paper that is to appear at GECCO 201
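
    The update mechanism itself is easy to state. Below is a minimal Python sketch of the one-fifth success rule adapting an offspring count lam in a simple mutation-only EA on OneMax; the factor F, the mutation rate 1/n, and the F**0.25 increase on failure are illustrative assumptions, and this is not the authors' $(1+(\lambda,\lambda))$ GA.

```python
import random

def onemax(x):
    """Count the number of one-bits (the fitness to maximize)."""
    return sum(x)

def one_fifth_ea(n, F=1.5, max_iters=200_000):
    """Mutation-only EA whose offspring count lam follows the one-fifth
    success rule: shrink lam after a successful iteration, grow it slowly
    after a failure, so that a ~1/5 success rate keeps lam roughly steady."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    lam = 1.0
    for _ in range(max_iters):
        k = max(1, round(lam))
        best_y, best_fy = None, -1
        for _ in range(k):
            # Standard bit mutation with rate 1/n (illustrative choice).
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            fy = onemax(y)
            if fy > best_fy:
                best_y, best_fy = y, fy
        if best_fy > fx:                 # success: be greedier next round
            x, fx = best_y, best_fy
            lam = max(1.0, lam / F)
        else:                            # failure: invest more offspring
            lam *= F ** 0.25
        if fx == n:
            break
    return fx, lam
```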

    Faster Coordinate Descent via Adaptive Importance Sampling

    Coordinate descent methods employ random partial updates of decision variables in order to solve huge-scale convex optimization problems. In this work, we introduce new adaptive rules for the random selection of their updates. By adaptive, we mean that our selection rules are based on the dual residual or the primal-dual gap estimates and can change at each iteration. We theoretically characterize the performance of our selection rules, demonstrate improvements over the state of the art, and extend our theory and algorithms to general convex objectives. Numerical evidence with hinge-loss support vector machines and Lasso confirms that practice follows the theory.
    Comment: appearing at AISTATS 201
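
    As a rough illustration of adaptive (non-uniform) coordinate selection, the sketch below runs coordinate descent on a least-squares objective and samples coordinates with probability proportional to the magnitude of their partial derivatives. This is a generic stand-in for the residual- and gap-based rules of the paper, not their exact criteria.

```python
import numpy as np

def adaptive_cd_least_squares(A, b, iters=1000, seed=0):
    """Coordinate descent on f(x) = 0.5 * ||Ax - b||^2 with coordinates
    sampled in proportion to |partial gradient| (a simple adaptive rule;
    the paper's rules use dual residuals / duality-gap estimates instead)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0) + 1e-12   # per-coordinate curvature
    r = A @ x - b                            # running residual Ax - b
    for _ in range(iters):
        g = A.T @ r                          # partial derivatives of f
        p = np.abs(g) + 1e-12
        p /= p.sum()
        j = rng.choice(n, p=p)               # adaptive, non-uniform selection
        step = g[j] / col_sq[j]              # exact minimization along coordinate j
        x[j] -= step
        r -= step * A[:, j]                  # update residual incrementally
    return x
```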

    A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization

    We show that a large class of Estimation of Distribution Algorithms, including, but not limited to, Covariance Matrix Adaptation, can be written as a Monte Carlo Expectation-Maximization algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs.
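
    For intuition, here is a toy Gaussian EDA step (close in spirit to the cross-entropy method): sample from the search distribution, select the best samples, and refit the Gaussian by maximum likelihood. Read as Monte Carlo EM, the selection plays the role of the E-step weights and the refit is the M-step. The elite fraction and the diagonal jitter are our illustrative choices, not the paper's construction.

```python
import numpy as np

def gaussian_eda_step(f, mean, cov, pop_size=200, elite_frac=0.2, rng=None):
    """One iteration of a simple Gaussian EDA for minimizing f.
    E-step analogue: score and select samples from the current distribution.
    M-step analogue: maximum-likelihood refit of the Gaussian on the selection."""
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=pop_size)
    scores = np.array([f(s) for s in samples])
    n_elite = max(2, int(elite_frac * pop_size))
    elite = samples[np.argsort(scores)[:n_elite]]       # keep the best samples
    new_mean = elite.mean(axis=0)
    new_cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(len(mean))
    return new_mean, new_cov

# Example: a few iterations on the sphere function.
mean, cov = np.zeros(5), np.eye(5)
for _ in range(50):
    mean, cov = gaussian_eda_step(lambda x: float(x @ x), mean, cov)
```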

    A Bayesian approach to constrained single- and multi-objective optimization

    This article addresses the problem of derivative-free (single- or multi-objective) optimization subject to multiple inequality constraints. Both the objective and constraint functions are assumed to be smooth, non-linear and expensive to evaluate. As a consequence, the number of evaluations that can be used to carry out the optimization is very limited, as in complex industrial design optimization problems. The method we propose to overcome this difficulty has its roots in both the Bayesian and the multi-objective optimization literatures. More specifically, an extended domination rule is used to handle objectives and constraints in a unified way, and a corresponding expected hyper-volume improvement sampling criterion is proposed. This new criterion is naturally adapted to the search of a feasible point when none is available, and reduces to existing Bayesian sampling criteria (the classical Expected Improvement (EI) criterion and some of its constrained/multi-objective extensions) as soon as at least one feasible point is available. The calculation and optimization of the criterion are performed using Sequential Monte Carlo techniques. In particular, an algorithm similar to the subset simulation method, which is well known in the field of structural reliability, is used to estimate the criterion. The method, which we call BMOO (for Bayesian Multi-Objective Optimization), is compared to state-of-the-art algorithms for single- and multi-objective constrained optimization.
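
    One way to compare solutions on objectives and constraints in a unified manner is a constrained-domination test of the following kind. This is a generic sketch in the spirit of an extended domination rule; it is not claimed to be the article's exact rule, and the hypervolume-improvement criterion and its Sequential Monte Carlo estimation are not reproduced here.

```python
import numpy as np

def constrained_dominates(f1, c1, f2, c2):
    """Return True if solution 1 dominates solution 2.
    f1, f2: objective vectors (to minimize); c1, c2: constraint values,
    with c <= 0 meaning the constraint is satisfied."""
    v1 = float(np.sum(np.maximum(c1, 0.0)))   # total constraint violation
    v2 = float(np.sum(np.maximum(c2, 0.0)))
    if v1 == 0.0 and v2 > 0.0:                # feasible beats infeasible
        return True
    if v1 > 0.0 and v2 > 0.0:                 # both infeasible: less violation wins
        return v1 < v2
    if v1 > 0.0:                              # infeasible never beats feasible
        return False
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))   # Pareto domination
```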

    AMGA: an archive-based micro genetic algorithm for multi-objective optimization

    In this paper, we propose a new evolutionary algorithm for multi-objective optimization. The proposed algorithm benefits from the existing literature and borrows several concepts from existing multi-objective optimization algorithms. It employs a new kind of selection procedure that benefits from the search history of the algorithm and attempts to minimize the number of function evaluations required to achieve the desired convergence. The algorithm works with a very small population size and maintains an archive of the best and most diverse solutions obtained, so as to report a large number of non-dominated solutions at the end of the simulation. Improved formulations for some of the existing diversity-preservation techniques are also proposed. Certain implementation aspects that facilitate better performance of the algorithm are discussed. Comprehensive benchmarking and comparison with some of the state-of-the-art multi-objective evolutionary algorithms demonstrate the improved search capability of the proposed algorithm.
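
    The archive idea can be sketched in a few lines: keep only mutually non-dominated solutions and prune when the archive grows too large. The pruning below is a naive placeholder (AMGA uses diversity-based truncation), so treat this as a generic illustration rather than the algorithm's actual procedure.

```python
def update_archive(archive, candidate, dominates, max_size=100):
    """Insert candidate into an archive of mutually non-dominated solutions.
    `dominates(a, b)` must return True if a dominates b."""
    if any(dominates(a, candidate) for a in archive):
        return archive                                  # candidate is dominated
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > max_size:
        archive.pop(0)   # naive truncation; diversity-based pruning would go here
    return archive
```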

    Revisiting Norm Optimization for Multi-Objective Black-Box Problems: A Finite-Time Analysis

    The complexity of Pareto fronts imposes a great challenge on the convergence analysis of multi-objective optimization methods. While most theoretical convergence studies have addressed finite-set and/or discrete problems, others have provided probabilistic guarantees, assumed a total order on the solutions, or studied their asymptotic behaviour. In this paper, we revisit the Tchebycheff weighted method in a hierarchical bandits setting and provide a finite-time bound on the Pareto-compliant additive $\epsilon$-indicator. To the best of our knowledge, this paper is one of the few that establish a link between weighted sum methods and quality indicators in finite time.
    Comment: submitted to Journal of Global Optimization. This article's notation and terminology are based on arXiv:1612.0841
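
    For reference, the weighted Tchebycheff scalarization the paper builds on takes the standard form g(x | w, z*) = max_i w_i * |f_i(x) - z*_i| for weights w and an ideal (reference) point z*; a one-line version is sketched below (the hierarchical-bandit analysis around it is not reproduced).

```python
import numpy as np

def tchebycheff(f_vals, weights, ideal):
    """Weighted Tchebycheff scalarization: max_i w_i * |f_i(x) - z_i*|."""
    f_vals, weights, ideal = map(np.asarray, (f_vals, weights, ideal))
    return float(np.max(weights * np.abs(f_vals - ideal)))
```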