
    Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort

    Many production-grade algorithms benefit from combining an asymptotically efficient algorithm, which solves big problem instances by splitting them into smaller ones, with an asymptotically inefficient algorithm that has a very small implementation constant and handles the small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of $O(N (\log N)^{M-1})$, with the Best Order Sort algorithm, which has a runtime of $O(N^2 M)$ but demonstrates the best practical performance among the quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically no worse than that of either original algorithm, while for large numbers of points it outperforms both by at least 20%. For smaller numbers of objectives, the speedup can be as large as four times.

    Comment: A two-page abstract of this paper will appear in the proceedings companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
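    The hybridization pattern the abstract describes can be illustrated with a minimal sketch using its own stable-sorting analogy: a divide-and-conquer routine that switches to a quadratic routine with a small constant once subproblems become small. The threshold value and all names below are illustrative assumptions, not taken from the paper; the paper applies the same switch inside a divide-and-conquer non-dominated sorter, delegating small subproblems to Best Order Sort.

```python
# Minimal sketch of the hybridization pattern, assuming a size threshold of 32
# (in practice the cut-off would be tuned empirically).
SMALL_THRESHOLD = 32


def insertion_sort(items):
    """Quadratic, but very fast on small inputs."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items


def merge(left, right):
    """Stable merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out


def hybrid_sort(items):
    """Divide and conquer, falling back to the quadratic routine on small inputs."""
    if len(items) <= SMALL_THRESHOLD:
        return insertion_sort(list(items))
    mid = len(items) // 2
    return merge(hybrid_sort(items[:mid]), hybrid_sort(items[mid:]))
```

    The asymptotically better method dominates the cost on large inputs, while the simple method's small constant pays off at the bottom of the recursion, which is exactly the effect the paper measures for non-dominated sorting.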

    Multiobjective evolutionary algorithm based on vector angle neighborhood

    Selection is a major driving force behind evolution and is a key feature of multiobjective evolutionary algorithms. Selection aims at promoting the survival and reproduction of individuals that are most fitted to a given environment. In the presence of multiple objectives, the major challenges faced by this operator come from the need to address both population convergence and diversity, which are conflicting to a certain extent. This paper proposes a new selection scheme for evolutionary multiobjective optimization. Its distinctive feature is a similarity measure for estimating the population diversity, based on the angle between the objective vectors: the smaller the angle, the more similar the individuals. The concept of similarity is exploited during mating, by defining the neighborhood, and during replacement, by determining the most crowded region, where the worst individual is identified. The latter is performed on the basis of a convergence measure that plays a major role in guiding the population towards the Pareto optimal front. The proposed algorithm is intended to exploit the strengths of decomposition-based approaches in promoting diversity among the population while reducing the user's burden of specifying weight vectors before the search. The proposed approach is validated by computational experiments with state-of-the-art algorithms on problems with different characteristics. The obtained results indicate a highly competitive performance of the proposed approach. Significant advantages are revealed when dealing with problems posing substantial difficulties in keeping diversity, including many-objective problems. The relevance of the suggested similarity and convergence measures is shown. The validity of the approach is also demonstrated on engineering problems.

    This work was supported by the Portuguese Fundação para a Ciência e Tecnologia under grant PEst-C/CTM/LA0025/2013 (Projecto Estratégico - LA 25 - 2013-2014 - Strategic Project - LA 25 - 2013-2014).
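    The angle-based similarity measure can be sketched as follows. This is a generic computation of the angle between two objective vectors; the translation by an ideal point and the function names are assumptions for illustration, and the paper's exact normalisation may differ.

```python
import numpy as np


def vector_angle(f_a, f_b, ideal=None):
    """Angle (in radians) between two objective vectors.

    A smaller angle means the two individuals occupy more similar
    directions in objective space. Translating by an ideal/reference
    point is an illustrative assumption, not necessarily the paper's
    exact normalisation.
    """
    f_a, f_b = np.asarray(f_a, float), np.asarray(f_b, float)
    if ideal is not None:
        f_a, f_b = f_a - np.asarray(ideal, float), f_b - np.asarray(ideal, float)
    denom = np.linalg.norm(f_a) * np.linalg.norm(f_b)
    if denom == 0.0:
        return 0.0  # degenerate vectors are treated as identical
    cos = np.clip(np.dot(f_a, f_b) / denom, -1.0, 1.0)
    return float(np.arccos(cos))


# Usage sketch: the mating neighborhood of a solution could be the set of
# population members with the smallest angles to it.
```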

    Multi-Objective Archiving

    Most multi-objective optimisation algorithms maintain an archive, explicitly or implicitly, during their search. Such an archive can be used solely to store high-quality solutions presented to the decision maker, but in many cases it also participates in the search process (e.g., as the population in evolutionary computation). Over the last two decades, archiving, the process of comparing new solutions with previous ones and deciding how to update the archive/population, has stood as an important issue in evolutionary multi-objective optimisation (EMO). This is evidenced by constant efforts from the community on developing various effective archiving methods, ranging from conventional Pareto-based methods to more recent indicator-based and decomposition-based ones. However, the focus of these efforts has been on empirical performance comparison in terms of specific quality indicators; there is a lack of systematic study of archiving methods from a general theoretical perspective. In this paper, we attempt to conduct a systematic overview of multi-objective archiving, in the hope of paving the way to understanding archiving algorithms from a holistic perspective of theory and practice and, more importantly, providing guidance on how to design theoretically desirable and practically useful archiving algorithms. In doing so, we also show that archiving algorithms based on weakly Pareto-compliant indicators (e.g., the epsilon indicator), as long as they are designed properly, can achieve the same theoretical desirables as archivers based on Pareto-compliant indicators (e.g., the hypervolume indicator). Such desirables include the property limit-optimal, the limit form of the possible optimal property that a bounded archiving algorithm can have with respect to the most general form of superiority between solution sets.

    Comment: 21 pages, 4 figures, journal
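    The archiving step described above, comparing a new solution with the stored ones and deciding how to update the archive, can be sketched with a plain Pareto-dominance archiver. This is a generic illustration only, not one of the paper's archivers; the indicator-based and decomposition-based methods it surveys use different acceptance and removal rules, and the capacity handling here is a placeholder assumption.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def update_archive(archive, candidate, capacity=None):
    """One archiving step over tuples of objective values.

    Generic Pareto-based sketch: reject dominated or duplicate candidates,
    remove archive members the candidate dominates, then append it. The
    truncation rule used when the archive exceeds `capacity` is a crude
    placeholder, not a recommended bounding strategy.
    """
    if any(dominates(a, candidate) or a == candidate for a in archive):
        return archive  # candidate is dominated or already present: reject
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if capacity is not None and len(archive) > capacity:
        archive.pop(0)  # placeholder truncation rule
    return archive
```

    Bounded archivers differ precisely in how that last truncation decision is made, which is where the paper's theoretical properties (such as limit-optimality) come into play.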

    Tutorials at PPSN 2016

    PPSN 2016 hosts a total of 16 tutorials covering a broad range of current research in evolutionary computation. The tutorials range from introductory to advanced and specialized, but all can be attended without prior requirements. All PPSN attendees are cordially invited to take this opportunity to learn about ongoing research activities in our field.

    MONEDA: scalable multi-objective optimization with a neural network-based estimation of distribution algorithm

    The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, this limitation is an obstacle to achieving a significant improvement over "standard" multi-objective optimization evolutionary algorithms. Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, the model-building problem has particular requirements that those methods do not meet and even evade. The focus of this paper is on the model-building issue and how it has not been properly understood and addressed by most MOEDAs. We delve into the roots of this matter and hypothesize about its causes. To gain a deeper understanding of the subject, we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs. This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the above demands. Thanks to its custom-made model-building algorithm, its preservation of elite individuals and its individual replacement scheme, MONEDA is capable of scalably solving continuous multi-objective optimization problems. It performs better than similar algorithms in terms of a set of quality indicators and computational resource requirements.

    This work has been funded in part by projects CNPq BJT 407851/2012-7, FAPERJ APQ1 211.451/2015, MINECO TEC2014-57022-C2-2-R and TEC2012-37832-C02-01.
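    To make the model-building step concrete, the sketch below shows one generation of a bare-bones continuous EDA that fits a single Gaussian to the selected individuals, which is exactly the kind of off-the-shelf model building the abstract criticises. MONEDA's actual components (the MB-GNG clustering model, elite preservation, and the individual replacement scheme) are not reproduced here, and the scalarised fitness is an assumption made for brevity.

```python
import numpy as np


def eda_generation(population, fitness, n_elite, rng):
    """One generation of a minimal continuous EDA (illustrative stand-in).

    `population` is an (N, d) array, `fitness` a length-N array assumed to
    be a scalarised ranking (lower is better), and `rng` a numpy Generator.
    """
    order = np.argsort(fitness)            # selection: best individuals first
    elite = population[order[:n_elite]]

    # Model building: estimate a single Gaussian from the selected set.
    mean = elite.mean(axis=0)
    cov = np.cov(elite, rowvar=False) + 1e-6 * np.eye(population.shape[1])

    # Sampling: draw the next population from the learned model.
    return rng.multivariate_normal(mean, cov, size=len(population))
```

    The abstract's argument is that such generic model builders ignore the specific requirements of the multi-objective case, which motivates replacing this step with a purpose-built clustering model such as MB-GNG.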

    A Toolkit for Generating Scalable Stochastic Multiobjective Test Problems

    Real-world optimization problems typically include uncertainties over various aspects of the problem formulation. Some existing algorithms are designed to cope with stochastic multiobjective optimization problems, but in order to benchmark them, a proper framework still needs to be established. This paper presents a novel toolkit that generates scalable, stochastic, multiobjective optimization problems. A stochastic problem is generated by transforming the objective vectors of a given deterministic test problem into random vectors. All random objective vectors are bounded by the feasible objective space defined by the deterministic problem. Therefore, the global solution of the deterministic problem can also serve as a reference for the stochastic problem. A simple parametric distribution for the random objective vector is defined in a radial coordinate system, allowing for direct control over the dual challenges of convergence towards the true Pareto front and diversity across the front. An example of a stochastic test problem, generated by the toolkit, is provided.
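    The radial construction can be sketched for the bi-objective case as follows: the deterministic objective vector is expressed in polar coordinates around an assumed ideal point, non-negative radial noise degrades convergence, and angular noise perturbs the position along the front. The specific distributions, parameter names, and the restriction to two objectives are illustrative assumptions, not the toolkit's actual definitions.

```python
import numpy as np


def stochastic_objectives(f_det, rng, sigma_r=0.05, sigma_theta=0.02, ideal=(0.0, 0.0)):
    """Turn a deterministic bi-objective vector into a random one (sketch).

    Radial noise (never improving on the deterministic point) controls the
    convergence challenge; angular noise controls spread along the front.
    """
    g = np.asarray(f_det, float) - np.asarray(ideal, float)
    r = np.linalg.norm(g)
    theta = np.arctan2(g[1], g[0])

    r_noisy = r * (1.0 + abs(rng.normal(0.0, sigma_r)))   # only worsen the radius
    theta_noisy = np.clip(theta + rng.normal(0.0, sigma_theta), 0.0, np.pi / 2)

    return np.asarray(ideal) + r_noisy * np.array([np.cos(theta_noisy), np.sin(theta_noisy)])


# Usage sketch:
# rng = np.random.default_rng(0)
# noisy = stochastic_objectives([0.3, 0.7], rng)
```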