Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort
Many production-grade algorithms benefit from combining an asymptotically
efficient algorithm for solving big problem instances, by splitting them into
smaller ones, and an asymptotically inefficient algorithm with a very small
implementation constant for solving small subproblems. A well-known example is
stable sorting, where mergesort is often combined with insertion sort to
achieve a constant but noticeable speed-up.
We apply this idea to non-dominated sorting. Namely, we combine the
divide-and-conquer algorithm, which has the currently best known asymptotic
runtime of O(n (log n)^(k-1)), with the Best Order Sort algorithm, which
has the runtime of O(n^2 k) but demonstrates the best practical performance
out of quadratic algorithms.
Empirical evaluation shows that the hybrid's running time is typically not
worse than that of both original algorithms, while for large numbers of points
it outperforms them by at least 20%. For smaller numbers of objectives, the
speedup can be as large as four times.
Comment: A two-page abstract of this paper will appear in the proceedings
companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO
2017).
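The mergesort-plus-insertion-sort hybrid mentioned as the motivating example can be sketched as follows; the threshold value and function names are illustrative choices, not taken from the paper:

```python
# Hybrid pattern: an asymptotically efficient divide-and-conquer algorithm
# (mergesort) hands small subproblems to an asymptotically inefficient
# algorithm with a small constant (insertion sort).
THRESHOLD = 16  # assumed cutoff; in practice tuned empirically

def insertion_sort(a):
    """In-place stable insertion sort; fast for tiny inputs."""
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_sort(a):
    """Stable mergesort that switches to insertion sort below THRESHOLD."""
    if len(a) <= THRESHOLD:
        return insertion_sort(list(a))
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    out, i, j = [], 0, 0          # stable merge of the two halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The paper applies the same dispatch-by-size idea with non-dominated sorting algorithms in place of the two sorting routines.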
Multiobjective evolutionary algorithm based on vector angle neighborhood
Selection is a major driving force behind evolution and is a key feature of multiobjective evolutionary algorithms. Selection aims at promoting the survival and reproduction of individuals that are most fitted to a given environment. In the presence of multiple objectives, major challenges faced by this operator come from the need to address both the population convergence and diversity, which are conflicting to a certain extent. This paper proposes a new selection scheme for evolutionary multiobjective optimization. Its distinctive feature is a similarity measure for estimating the population diversity, which is based on the angle between the objective vectors. The smaller the angle, the more similar the individuals. The concept of similarity is exploited during mating, by defining the neighborhood, and during replacement, by determining the most crowded region in which the worst individual is identified. The latter is performed on the basis of a convergence measure that plays a major role in guiding the population towards the Pareto optimal front. The proposed algorithm is intended to exploit strengths of decomposition-based approaches in promoting diversity among the population while reducing the user's burden of specifying weight vectors before the search. The proposed approach is validated by computational experiments with state-of-the-art algorithms on problems with different characteristics. The obtained results indicate a highly competitive performance of the proposed approach. Significant advantages are revealed when dealing with problems posing substantial difficulties in keeping diversity, including many-objective problems. The relevance of the suggested similarity and convergence measures is shown.
The validity of the approach is also demonstrated on engineering problems.
This work was supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant PEst-C/CTM/LA0025/2013 (Projecto Estratégico - LA 25 - 2013-2014 - Strategic Project - LA 25 - 2013-2014).
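The angle-based similarity measure described above can be sketched as follows; the function name and the clamping detail are our own assumptions, not the paper's code:

```python
import math

def vector_angle(f1, f2):
    """Angle (in radians) between two objective vectors.
    The smaller the angle, the more similar the individuals."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.acos(c)
```

Individuals whose objective vectors point in nearly the same direction get a small angle and would be treated as neighbors; orthogonal vectors get the maximal dissimilarity of pi/2 in the non-negative orthant.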
Multi-Objective Archiving
Most multi-objective optimisation algorithms maintain an archive explicitly
or implicitly during their search. Such an archive can be solely used to store
high-quality solutions presented to the decision maker, but in many cases may
participate in the search process (e.g., as the population in evolutionary
computation). Over the last two decades, archiving, the process of comparing
new solutions with previous ones and deciding how to update the
archive/population, stands as an important issue in evolutionary
multi-objective optimisation (EMO). This is evidenced by constant efforts from
the community on developing various effective archiving methods, ranging from
conventional Pareto-based methods to more recent indicator-based and
decomposition-based ones. However, the focus of these efforts is on empirical
performance comparison in terms of specific quality indicators; there is a
lack of systematic study of archiving methods from a general theoretical
perspective. In this paper, we attempt to conduct a systematic overview of
multi-objective archiving, in the hope of paving the way to understand
archiving algorithms from a holistic perspective of theory and practice, and
more importantly providing guidance on how to design theoretically desirable
and practically useful archiving algorithms. In doing so, we also show that
archiving algorithms based on weakly Pareto compliant indicators (e.g.,
epsilon-indicator), as long as designed properly, can achieve the same
theoretical desirables as archivers based on Pareto compliant indicators (e.g.,
hypervolume indicator). Such desirables include the property limit-optimal, the
limit form of the possible optimal property that a bounded archiving algorithm
can have with respect to the most general form of superiority between solution
sets.
Comment: 21 pages, 4 figures, journal
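As a concrete illustration of the epsilon-indicator mentioned above, here is a minimal sketch of the additive epsilon indicator for minimisation problems, with solution sets given as lists of objective-value tuples (names and representation are illustrative):

```python
def additive_epsilon(A, B):
    """Additive epsilon indicator I_eps+(A, B) for minimisation:
    the smallest eps such that every point of B is weakly dominated
    by some point of A after shifting A by eps in every objective."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )
```

A value of zero (or less) means A already weakly dominates B pointwise; a positive value quantifies how far A must be translated to do so, which is what makes the indicator weakly Pareto compliant.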
Tutorials at PPSN 2016
PPSN 2016 hosts a total of 16 tutorials covering a broad range of current research in evolutionary computation. The tutorials range from introductory to advanced and specialized, but all can be attended without prior requirements. All PPSN attendees are cordially invited to take this opportunity to learn about ongoing research activities in our field.
Efficient Real-Time Hypervolume Estimation with Monotonically Reducing Error
This is the author accepted manuscript. The final version is available from ACM via the DOI in this record. The codebase for this paper is available at https://github.com/fieldsend/hypervolume
The hypervolume (or S-metric) is a widely used quality measure
employed in the assessment of multi- and many-objective evolutionary algorithms. It is also directly integrated as a component in
the selection mechanism of some popular optimisers. Exact hypervolume calculation becomes prohibitively expensive in real-time
applications as the number of objectives increases and/or the approximation set grows. As such, Monte Carlo (MC) sampling is often
used to estimate its value rather than exactly calculating it. This
estimation is inevitably subject to error. As standard with Monte
Carlo approaches, the standard error decreases with the square
root of the number of MC samples. We propose a number of real-time hypervolume estimation methods for unconstrained archives
— principally for use in real-time convergence analysis. Furthermore, we show how the number of domination comparisons can be
considerably reduced by exploiting incremental properties of the
approximated Pareto front. In these methods the estimation error
monotonically decreases over time for (i) a capped budget of samples per algorithm generation and (ii) a fixed budget of dedicated
computation time per optimiser generation for new MC samples.
Results are provided using an illustrative worst-case scenario with
rapid archive growth, demonstrating the orders-of-magnitude of
speed-up possible.
Engineering and Physical Sciences Research Council (EPSRC); Innovate UK
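The basic Monte Carlo estimation scheme the abstract builds on can be sketched as follows; this is a naive fixed-box sampler for minimisation, not the authors' incremental method, and the function name, sampling box, and parameters are our own assumptions:

```python
import random

def mc_hypervolume(front, ref, n_samples=10_000, seed=1):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimisation) up to the reference point `ref`. The standard error
    of the estimate shrinks with the square root of n_samples."""
    rng = random.Random(seed)
    dims = range(len(ref))
    # Bounding box: from the per-objective minima of the front to ref.
    lo = [min(p[d] for p in front) for d in dims]
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(lo[d], ref[d]) for d in dims]
        # Count the sample if any front point dominates it.
        if any(all(p[d] <= s[d] for d in dims) for p in front):
            hits += 1
    box = 1.0
    for d in dims:
        box *= ref[d] - lo[d]
    return box * hits / n_samples
```

Each sample costs one pass of domination checks against the front; the incremental bookkeeping the paper proposes is precisely about avoiding repeating those comparisons as the archive grows.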
Peeking beyond peaks: Challenges and research potentials of continuous multimodal multi-objective optimization
Multi-objective (MO) optimization, i.e., the simultaneous optimization of multiple conflicting objectives, is gaining more and more attention in various research areas, such as evolutionary computation, machine learning (e.g., (hyper-)parameter optimization), or logistics (e.g., vehicle routing). Many works in this domain mention the structural problem property of multimodality as a challenge from two classical perspectives: (1) finding all globally optimal solution sets, and (2) avoiding getting trapped in local optima. Interestingly, these streams seem to transfer many traditional concepts of single-objective (SO) optimization into claims, assumptions, or even terminology regarding the MO domain, but mostly neglect the understanding of the structural properties as well as the algorithmic search behavior on a problem's landscape. However, some recent works counteract this trend by investigating the fundamentals and characteristics of MO problems using new visualization techniques and gaining surprising insights. Using these visual insights, this work proposes a step towards a unified terminology to capture multimodality and locality in a broader way than is usually done. This enables us to investigate current research activities in multimodal continuous MO optimization and to highlight new implications and promising research directions for the design of benchmark suites, the discovery of MO landscape features, the development of new MO (or even SO) optimization algorithms, and performance indicators. For all these topics, we provide a review of ideas and methods but also an outlook on future challenges, research potential, and perspectives that result from recent developments.
MONEDA: scalable multi-objective optimization with a neural network-based estimation of distribution algorithm
The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, the last point is an obstacle to the achievement of a significant improvement regarding "standard" multi-objective optimization evolutionary algorithms. Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, the model-building problem has particular requirements that those methods do not meet and even evade. The focus of this paper is on the model-building issue and how it has not been properly understood and addressed by most MOEDAs. We delve down into the roots of this matter and hypothesize about its causes. To gain a deeper understanding of the subject we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs. This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model-building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the above demands. Thanks to its custom-made model-building algorithm, the preservation of elite individuals and its individual replacement scheme, MONEDA is capable of scalably solving continuous multi-objective optimization problems. It performs better than similar algorithms in terms of a set of quality indicators and computational resource requirements.
This work has been funded in part by projects CNPq BJT 407851/2012-7, FAPERJ APQ1 211.451/2015, MINECO TEC2014-57022-C2-2-R and TEC2012-37832-C02-01.
ETEA: A Euclidean minimum spanning tree-based evolutionary algorithm for multiobjective optimization
© the Massachusetts Institute of Technology
Abstract: The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in the space, where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distance between solutions (points), the properties of EMSTs have a close relation with the distribution and position information of solutions. This paper explores the properties of EMSTs and proposes an EMST-based evolutionary algorithm (ETEA) to solve multiobjective optimization problems (MOPs). Unlike most EMO algorithms that focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, in ETEA, four strategies are introduced: 1) an EMST-based crowding distance (ETCD) is presented to estimate the density of individuals in the population; 2) a distance comparison approach incorporating ETCD is used to assign the fitness value for individuals; 3) a fitness adjustment technique is designed to avoid partial overcrowding in environmental selection; 4) three diversity indicators (the minimum edge, degree, and ETCD) with regard to EMSTs are applied to determine the survival of individuals in archive truncation. From a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread.
Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/K001310/1, and the National Natural Science Foundation of China under Grant 61070088.
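The EMST underlying ETEA's density and truncation measures can be computed, for small point sets, with a straightforward O(n^2) Prim's algorithm over the complete Euclidean graph; this sketch is illustrative and not the authors' implementation:

```python
import math

def euclidean_mst(points):
    """Prim's algorithm on the complete Euclidean graph.
    Returns the EMST as a list of (parent, child) index pairs."""
    n = len(points)

    def dist(i, j):
        return math.dist(points[i], points[j])

    in_tree = [False] * n
    best = [math.inf] * n     # cheapest known connection cost per node
    parent = [-1] * n
    best[0] = 0.0             # grow the tree from point 0
    edges = []
    for _ in range(n):
        # Pick the cheapest node not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((parent[u], u))
        # Relax connection costs through the newly added node.
        for v in range(n):
            if not in_tree[v] and dist(u, v) < best[v]:
                best[v] = dist(u, v)
                parent[v] = u
    return edges
```

Quantities such as an EMST-based crowding measure can then be derived from the lengths of the edges incident to each point; for population sizes typical in EMO, the quadratic cost of this construction is usually acceptable.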