
    Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort

    Many production-grade algorithms benefit from combining an asymptotically efficient algorithm, which solves big problem instances by splitting them into smaller ones, with an asymptotically inefficient algorithm that has a very small implementation constant and handles the small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of $O(N (\log N)^{M - 1})$, with the Best Order Sort algorithm, which has a runtime of $O(N^2 M)$ but demonstrates the best practical performance among quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically no worse than that of either original algorithm, while for large numbers of points it outperforms them by at least 20%. For smaller numbers of objectives, the speed-up can be as large as four times. Comment: A two-page abstract of this paper will appear in the proceedings companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
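    Below is a minimal, hedged sketch of the hybridization pattern the abstract describes, illustrated with the stable-sorting example it cites (mergesort plus insertion sort). The cut-off value of 32 and the function names are illustrative assumptions, not taken from the paper.

    CUTOFF = 32  # hypothetical switch-over size; tuned empirically in practice

    def insertion_sort(a):
        # O(n^2) overall, but very fast on tiny inputs thanks to its small constant
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a

    def hybrid_sort(a):
        # O(n log n) divide-and-conquer that falls back to the quadratic
        # routine once subproblems are small enough
        if len(a) <= CUTOFF:
            return insertion_sort(a)
        mid = len(a) // 2
        left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    The same pattern carries over to the non-dominated sorting hybrid: the divide-and-conquer algorithm handles large point sets, and Best Order Sort takes over below a size threshold chosen empirically.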

    Multi-Objective Archiving

    Most multi-objective optimisation algorithms maintain an archive, explicitly or implicitly, during their search. Such an archive can be used solely to store high-quality solutions presented to the decision maker, but in many cases it also participates in the search process (e.g., as the population in evolutionary computation). Over the last two decades, archiving, the process of comparing new solutions with previous ones and deciding how to update the archive/population, has stood as an important issue in evolutionary multi-objective optimisation (EMO). This is evidenced by constant efforts from the community on developing various effective archiving methods, ranging from conventional Pareto-based methods to more recent indicator-based and decomposition-based ones. However, the focus of these efforts has been on empirical performance comparison in terms of specific quality indicators; there is a lack of systematic study of archiving methods from a general theoretical perspective. In this paper, we attempt a systematic overview of multi-objective archiving, in the hope of paving the way to understanding archiving algorithms from a holistic perspective of theory and practice, and, more importantly, of providing guidance on how to design theoretically desirable and practically useful archiving algorithms. In doing so, we also show that archiving algorithms based on weakly Pareto-compliant indicators (e.g., the epsilon indicator), as long as they are designed properly, can achieve the same theoretical desirables as archivers based on Pareto-compliant indicators (e.g., the hypervolume indicator). Such desirables include the property limit-optimal, the limit form of the possible optimal property that a bounded archiving algorithm can have with respect to the most general form of superiority between solution sets. Comment: 21 pages, 4 figures, journal
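    As a concrete illustration of the archiving process described above, here is a minimal sketch (assuming minimisation and an unbounded archive; it is not the paper's formal framework) of a Pareto-based archiver that compares each new solution with the stored ones and updates the archive accordingly.

    def dominates(a, b):
        # True if objective vector a Pareto-dominates b (minimisation assumed)
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def update_archive(archive, candidate):
        # Present one new objective vector to the archiver; return the updated archive.
        for member in archive:
            if member == candidate or dominates(member, candidate):
                return archive  # candidate is duplicated or dominated: reject it
        # accept the candidate and drop any members it dominates
        return [m for m in archive if not dominates(candidate, m)] + [candidate]

    Bounded archivers, including the indicator-based and decomposition-based ones surveyed in the paper, additionally decide which member to discard once the archive exceeds its size limit.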

    Multiobjective evolutionary algorithm based on vector angle neighborhood

    Selection is a major driving force behind evolution and is a key feature of multiobjective evolutionary algorithms. Selection aims at promoting the survival and reproduction of individuals that are most fitted to a given environment. In the presence of multiple objectives, the major challenges faced by this operator come from the need to address both population convergence and diversity, which are conflicting to a certain extent. This paper proposes a new selection scheme for evolutionary multiobjective optimization. Its distinctive feature is a similarity measure for estimating the population diversity, based on the angle between the objective vectors: the smaller the angle, the more similar the individuals. The concept of similarity is exploited during mating, by defining the neighborhood, and during replacement, by determining the most crowded region in which the worst individual is identified. The latter is performed on the basis of a convergence measure that plays a major role in guiding the population towards the Pareto optimal front. The proposed algorithm is intended to exploit the strengths of decomposition-based approaches in promoting diversity among the population, while reducing the user's burden of specifying weight vectors before the search. The proposed approach is validated by computational experiments with state-of-the-art algorithms on problems with different characteristics. The obtained results indicate a highly competitive performance of the proposed approach. Significant advantages are revealed when dealing with problems that pose substantial difficulties in keeping diversity, including many-objective problems. The relevance of the suggested similarity and convergence measures is shown. The validity of the approach is also demonstrated on engineering problems. This work was supported by the Portuguese Fundacao para a Ciencia e Tecnologia under grant PEst-C/CTM/LA0025/2013 (Projecto Estrategico - LA 25 - 2013-2014 - Strategic Project - LA 25 - 2013-2014).
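    The angle-based similarity measure can be sketched as follows. This is a minimal illustration; measuring the angle from an ideal point and the helper name nearest_neighbours are assumptions made for the example, not the paper's exact formulation.

    import math

    def angle(f1, f2, ideal):
        # Angle (radians) between two objective vectors translated by the ideal point;
        # the smaller the angle, the more similar the individuals.
        u = [a - z for a, z in zip(f1, ideal)]
        v = [b - z for b, z in zip(f2, ideal)]
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        if nu == 0.0 or nv == 0.0:
            return 0.0
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    def nearest_neighbours(index, objectives, ideal, k):
        # Indices of the k individuals whose objective vectors form the smallest
        # angle with individual `index`; usable as a mating neighborhood.
        angles = [(angle(objectives[index], f, ideal), j)
                  for j, f in enumerate(objectives) if j != index]
        return [j for _, j in sorted(angles)[:k]]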

    MONEDA: scalable multi-objective optimization with a neural network-based estimation of distribution algorithm

    The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, this limitation is an obstacle to achieving a significant improvement over "standard" multi-objective optimization evolutionary algorithms. Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, the model-building problem has particular requirements that those methods do not meet and even evade. The focus of this paper is on the model-building issue and how it has not been properly understood and addressed by most MOEDAs. We delve into the roots of this matter and hypothesize about its causes. To gain a deeper understanding of the subject, we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs. This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model-building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the above demands. Thanks to its custom-made model-building algorithm, the preservation of elite individuals and its individual replacement scheme, MONEDA is capable of scalably solving continuous multi-objective optimization problems. It performs better than similar algorithms in terms of a set of quality indicators and computational resource requirements. This work has been funded in part by projects CNPq BJT 407851/2012-7, FAPERJ APQ1 211.451/2015, MINECO TEC2014-57022-C2-2-R and TEC2012-37832-C02-01.
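    To make the model-building step concrete, here is a heavily simplified sketch of one EDA iteration. It is not MB-GNG (which is a modified growing neural gas); as a stand-in it groups the selected individuals crudely and samples offspring from per-group Gaussians. The use of numpy, the grouping by the first decision variable, and all parameter values are assumptions made for illustration only.

    import numpy as np

    def build_and_sample(selected, n_offspring, n_groups=3, rng=None):
        # selected: (n, d) array of decision vectors chosen by the selection step;
        # assumes n >= n_groups. Returns an (n_offspring, d) array of offspring.
        if rng is None:
            rng = np.random.default_rng()
        order = np.argsort(selected[:, 0])             # crude 1-D grouping
        groups = np.array_split(selected[order], n_groups)
        offspring = []
        for _ in range(n_offspring):
            g = groups[rng.integers(len(groups))]      # pick a group at random
            mean, std = g.mean(axis=0), g.std(axis=0) + 1e-6
            offspring.append(rng.normal(mean, std))    # sample from its Gaussian
        return np.array(offspring)

    MONEDA replaces this crude grouping with MB-GNG, whose custom-made clustering is designed to meet the model-building requirements discussed above.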

    Peeking beyond peaks: Challenges and research potentials of continuous multimodal multi-objective optimization

    Multi-objective (MO) optimization, i.e., the simultaneous optimization of multiple conflicting objectives, is gaining more and more attention in various research areas, such as evolutionary computation, machine learning (e.g., (hyper-)parameter optimization), or logistics (e.g., vehicle routing). Many works in this domain mention the structural problem property of multimodality as a challenge from two classical perspectives: (1) finding all globally optimal solution sets, and (2) avoiding getting trapped in local optima. Interestingly, these streams seem to transfer many traditional concepts of single-objective (SO) optimization into claims, assumptions, or even terminology regarding the MO domain, but mostly neglect the understanding of the structural properties as well as the algorithmic search behavior on a problem's landscape. However, some recent works counteract this trend by investigating the fundamentals and characteristics of MO problems using new visualization techniques, gaining surprising insights. Using these visual insights, this work proposes a step towards a unified terminology that captures multimodality and locality in a broader way than is usually done. This enables us to investigate current research activities in multimodal continuous MO optimization and to highlight new implications and promising research directions for the design of benchmark suites, the discovery of MO landscape features, the development of new MO (or even SO) optimization algorithms, and performance indicators. For all these topics, we provide a review of ideas and methods, but also an outlook on future challenges, research potential and perspectives resulting from recent developments.

    Efficient Real-Time Hypervolume Estimation with Monotonically Reducing Error

    The codebase for this paper is available at https://github.com/fieldsend/hypervolume. The hypervolume (or S-metric) is a widely used quality measure employed in the assessment of multi- and many-objective evolutionary algorithms. It is also directly integrated as a component in the selection mechanism of some popular optimisers. Exact hypervolume calculation becomes prohibitively expensive in real-time applications as the number of objectives increases and/or the approximation set grows. As such, Monte Carlo (MC) sampling is often used to estimate its value rather than calculate it exactly. This estimation is inevitably subject to error. As is standard with Monte Carlo approaches, the standard error decreases in proportion to the inverse of the square root of the number of MC samples. We propose a number of real-time hypervolume estimation methods for unconstrained archives, principally for use in real-time convergence analysis. Furthermore, we show how the number of domination comparisons can be considerably reduced by exploiting incremental properties of the approximated Pareto front. In these methods the estimation error monotonically decreases over time for (i) a capped budget of samples per algorithm generation and (ii) a fixed budget of dedicated computation time per optimiser generation for new MC samples. Results are provided using an illustrative worst-case scenario with rapid archive growth, demonstrating the orders of magnitude of speed-up possible. Engineering and Physical Sciences Research Council (EPSRC); Innovate UK.
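    For reference, the basic Monte Carlo estimator underlying such methods can be sketched as follows. Minimisation is assumed; this shows only the plain estimator, not the incremental, archive-aware schemes proposed in the paper, and the ideal point used to bound the sampling box is an assumption of the sketch.

    import random

    def mc_hypervolume(front, ideal, reference, n_samples=100_000, seed=0):
        # Estimate the hypervolume dominated by `front` relative to `reference`
        # by uniform sampling in the box [ideal, reference]; the standard error
        # shrinks in proportion to 1/sqrt(n_samples).
        rng = random.Random(seed)
        dims = len(reference)
        box_volume = 1.0
        for lo, hi in zip(ideal, reference):
            box_volume *= (hi - lo)
        hits = 0
        for _ in range(n_samples):
            p = [rng.uniform(lo, hi) for lo, hi in zip(ideal, reference)]
            # a sample counts if some front member is no worse in every objective
            if any(all(f[d] <= p[d] for d in range(dims)) for f in front):
                hits += 1
        return box_volume * hits / n_samples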

    DeepSQLi: Deep Semantic Learning for Testing SQL Injection

    Security is unarguably the most serious concern for Web applications, and the SQL injection (SQLi) attack is one of the most devastating attacks against them. Automatically testing for SQLi vulnerabilities is of utmost importance, yet it is unfortunately far from trivial to implement. This is because of the existence of a huge, or potentially infinite, number of variants and semantic possibilities of SQL leading to SQLi attacks on various Web applications. In this paper, we propose a deep natural language processing based tool, dubbed DeepSQLi, to generate test cases for detecting SQLi vulnerabilities. By adopting a deep learning based neural language model and sequence-of-words prediction, DeepSQLi is equipped with the ability to learn the semantic knowledge embedded in SQLi attacks, allowing it to translate user inputs (or a test case) into a new test case that is semantically related and potentially more sophisticated. Experiments are conducted to compare DeepSQLi with SQLmap, a state-of-the-art SQLi testing automation tool, on six real-world Web applications of different scales, characteristics and domains. Empirical results demonstrate the effectiveness and the remarkable superiority of DeepSQLi over SQLmap: more SQLi vulnerabilities can be identified using fewer test cases, whilst running much faster.