
    A test problem for visual investigation of high-dimensional multi-objective search

    An inherent problem in multiobjective optimization is that the visual observation of solution vectors with four or more objectives is infeasible, which brings major difficulties for algorithmic design, examination, and development. This paper presents a test problem, called the Rectangle problem, to aid the visual investigation of high-dimensional multiobjective search. Key features of the Rectangle problem are that the Pareto optimal solutions 1) lie in a rectangle in the two-variable decision space and 2) are similar (in the sense of Euclidean geometry) to their images in the four-dimensional objective space. In this case, it is easy to examine the behavior of objective vectors in terms of both convergence and diversity, by observing their proximity to the optimal rectangle and their distribution in the rectangle, respectively, in the decision space. Fifteen algorithms are investigated. The underperformance of Pareto-based algorithms as well as of most state-of-the-art many-objective algorithms indicates that the proposed problem not only is a good tool to help visually understand the behavior of multiobjective search in a high-dimensional objective space but also can be used as a challenging benchmark function to test an algorithm's ability to balance the convergence and diversity of solutions.
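
    As an illustration of the kind of construction described above, here is a minimal sketch that assumes (purely for illustration, not as the paper's exact definition) that each of the four objectives is the Euclidean distance from a two-variable solution to one corner of a target rectangle; the corner coordinates are made up:

    import numpy as np

    # Hypothetical corners of the optimal rectangle in the 2-D decision space
    # (illustrative values, not taken from the paper).
    CORNERS = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])

    def rectangle_objectives(x):
        """Map a 2-D decision vector to four objectives: the Euclidean
        distances to the four rectangle corners (an assumed formulation)."""
        x = np.asarray(x, dtype=float)
        return np.linalg.norm(CORNERS - x, axis=1)

    # Because the decision space is two-dimensional, a population can be
    # plotted directly against the optimal rectangle, while the four
    # objective values remain available for indicator computations.
    population = np.random.rand(100, 2) * [3.0, 2.0]
    objective_vectors = np.array([rectangle_objectives(p) for p in population])
    print(objective_vectors.shape)  # (100, 4)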

    A Convergence indicator for Multi-Objective Optimisation Algorithms

    Multi-objective optimisation algorithms have grown considerably in recent years, which creates the need for a way of comparing their results. In this sense, performance measures play a key role. In general, such measures address properties of these algorithms such as capacity, convergence, diversity, or convergence-diversity. Well-known measures include the generational distance (GD), inverted generational distance (IGD), hypervolume (HV), Spread ($\Delta$), averaged Hausdorff distance ($\Delta_p$), and the R2 indicator, among others. In this paper, we focus on proposing a new indicator to measure convergence based on the traditional formula for Shannon entropy. The main features of this measure are: 1) it does not require knowledge of the true Pareto set, and 2) it has a medium computational cost when compared with the hypervolume. Comment: Submitted to TEM
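
    For context, a short sketch of the generational distance (GD), one of the cited reference-based measures, in one of its common formulations; unlike the entropy-based indicator proposed above, GD requires a reference set such as a sampling of the true Pareto front:

    import numpy as np

    def generational_distance(approximation, reference, p=2):
        """Generational distance (common formulation): p-mean of the distance
        from each obtained objective vector to its nearest neighbour in a
        reference set (e.g., a sampling of the true Pareto front)."""
        approximation = np.asarray(approximation, dtype=float)
        reference = np.asarray(reference, dtype=float)
        # Distance from every approximation point to every reference point.
        d = np.linalg.norm(approximation[:, None, :] - reference[None, :, :], axis=2)
        nearest = d.min(axis=1)
        return (np.mean(nearest ** p)) ** (1.0 / p)

    # Toy usage with a hypothetical bi-objective front f1 + f2 = 1.
    ref = np.array([[t, 1.0 - t] for t in np.linspace(0.0, 1.0, 101)])
    approx = ref + 0.05  # a slightly shifted approximation
    print(generational_distance(approx, ref))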

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with respect to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most $(1+\varepsilon)$ times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless $P = NP$) nor to approximate it (unless $NP = BPP$). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for arbitrarily given $\varepsilon, \delta > 0$ it calculates a solution with contribution at most $(1+\varepsilon)$ times the minimal contribution with probability at least $(1-\delta)$. Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10000 solutions in 100 dimensions) within a few seconds. Comment: 22 pages, to appear in Theoretical Computer Science
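
    To make the quantity concrete, here is a simplified Monte Carlo sketch of estimating one solution's hypervolume contribution by sampling its exclusive box (in the sampling spirit of the paper, but not the authors' algorithm, and assuming minimisation with a user-chosen reference point):

    import numpy as np

    def mc_contribution(point, others, reference, n_samples=100_000, rng=None):
        """Monte Carlo estimate of the hypervolume contribution of `point`:
        the volume dominated by `point` but by no other solution, bounded by
        `reference`.  A simplified sketch, not the paper's algorithm."""
        rng = np.random.default_rng(rng)
        point = np.asarray(point, float)
        others = np.asarray(others, float)
        reference = np.asarray(reference, float)
        # Sample uniformly in the box [point, reference].
        samples = rng.uniform(point, reference, size=(n_samples, point.size))
        # A sample counts only if no other solution weakly dominates it.
        dominated = (others[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
        box_volume = np.prod(reference - point)
        return box_volume * np.mean(~dominated)

    # Toy front of three points; the estimate for the middle point is ~4.
    front = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
    print(mc_contribution(front[1], np.delete(front, 1, axis=0), reference=[5.0, 5.0]))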

    A nature-inspired multi-objective optimisation strategy based on a new reduced space searching algorithm for the design of alloy steels

    In this paper, a salient search and optimisation algorithm based on a new reduced space searching strategy is presented. This algorithm originates from an idea that relates to a simple experience when humans search for an optimal solution to a ‘real-life’ problem: when humans search for a candidate solution given a certain objective, a large area tends to be scanned first; should one succeed in finding clues in relation to the predefined objective, the search space is then greatly reduced for a more detailed search. Furthermore, this new algorithm is extended to the multi-objective optimisation case. Simulation results on some challenging benchmark problems suggest that both the proposed single-objective and multi-objective optimisation algorithms outperform some other well-known Evolutionary Algorithms (EAs). The proposed algorithms are further applied successfully to the optimal design problem of alloy steels, which aims at determining the optimal heat treatment regime and the required weight percentages for chemical composites to obtain the desired mechanical properties of steel, hence minimising production costs and achieving the overarching aim of ‘right-first-time production’ of metals.
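
    A toy sketch of the general idea described above (scan a wide region, then shrink the region around the best candidate found), shown here for a single-objective function; this is illustrative only and is not the authors' algorithm:

    import numpy as np

    def reduced_space_search(objective, lower, upper, iterations=20,
                             samples_per_iter=200, shrink=0.5, rng=None):
        """Toy space-reduction search: sample the current box, keep the best
        point, then centre a smaller box on it.  Illustrates the idea only."""
        rng = np.random.default_rng(rng)
        lower = np.asarray(lower, float)
        upper = np.asarray(upper, float)
        best_x, best_f = None, np.inf
        for _ in range(iterations):
            candidates = rng.uniform(lower, upper, size=(samples_per_iter, lower.size))
            values = np.apply_along_axis(objective, 1, candidates)
            i = int(np.argmin(values))
            if values[i] < best_f:
                best_x, best_f = candidates[i], values[i]
            # Shrink the box around the best point found so far.
            half_width = (upper - lower) * shrink / 2.0
            lower = np.maximum(lower, best_x - half_width)
            upper = np.minimum(upper, best_x + half_width)
        return best_x, best_f

    sphere = lambda x: float(np.sum(x ** 2))
    print(reduced_space_search(sphere, [-5.0] * 5, [5.0] * 5, rng=0))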

    Component-wise Analysis of Automatically Designed Multiobjective Algorithms on Constrained Problems

    The performance of multiobjective algorithms varies across problems, making it hard to develop new algorithms or apply existing ones to new problems. To simplify the development and application of new multiobjective algorithms, there has been an increasing interest in their automatic design from component parts. These automatically designed metaheuristics can outperform their human-developed counterparts. However, it is still uncertain which components contribute most to their performance improvement. This study introduces a new methodology to investigate the effects of the final configuration of an automatically designed algorithm. We apply this methodology to a well-performing Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) designed by the irace package on nine constrained problems. We then contrast the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the hypervolume. Our results indicate that the most influential components were the restart and update strategies, with higher increments in performance and more distinct metric values. Also, their relative influence depends on the problem difficulty: not using the restart strategy was more influential in problems where MOEA/D performs better, while the update strategy was more influential in problems where MOEA/D performs worst.

    Methods for many-objective optimization: an analysis

    Decomposition-based methods are often cited as the solution to problems related to many-objective optimization. Decomposition-based methods employ a scalarizing function to reduce a many-objective problem into a set of single-objective problems, which upon solution yields a good approximation of the set of optimal solutions. This set is commonly referred to as the Pareto front. In this work we explore the implications of using decomposition-based methods over Pareto-based methods from a probabilistic point of view. Namely, we investigate whether there is an advantage to using a decomposition-based method, for example using the Chebyshev scalarizing function, over Pareto-based methods.
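
    For reference, a minimal sketch of the Chebyshev (Tchebycheff) scalarising function mentioned above, in its standard weighted form with an ideal point z*; the weight vectors and ideal point in the usage example are arbitrary:

    import numpy as np

    def chebyshev(f, weights, ideal):
        """Chebyshev scalarising function: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
        Minimising g for a set of weight vectors yields a set of single-objective
        subproblems whose optima approximate the Pareto front."""
        f = np.asarray(f, float)
        return float(np.max(np.asarray(weights) * np.abs(f - np.asarray(ideal))))

    # Toy usage: two objective vectors compared under one weight vector,
    # with an assumed ideal point z* = (0, 0).
    print(chebyshev([0.2, 0.8], weights=[0.5, 0.5], ideal=[0.0, 0.0]))  # 0.4
    print(chebyshev([0.5, 0.5], weights=[0.5, 0.5], ideal=[0.0, 0.0]))  # 0.25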

    ETEA: A Euclidean minimum spanning tree-based evolutionary algorithm for multiobjective optimization

    © The Massachusetts Institute of Technology. The Euclidean minimum spanning tree (EMST), widely used in a variety of domains, is a minimum spanning tree of a set of points in space, where the edge weight between each pair of points is their Euclidean distance. Since the generation of an EMST is entirely determined by the Euclidean distances between solutions (points), the properties of EMSTs are closely related to the distribution and position information of solutions. This paper explores the properties of EMSTs and proposes an EMST-based Evolutionary Algorithm (ETEA) to solve multiobjective optimization problems (MOPs). Unlike most EMO algorithms that focus on the Pareto dominance relation, the proposed algorithm mainly considers distance-based measures to evaluate and compare individuals during the evolutionary search. Specifically, four strategies are introduced in ETEA: 1) an EMST-based crowding distance (ETCD) is presented to estimate the density of individuals in the population; 2) a distance comparison approach incorporating ETCD is used to assign fitness values to individuals; 3) a fitness adjustment technique is designed to avoid partial overcrowding in environmental selection; 4) three diversity indicators with regard to EMSTs (the minimum edge, degree, and ETCD) are applied to determine the survival of individuals in archive truncation. From a series of extensive experiments on 32 test instances with different characteristics, ETEA is found to be competitive against five state-of-the-art algorithms and its predecessor in providing a good balance among convergence, uniformity, and spread. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom under Grant EP/K001310/1 and by the National Natural Science Foundation of China under Grant 61070088.
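
    A small sketch of the underlying building block: constructing the EMST of a set of solution (objective) vectors and reading off its edge lengths as a crude density/spread cue, assuming SciPy's sparse-graph routines; the specific ETCD, fitness-adjustment, and truncation rules of ETEA are not reproduced here:

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def emst_edge_lengths(points):
        """Build the Euclidean minimum spanning tree of a point set and return
        the lengths of its edges (short edges indicate crowded regions)."""
        dist = squareform(pdist(np.asarray(points, float)))  # full distance matrix
        tree = minimum_spanning_tree(dist)                   # sparse matrix of tree edges
        return tree.data                                     # the n-1 edge weights

    population = np.random.rand(20, 3)   # e.g., 20 solutions, 3 objectives
    edges = emst_edge_lengths(population)
    print(len(edges), edges.mean())      # 19 edges and their mean length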

    Study of the sequential constraint-handling technique for evolutionary optimization with application to structural problems

    Engineering design problems are most frequently characterized by constraints that make them hard to solve and time-consuming. When evolutionary algorithms are used to solve these problems, constraints are often handled with the generic weighted sum method or with techniques specific to the problem at hand. Most commonly, all constraints are evaluated at each generation, and it is also necessary to fine-tune different parameters in order to obtain good results, which requires in-depth knowledge of the algorithm. Sequential constraint-handling techniques seem to be a promising alternative, because they do not require all constraints to be evaluated at each iteration and they are easy to implement. They nevertheless require the user to determine the order in which the constraints shall be evaluated. Therefore, two heuristics that allow finding a satisfying constraint sequence have been developed. Two sequential constraint-handling techniques using these heuristics have been tested against the weighted sum technique on the ten-bar structure benchmark. Both performed better than the weighted sum technique and are therefore easy-to-implement and powerful alternatives for solving engineering design problems.
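
    A toy sketch of the sequential idea described above: constraints are checked one at a time in a given order and evaluation stops at the first violation, so not every constraint is evaluated for every candidate; the constraint functions and ordering below are hypothetical, and the paper's ordering heuristics are not reproduced:

    def sequential_feasibility(x, constraints):
        """Evaluate constraints g_i(x) <= 0 one at a time in the given order and
        stop at the first violation, so cheap or frequently violated constraints
        can screen out bad candidates before expensive ones are computed."""
        for index, g in enumerate(constraints):
            value = g(x)
            if value > 0.0:           # violated: later constraints are skipped
                return False, index, value
        return True, None, 0.0

    # Toy usage with hypothetical constraints on a 2-variable design.
    constraints = [
        lambda x: x[0] + x[1] - 10.0,   # cheap geometric limit, checked first
        lambda x: x[0] ** 2 - 25.0,     # assumed stress-like limit
    ]
    print(sequential_feasibility([3.0, 4.0], constraints))   # (True, None, 0.0)
    print(sequential_feasibility([8.0, 6.0], constraints))   # (False, 0, 4.0)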