
    PasMoQAP: A Parallel Asynchronous Memetic Algorithm for solving the Multi-Objective Quadratic Assignment Problem

    Multi-Objective Optimization Problems (MOPs) have attracted growing attention during the last decades. Multi-Objective Evolutionary Algorithms (MOEAs) have been extensively used to address MOPs because they are able to approximate a set of non-dominated, high-quality solutions. The Multi-Objective Quadratic Assignment Problem (mQAP) is a MOP that generalizes the classical QAP, which has been extensively studied and used in several real-life applications. The mQAP takes as input several flows between the facilities, which generate multiple cost functions that must be optimized simultaneously. In this study, we propose PasMoQAP, a parallel asynchronous memetic algorithm to solve the Multi-Objective Quadratic Assignment Problem. PasMoQAP is based on an island model that structures the population into sub-populations. The memetic algorithm on each island evolves a reduced population of solutions, and the islands cooperate asynchronously by sending selected solutions to their neighbors. The experimental results show that our approach significantly outperforms all the island-based variants of the multi-objective evolutionary algorithm NSGA-II. We show that PasMoQAP is a suitable alternative for solving the Multi-Objective Quadratic Assignment Problem.
    Comment: 8 pages, 3 figures, 2 tables. Accepted at the IEEE Congress on Evolutionary Computation 2017 (CEC 2017).
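    The abstract describes the mQAP only in words; as a concrete illustration, the sketch below (names and data are illustrative, assuming the standard quadratic-assignment cost for each flow matrix) evaluates all objectives of one candidate permutation:

```python
import numpy as np

def mqap_costs(perm, dist, flows):
    """Evaluate every objective of an mQAP solution.

    perm  : permutation, perm[i] = location assigned to facility i
    dist  : n x n matrix of distances between locations
    flows : one n x n flow matrix per objective
    """
    # Distances between the locations assigned to each facility pair.
    d = dist[np.ix_(perm, perm)]
    # Standard QAP cost, once per flow matrix: sum_ij f_ij * d_ij.
    return [float(np.sum(f * d)) for f in flows]

# Illustrative bi-objective instance with random data.
rng = np.random.default_rng(0)
n = 5
dist = rng.integers(1, 10, (n, n))
flows = [rng.integers(0, 5, (n, n)) for _ in range(2)]
print(mqap_costs([2, 0, 4, 1, 3], dist, flows))  # one cost per flow matrix
```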

    On the Impact of Multiobjective Scalarizing Functions

    Recently, there has been a renewed interest in decomposition-based approaches for evolutionary multiobjective optimization. However, the impact of the choice of the underlying scalarizing function(s) is still far from being well understood. In this paper, we investigate the behavior of different scalarizing functions and their parameters. We first abstract from any specific algorithm and consider only the difficulty of the single scalarized problems in terms of the search ability of a (1+λ)-EA on biobjective NK-landscapes. Second, combining the outcomes of independent single-objective runs allows for more general statements on set-based performance measures. Finally, we investigate the correlation between the opening angle of the scalarizing function's underlying contour lines and the position of the final solution in the objective space. Our analysis is of fundamental nature and sheds more light on the key characteristics of multiobjective scalarizing functions.
    Comment: appears in Parallel Problem Solving from Nature - PPSN XIII, Ljubljana, Slovenia (2014).
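    For readers unfamiliar with the scalarizing functions under study, a minimal sketch of two common choices follows (the augmentation parameter rho is an illustrative knob that changes the opening angle of the contour lines mentioned above):

```python
import numpy as np

def weighted_sum(f, w):
    # Linear scalarization: contour lines are straight hyperplanes.
    return float(np.dot(w, f))

def augmented_tchebycheff(f, w, z_star, rho=0.0):
    # rho = 0 gives the plain weighted Tchebycheff function with
    # right-angled L-infinity contour lines; rho > 0 tilts them,
    # narrowing the opening angle.
    diff = w * np.abs(np.asarray(f, float) - z_star)
    return float(np.max(diff) + rho * np.sum(diff))
```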

    Multiplicative Approximations, Optimal Hypervolume Distributions, and the Choice of the Reference Point

    Many optimization problems arising in applications have to consider several objective functions at the same time. Evolutionary algorithms seem to be a very natural choice for dealing with multi-objective problems, as the population of such an algorithm can be used to represent the trade-offs with respect to the given objective functions. In this paper, we contribute to the theoretical understanding of evolutionary algorithms for multi-objective problems. We consider indicator-based algorithms whose goal is to maximize the hypervolume for a given problem by distributing μ points on the Pareto front. To gain new theoretical insights into the behavior of hypervolume-based algorithms, we compare their optimization goal to the goal of achieving an optimal multiplicative approximation ratio. Our studies are carried out for different Pareto front shapes of bi-objective problems. For the class of linear fronts and a class of convex fronts, we prove that maximizing the hypervolume gives the best possible approximation ratio when assuming that the extreme points have to be included in both distributions of the points on the Pareto front. Furthermore, we investigate the influence of the choice of the reference point on the approximation behavior of hypervolume-based approaches and examine Pareto fronts of different shapes by numerical calculations.
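    As a concrete reference for the quantity being maximized, here is a minimal sketch of the bi-objective hypervolume computation (a sketch assuming minimization and a mutually non-dominated point set, not the paper's own code):

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a mutually non-dominated set of 2-D objective
    vectors (minimization), relative to a reference point that is
    weakly dominated by every point."""
    # Sort by the first objective; the second then decreases.
    pts = sorted(points)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # area of the new slab
        prev_f2 = f2
    return hv

# Example: three points on a linear front, reference point (4, 4).
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0
```

    Note how the reference point enters every slab's area, which is why its choice shifts the optimal distribution of the μ points.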

    Investigating Normalization Bounds for Hypervolume-Based Infill Criterion for Expensive Multiobjective Optimization

    While solving expensive multi-objective optimization problems, there may be stringent limits on the number of allowed function evaluations. Surrogate models are commonly used for such problems, where calls to surrogates are made in lieu of calls to the true objective functions. The surrogates can also be used to identify infill points for evaluation, i.e., solutions that maximize certain performance criteria. One such infill criterion is the maximization of predicted hypervolume, which is the focus of this study. In particular, we are interested in investigating whether a better estimate of the normalization bounds could help improve the performance of the surrogate-assisted optimization algorithm. Towards this end, we propose a strategy to identify a better ideal point than the one that exists in the current archive. Numerical experiments are conducted on a range of problems to test the efficacy of the proposed method. The approach outperforms conventional forms of normalization in some cases, while providing comparable results for others. We provide critical insights into the search behavior and relate them to the underlying properties of the test problems.
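    A minimal sketch of the normalization step the study targets (names are illustrative; the ideal and nadir vectors would come from the archive or from an estimation strategy such as the one proposed):

```python
import numpy as np

def normalize(objs, ideal, nadir):
    """Map objective vectors into [0, 1]^m using estimated bounds.
    A poor estimate of the ideal point skews these values and, in
    turn, any hypervolume-based infill criterion computed from them."""
    span = np.maximum(np.asarray(nadir, float) - ideal, 1e-12)  # avoid /0
    return (np.asarray(objs, float) - ideal) / span
```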

    Shift-based density estimation for Pareto-based algorithms in many-objective optimization

    It is commonly accepted that Pareto-based evolutionary multiobjective optimization (EMO) algorithms encounter difficulties in dealing with many-objective problems. In these algorithms, the ineffectiveness of the Pareto dominance relation in a high-dimensional space leads diversity maintenance mechanisms to play the leading role during the evolutionary process, while the preference of these mechanisms for individuals in sparse regions results in final solutions that are widely distributed over the objective space but distant from the desired Pareto front. Intuitively, there are two ways to address this problem: 1) modifying the Pareto dominance relation and 2) modifying the diversity maintenance mechanism in the algorithm. In this paper, we focus on the latter and propose a shift-based density estimation (SDE) strategy. The aim of our study is to develop a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization. In contrast to traditional density estimation, which only involves the distribution of individuals in the population, SDE covers both the distribution and convergence information of individuals. The application of SDE in three popular Pareto-based algorithms demonstrates its usefulness in handling many-objective problems. Moreover, an extensive comparison with five state-of-the-art EMO algorithms reveals its competitiveness in balancing convergence and diversity of solutions. These findings not only show that SDE is a good alternative for tackling many-objective problems, but also present a general extension of Pareto-based algorithms in many-objective optimization.
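    The core shifting step can be summarized in a few lines; the sketch below (minimization assumed, Euclidean distance as the base estimator, names illustrative) follows the description above:

```python
import numpy as np

def sde_distances(objs):
    """Shift-based distances for density estimation. When measuring
    the distance from individual p to q, every objective of q that is
    better than p's is shifted up to p's value, so neighbors that
    converge well appear close (crowded) while poorly converged ones
    do not."""
    objs = np.asarray(objs, float)
    n = len(objs)
    dist = np.empty((n, n))
    for p in range(n):
        shifted = np.maximum(objs, objs[p])   # shift all q toward p
        dist[p] = np.linalg.norm(shifted - objs[p], axis=1)
        dist[p, p] = np.inf                   # ignore self-distance
    return dist  # feed into a kNN- or niching-style density estimator
```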

    Component-wise Analysis of Automatically Designed Multiobjective Algorithms on Constrained Problems

    The performance of multiobjective algorithms varies across problems, making it hard to develop new algorithms or apply existing ones to new problems. To simplify the development and application of new multiobjective algorithms, there has been an increasing interest in their automatic design from component parts. These automatically designed metaheuristics can outperform their human-developed counterparts. However, it is still uncertain which components are most influential in their performance improvement. This study introduces a new methodology to investigate the effects of the final configuration of an automatically designed algorithm. We apply this methodology to a well-performing Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) designed by the irace package on nine constrained problems. We then contrast the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the hypervolume. Our results indicate that the most influential components were the restart and update strategies, with higher increments in performance and more distinct metric values. Also, their relative influence depends on the problem difficulty: not using the restart strategy was more influential in problems where MOEA/D performs better, while the update strategy was more influential in problems where MOEA/D performs the worst.

    Objective reduction in many-objective optimization problems

    Many-objective optimization problems (MaOPs) are multi-objective optimization problems which have more than three objectives. MaOPs face significant challenges because of search efficiency, computational cost, decision making, and visualization. Many well-known multi-objective evolutionary algorithms do not scale well with an increasing number of objectives. Objective reduction can alleviate such difficulties. However, most research on objective reduction uses non-dominated sorting or Pareto ranking, which is effective only in problems with fewer than four objectives. In this research, we use two approaches to objective reduction: random-based and linear coefficient-based. We use the sum of ranks instead of Pareto ranking. When applied to many-objective problems, the sum of ranks has outperformed many other optimization approaches. We also use the age-layered population structure (ALPS) in our approach to remove premature convergence and improve results. The performance of the proposed methods has been studied extensively on the well-known DTLZ benchmark suite. The original GA and ALPS outperform the objective reduction algorithms in many DTLZ test cases. Among all reduction algorithms, the linear coefficient-based reduction algorithm provides better performance on some problems in this test suite. Random-based reduction is not an appropriate strategy for reducing objectives.
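    A minimal sketch of the sum-of-ranks scoring used in place of Pareto ranking (minimization assumed; ties are broken arbitrarily):

```python
import numpy as np

def sum_of_ranks(objs):
    """Rank each individual per objective (0 = best) and add the
    ranks; lower totals indicate better solutions overall."""
    objs = np.asarray(objs)
    # Double argsort per column yields each value's rank in that column.
    ranks = np.argsort(np.argsort(objs, axis=0), axis=0)
    return ranks.sum(axis=1)

# Example: three individuals, four objectives.
print(sum_of_ranks([[1, 9, 2, 3],
                    [2, 1, 1, 4],
                    [3, 5, 3, 1]]))  # [4 3 5]: the middle row is best
```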

    On the Construction of Pareto-Compliant Combined Indicators

    The most relevant property that a quality indicator (QI) is expected to have is Pareto compliance, which means that every time an approximation set strictly dominates another in a Pareto sense, the indicator must reflect this. The hypervolume indicator and its variants are the only unary QIs known to be Pareto-compliant, but there are many commonly used weakly Pareto-compliant indicators such as R2, IGD+, and ε+. Currently, an open research area is related to finding new Pareto-compliant indicators whose preferences are different from those of the hypervolume indicator. In this article, we propose a theoretical basis to combine existing weakly Pareto-compliant indicators with at least one being Pareto-compliant, such that the resulting combined indicator is Pareto-compliant as well. Most importantly, we show that the combination of Pareto-compliant QIs with weakly Pareto-compliant indicators leads to indicators that inherit properties of the weakly compliant indicators in terms of optimal point distributions. The consequences of these new combined indicators are threefold: (1) to increase the variety of available Pareto-compliant QIs by correcting weakly Pareto-compliant indicators, (2) to introduce a general framework for the combination of QIs, and (3) to generate new selection mechanisms for multiobjective evolutionary algorithms where it is possible to achieve/adjust desired distributions on the Pareto front.
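    The combination principle can be illustrated with a toy sketch (the linear form and names are assumptions for illustration, not the article's exact construction): if every component indicator is converted so that larger is better, one strictly Pareto-compliant term plus non-negatively weighted weakly compliant terms preserves strict orderings.

```python
def combined_indicator(hv, weak_vals, weights):
    """hv        : value of a Pareto-compliant QI such as hypervolume
                   (to be maximized)
    weak_vals : weakly Pareto-compliant indicator values converted so
                that larger is better (e.g., negated IGD+ or epsilon+)
    weights   : non-negative weights
    If set A strictly dominates set B, then hv(A) > hv(B) and every
    weak value of A is at least as good, so the weighted sum strictly
    prefers A: the combination is again Pareto-compliant."""
    return hv + sum(w * v for w, v in zip(weights, weak_vals))
```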

    An Interval-based Multiobjective Approach to Feature Subset Selection Using Joint Modeling of Objectives and Variables

    This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure, and Brier score, for the evaluation of feature subsets, and use them as the objectives of the problem. One characteristic of these objective functions is the existence of noise in their values, which should be appropriately handled during optimization. Our proposed algorithm consists of two major techniques which are specially designed for the feature subset selection problem. The first one is a solution ranking method based on interval values to handle the noise in the objectives of this problem. The second one is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. In particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm is able to obtain comparable or better performance on the tested datasets.
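    The interval-based comparison at the heart of such a ranking method can be sketched as follows (a sketch, not the paper's exact method; minimization assumed, with (lo, hi) bounds obtained from repeated noisy evaluations):

```python
def certainly_better(a, b):
    """a, b: (lo, hi) interval estimates of a noisy objective value.
    True only when a's whole interval lies below b's; overlapping
    intervals are incomparable and left to the ranking method."""
    return a[1] < b[0]

def interval_wins(values):
    # Score each interval by how many others it certainly beats.
    n = len(values)
    return [sum(certainly_better(values[i], values[j])
                for j in range(n) if j != i)
            for i in range(n)]
```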