
    PSA based multi objective evolutionary algorithms

    It has generally been acknowledged that both proximity to the Pareto front and a certain diversity along the front should be targeted when using evolutionary multiobjective optimization. Recently, a new partitioning mechanism, the Part and Select Algorithm (PSA), has been introduced. It was shown that this partitioning allows for the selection of a well-diversified subset out of an arbitrary given set while maintaining a low computational cost. When embedded into an evolutionary search (NSGA-II), the PSA significantly enhanced the exploitation of diversity. In this paper, the ability of the PSA to enhance evolutionary multiobjective algorithms (EMOAs) is further investigated. Two research directions are explored. The first deals with the integration of the PSA within an EMOA using a novel strategy. Contrary to most EMOAs, which give a higher priority to proximity than to diversity, this new strategy promotes a balance between the two. The suggested algorithm allows some dominated solutions to survive if they contribute to diversity. It is shown that such an approach substantially reduces the risk that the algorithm fails to find the Pareto front. The second research direction explores the use of the PSA as an archiving selection mechanism to improve the averaged Hausdorff distance obtained by existing EMOAs. It is shown that integrating the PSA into NSGA-II and Δp-EMOA as an archiving mechanism leads to algorithms that are superior to the base EMOAs on problems with disconnected Pareto fronts. © 2014 Springer International Publishing Switzerland
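
    The averaged Hausdorff distance Δp referred to above combines the generational distance GDp and its inverted counterpart IGDp. The following is a minimal, illustrative sketch of that indicator (not code from the paper), assuming the approximation set A and a discretized reference front R are given as NumPy arrays of objective vectors:

import numpy as np

def gd_p(A, R, p=2):
    # Generational distance GD_p: p-averaged distance from each point of A
    # to its nearest neighbour in the reference set R.
    d = np.min(np.linalg.norm(A[:, None, :] - R[None, :, :], axis=2), axis=1)
    return float(np.mean(d ** p) ** (1.0 / p))

def delta_p(A, R, p=2):
    # Averaged Hausdorff distance: Delta_p(A, R) = max(GD_p(A, R), IGD_p(A, R)),
    # where IGD_p(A, R) = GD_p(R, A).
    return max(gd_p(A, R, p), gd_p(R, A, p))

# Example: a three-point approximation measured against a denser reference front.
A = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
R = np.array([[x, 1.0 - x] for x in np.linspace(0.0, 1.0, 11)])
print(delta_p(A, R))

    Lower values indicate that the approximation set is both close to and well spread along the reference front, which is why an archive selected to minimize Δp rewards diversity as well as proximity.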

    On the Effect of the Cooperation of Indicator-Based Multiobjective Evolutionary Algorithms

    For almost 20 years, quality indicators (QIs) have promoted the design of new selection mechanisms for multiobjective evolutionary algorithms (MOEAs). Each indicator-based MOEA (IB-MOEA) has specific search preferences related to its baseline QI, producing Pareto front approximations with different properties. Consequently, an IB-MOEA based on a single QI has a limited range of multiobjective optimization problems (MOPs) on which it can be expected to perform well. This issue is emphasized when the associated Pareto front geometries are highly irregular. To overcome these issues, we propose an island-based multi-indicator algorithm (IMIA) that takes advantage of the search biases of multiple IB-MOEAs through a cooperative scheme. Our experimental results show that the cooperation of multiple IB-MOEAs allows IMIA to perform more robustly (considering several QIs) than the panmictic versions of its baseline IB-MOEAs as well as several state-of-the-art MOEAs. Additionally, IMIA shows a Pareto-front-shape invariance property, which makes it a remarkable optimizer when tackling MOPs with complex Pareto front geometries.
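
    The cooperative island scheme described above can be pictured with a small, self-contained sketch (purely illustrative, not the IMIA implementation): each island evolves its own population under its own selection rule, and islands periodically exchange a few individuals along a ring. The toy problem, the mutation operator, and the two stand-in selection rules below are assumptions made for the sake of a runnable example; a real IB-MOEA would rank candidates by a quality indicator such as hypervolume, R2, or IGD+.

import random

def evaluate(x):
    # Toy bi-objective problem (Schaffer): f1 = x^2, f2 = (x - 2)^2, minimise both.
    return (x * x, (x - 2.0) ** 2)

def mutate(x):
    return x + random.gauss(0.0, 0.1)

def select_sum(pop, n):
    # Stand-in "convergence-oriented" rule: keep the n candidates with the
    # smallest objective sum (a crude scalarization, not a real indicator).
    return sorted(pop, key=lambda x: sum(evaluate(x)))[:n]

def select_spread(pop, n):
    # Stand-in "diversity-oriented" rule: keep n candidates spread out along f1.
    ranked = sorted(pop, key=lambda x: evaluate(x)[0])
    step = max(1, len(ranked) // n)
    return ranked[::step][:n]

def island_model(select_fns, pop_size=20, generations=200, migrate_every=20, k=2):
    islands = [[random.uniform(-2.0, 4.0) for _ in range(pop_size)] for _ in select_fns]
    for gen in range(1, generations + 1):
        for i, island in enumerate(islands):
            offspring = [mutate(x) for x in island]
            islands[i] = select_fns[i](island + offspring, pop_size)
        if gen % migrate_every == 0:
            # Ring migration: each island sends k random individuals to its neighbour.
            for i in range(len(islands)):
                islands[(i + 1) % len(islands)].extend(random.sample(islands[i], k))
    return sorted(x for isl in islands for x in isl)

print(island_model([select_sum, select_spread])[:5])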

    Evolutionary Algorithms for Static and Dynamic Multiobjective Optimization

    Many real-world optimization problems consist of a number of conflicting objectives that have to be optimized simultaneously. Due to the presence of multiple conflicting objectives, there is no single solution that can optimize all the objectives. Therefore, the resulting multiobjective optimization problems (MOPs) resort to a set of trade-off optimal solutions, called the Pareto set in the decision space and the Pareto front in the objective space. Traditional optimization methods can at best find one solution in a single run, thereby making them inefficient for solving MOPs. In contrast, evolutionary algorithms (EAs) are able to approximate multiple optimal solutions in a single run. This strength makes EAs good candidates for solving MOPs. Over the past several decades, there has been increasing research interest in developing EAs and improving their performance, resulting in a large number of contributions towards the applicability of EAs for MOPs. However, the performance of EAs depends largely on the properties of the MOPs in question, e.g., static/dynamic optimization environments, simple/complex Pareto front characteristics, and low/high dimensionality. Different problem properties may pose distinct optimization difficulties to EAs. For example, dynamic (time-varying) MOPs are generally more challenging to EAs than static ones. Therefore, it is not trivial to further study EAs in order to make them widely applicable to MOPs with various optimization scenarios or problem properties. This thesis is devoted to exploring EAs’ ability to solve a variety of MOPs with different problem characteristics, attempting to widen EAs’ applicability and enhance their general performance. To start with, decomposition-based EAs are enhanced by incorporating two-phase search and niche-guided solution selection strategies so as to make them suitable for solving MOPs with complex Pareto fronts. Second, new scalarizing functions are proposed and their impact on evolutionary multiobjective optimization is extensively studied. On the basis of the new scalarizing functions, an efficient decomposition-based EA is introduced to deal with a class of hard MOPs. Third, a diversity-first-and-convergence-second sorting method is suggested to handle possible drawbacks of convergence-first sorting methods. The new sorting method is then combined with strength-based fitness assignment, with the aid of reference directions, to optimize MOPs of increasing objective dimensionality. After that, we study the field of dynamic multiobjective optimization, where objective functions and constraints can change over time. A new set of test problems covering a wide range of dynamic characteristics is introduced in an attempt to standardize test environments in dynamic multiobjective optimization, thereby aiding fair algorithm comparison and deep performance analysis. Finally, a dynamic EA is developed to tackle dynamic MOPs by exploiting the advantages of both generational and steady-state algorithms. All the proposed approaches have been extensively examined against existing state-of-the-art methods, showing fairly good performance in a variety of test scenarios. The research work presented in the thesis is the outcome of original and novel attempts to tackle some challenging issues in evolutionary multiobjective optimization. This research has not only extended the applicability of some of the existing approaches, such as decomposition-based or Pareto-based algorithms, to complex or hard MOPs, but also contributed to moving forward research in the field of dynamic multiobjective optimization with novel ideas including new test suites and novel algorithm designs.
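
    Decomposition-based EAs of the kind discussed above turn an MOP into a family of scalar subproblems via scalarizing functions. As background only (these are the classic functions, not the new ones proposed in the thesis), a minimal sketch of the two most common choices:

import numpy as np

def weighted_sum(f, w):
    # g_ws(x | w) = sum_i w_i * f_i(x); simple, but cannot reach the non-convex
    # parts of a Pareto front.
    return float(np.dot(w, f))

def tchebycheff(f, w, z_star):
    # g_te(x | w, z*) = max_i w_i * |f_i(x) - z*_i|, where z* is the (estimated)
    # ideal point; able to handle non-convex fronts.
    return float(np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star))))

# Example: two candidate objective vectors compared under one weight vector.
w, z = [0.5, 0.5], [0.0, 0.0]
print(weighted_sum([1.0, 3.0], w), tchebycheff([1.0, 3.0], w, z), tchebycheff([2.0, 2.0], w, z))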

    On the use of hypervolume for diversity measurement of Pareto front approximations

    In multiobjective optimization, a good quality indicator is of great importance to the performance assessment of algorithms. This paper investigates the effectiveness of the widely used hypervolume indicator, which is the only one found so far to strictly comply with Pareto dominance. While hypervolume is of undisputed success in assessing the quality of an approximation, it is sensitive to misleading cases, particularly for diversity assessment. To address this issue, this paper presents a modified hypervolume indicator based on linear projection for diversity evaluation. In addition to experimental studies that demonstrate the effectiveness of the proposed indicator, the indicator is introduced into the environmental selection of an indicator-based multiobjective evolutionary algorithm. Experiments show that the proposed indicator yields more evenly distributed approximations than the original hypervolume indicator.
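
    For reference, the hypervolume of a two-objective (minimization) approximation set with respect to a reference point can be computed with a simple sweep; the sketch below is the standard textbook construction, not the modified indicator proposed in the paper:

import numpy as np

def hypervolume_2d(points, ref):
    # Hypervolume of a 2-D minimization front w.r.t. reference point `ref`:
    # sum of the horizontal slabs spanned by successive non-dominated points.
    pts = np.asarray([p for p in points if p[0] < ref[0] and p[1] < ref[1]], dtype=float)
    if pts.size == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]          # sort by f1 ascending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: three non-dominated points against reference point (5, 5).
print(hypervolume_2d([[1, 4], [2, 2], [4, 1]], ref=[5, 5]))   # 11.0

    The paper's observation is that a high hypervolume value alone does not guarantee an even spread of points, which is what motivates the projection-based modification for diversity assessment.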

    Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning

    Parameter tuning is recognized today as a crucial ingredient when tackling an optimization problem. Several meta-optimization methods have been proposed to find the best parameter set for a given optimization algorithm and (set of) problem instances. When the objective of the optimization is some scalar quality of the solution produced by the target algorithm, this quality is also used as the basis for the quality of parameter sets. But in the case of multi-objective optimization by aggregation, the set of solutions is given by several single-objective runs with different weights on the objectives, and it turns out that the hypervolume of the final population of each single-objective run might be a better indicator of the global performance of the aggregation method than the best fitness in its population. This paper discusses this issue through a case study in multi-objective temporal planning using the evolutionary planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show that ParamILS distinguishes between the two approaches and demonstrate that, in this context, using the hypervolume indicator as the ParamILS target is indeed the best choice. Other issues pertaining to parameter tuning in the proposed context are also discussed.
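
    The contrast between the two tuning targets can be made concrete with a short sketch (an illustration under assumed names, not the paper's code): run_planner(config, w) is a hypothetical wrapper that performs one weighted single-objective run with parameter configuration config and returns the objective vectors of its final population, and hypervolume_2d is the sketch given earlier in this listing.

import numpy as np

def score_by_best_fitness(config, weight_vectors, run_planner):
    # Tuning target A: average, over weight vectors, of the best aggregated
    # (weighted-sum) fitness found in each run; lower is better.
    best = []
    for w in weight_vectors:
        pop = np.asarray(run_planner(config, w))   # final population, objective vectors
        best.append(float(np.min(pop @ np.asarray(w))))
    return float(np.mean(best))

def score_by_hypervolume(config, weight_vectors, run_planner, ref):
    # Tuning target B: average hypervolume of each run's final population;
    # higher is better, and it rewards runs whose whole population is good.
    return float(np.mean([hypervolume_2d(run_planner(config, w), ref)
                          for w in weight_vectors]))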

    Dynamic Multiobjective Optimization Using Evolutionary Algorithms

    Ph.D. thesis (Doctor of Philosophy)

    Pessimistic Off-Policy Multi-Objective Optimization

    Multi-objective optimization is a class of decision-making problems in which multiple conflicting objectives are optimized. We study offline optimization of multi-objective policies from data collected by an existing policy. We propose a pessimistic estimator for the multi-objective policy values that can be easily plugged into existing formulas for hypervolume computation and optimized. The estimator is based on inverse propensity scores (IPS), and improves upon a naive IPS estimator in both theory and experiments. Our analysis is general, and applies beyond our IPS estimators and methods for optimizing them. The pessimistic estimator can be optimized by policy gradients and performs well in all of our experiments.
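
    To make the IPS idea concrete, the sketch below shows a plain multi-objective IPS value estimate and a simple pessimistic variant that subtracts a standard-error-style uncertainty width before the values are fed into a hypervolume computation. This is a generic illustration of the ingredients named in the abstract, not the paper's estimator; the data layout and the width term are assumptions.

import numpy as np

def ips_values(logged, target_policy, n_objectives):
    # Plain IPS estimate of the target policy's value for each objective.
    # `logged` holds tuples (context, action, propensity, rewards), where
    # `propensity` is the logging policy's probability of the logged action
    # and `rewards` is one reward per objective.
    est = np.zeros(n_objectives)
    for x, a, p0, r in logged:
        w = target_policy(a, x) / p0              # importance weight pi(a|x) / pi0(a|x)
        est += w * np.asarray(r, dtype=float)
    return est / len(logged)

def pessimistic_values(logged, target_policy, alpha=1.0):
    # Pessimistic variant: lower the per-objective IPS estimate by `alpha`
    # standard errors of the weighted samples (a simple lower-confidence-bound
    # stand-in for the paper's pessimism mechanism).
    samples = np.array([(target_policy(a, x) / p0) * np.asarray(r, dtype=float)
                        for x, a, p0, r in logged])
    mean = samples.mean(axis=0)
    width = alpha * samples.std(axis=0, ddof=1) / np.sqrt(len(samples))
    return mean - width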