101 research outputs found

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    Get PDF
    The hypervolume indicator is an increasingly popular set measure used to compare the quality of two Pareto sets. The basic ingredient of most hypervolume-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with respect to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard, while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most $(1+\varepsilon)$ times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless $P = NP$) nor to approximate it (unless $NP = BPP$). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for any given $\varepsilon, \delta > 0$ it calculates a solution with contribution at most $(1+\varepsilon)$ times the minimal contribution with probability at least $1-\delta$. Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances that are intractable for exact algorithms (e.g., 10,000 solutions in 100 dimensions) within a few seconds. Comment: 22 pages, to appear in Theoretical Computer Science.
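
    The estimator at the core of such a sampling approach can be illustrated compactly. The sketch below is not the paper's algorithm, only the basic Monte Carlo idea it builds on: estimate a point's exclusive contribution by sampling uniformly in the box spanned by the point and the reference point, and count the samples that no other point dominates. All names are illustrative, and minimization is assumed.

        import numpy as np

        def approx_contribution(point, others, ref, n_samples=100_000, seed=None):
            """Monte Carlo estimate of the hypervolume contribution of `point`
            relative to the set `others`, w.r.t. reference point `ref`."""
            rng = np.random.default_rng(seed)
            point, others, ref = map(np.asarray, (point, others, ref))
            box = ref - point                          # sides of the sampling box
            samples = point + rng.random((n_samples, point.size)) * box
            # every sample is dominated by `point` by construction; keep only
            # those that no other point dominates (the exclusive region)
            dominated_elsewhere = np.zeros(n_samples, dtype=bool)
            for q in others:
                dominated_elsewhere |= np.all(samples >= q, axis=1)
            return float(np.prod(box)) * np.mean(~dominated_elsewhere)

    Roughly speaking, the paper's algorithm adaptively chooses how many samples each candidate point receives so that the least contributor is identified within a $(1+\varepsilon)$ factor with confidence $1-\delta$; the fixed sample budget above is only the simplest possible variant.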

    Seeding the Initial Population of Multi-Objective Evolutionary Algorithms: A Computational Study

    Full text link
    Most experimental studies initialize the population of evolutionary algorithms with random genotypes. In practice, however, optimizers are typically seeded with good candidate solutions, either previously known or created according to some problem-specific method. This "seeding" has been studied extensively for single-objective problems. For multi-objective problems, however, very little literature is available on approaches to seeding and their individual benefits and disadvantages. In this article, we narrow this gap with a comprehensive computational study on common real-valued test functions. We investigate the effect of two seeding techniques for five algorithms on 48 optimization problems with 2, 3, 4, 6, and 8 objectives. We observe that some functions (e.g., DTLZ4 and the LZ family) benefit significantly from seeding, while others (e.g., WFG) profit less. The advantage of seeding also depends on the examined algorithm.
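
    In its simplest form, a seeded initializer copies the known solutions into the population and fills the rest at random. The sketch below illustrates that idea with hypothetical names; it is not one of the specific seeding techniques compared in the study.

        import numpy as np

        def seeded_population(pop_size, lower, upper, seeds=(), seed=None):
            """Initial population for a real-valued EA: known-good `seeds` first
            (clipped to the variable bounds), the rest uniform random in the box."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lower, float), np.asarray(upper, float)
            pop = [np.clip(np.asarray(s, float), lo, hi) for s in list(seeds)[:pop_size]]
            while len(pop) < pop_size:
                pop.append(lo + rng.random(lo.size) * (hi - lo))
            return np.vstack(pop)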

    On the Construction of Pareto-Compliant Combined Indicators

    Get PDF
    The most relevant property that a quality indicator (QI) is expected to have is Pareto compliance, which means that every time an approximation set strictly dominates another in the Pareto sense, the indicator must reflect this. The hypervolume indicator and its variants are the only unary QIs known to be Pareto-compliant, but there are many commonly used weakly Pareto-compliant indicators such as R2, IGD+, and ε+. Currently, an open research area is finding new Pareto-compliant indicators whose preferences differ from those of the hypervolume indicator. In this article, we propose a theoretical basis for combining existing weakly Pareto-compliant indicators with at least one Pareto-compliant indicator, such that the resulting combined indicator is Pareto-compliant as well. Most importantly, we show that the combination of Pareto-compliant QIs with weakly Pareto-compliant indicators leads to indicators that inherit properties of the weakly compliant indicators in terms of optimal point distributions. The consequences of these new combined indicators are threefold: (1) to increase the variety of available Pareto-compliant QIs by correcting weakly Pareto-compliant indicators, (2) to introduce a general framework for the combination of QIs, and (3) to generate new selection mechanisms for multi-objective evolutionary algorithms where it is possible to achieve or adjust desired distributions on the Pareto front.
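
    To make the shape of such a combination concrete, the sketch below adds an exact 2-D hypervolume (Pareto-compliant) to a negated IGD+ (weakly Pareto-compliant) so that larger is better for both terms. The additive form and the weight `w` are illustrative assumptions only; the article's construction imposes specific conditions under which the combination provably remains Pareto-compliant.

        import numpy as np

        def hypervolume_2d(front, ref):
            """Exact 2-D hypervolume (minimization) of a nondominated set `front`
            w.r.t. reference point `ref`; larger is better."""
            pts = front[np.argsort(front[:, 0])]   # f1 ascending => f2 descending
            hv, right = 0.0, ref[0]
            for f1, f2 in pts[::-1]:               # sweep slabs from largest f1
                hv += (right - f1) * (ref[1] - f2)
                right = f1
            return hv

        def igd_plus(front, ref_set):
            """IGD+ (minimization): mean distance from each reference point to
            the region dominated by `front`; smaller is better."""
            d = np.maximum(front[None, :, :] - ref_set[:, None, :], 0.0)
            return float(np.mean(np.min(np.linalg.norm(d, axis=2), axis=1)))

        def combined_indicator(front, ref_point, ref_set, w=1.0):
            """Additive combination of a Pareto-compliant and a weakly
            Pareto-compliant indicator (IGD+ negated so larger is better)."""
            return hypervolume_2d(front, ref_point) - w * igd_plus(front, ref_set)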

    MHACO: a multi-objective hypervolume-based ant colony optimizer for space trajectory optimization

    Get PDF
    In this paper, we combine the concepts of hypervolume, ant colony optimization, and nondominated sorting to develop a novel multi-objective ant colony optimizer for global space trajectory optimization. In particular, the algorithm is first tested on three bi-objective space trajectory test problems: an Earth-Mars transfer, an Earth-Venus transfer, and a bi-objective version of the Jupiter Icy Moons Explorer mission (the first large-class mission of the European Space Agency’s Cosmic Vision 2015-2025 programme). Finally, the algorithm is applied to a four-objective low-thrust problem that describes the journey of a solar sail towards a polar orbit around the Sun. The results on both the test cases and the more complex problem are reported by comparing the performance of the novel algorithm with that of two popular multi-objective optimizers (i.e., a nondominated sorting genetic algorithm and a multi-objective evolutionary algorithm based on decomposition) in terms of the hypervolume metric. The numerical results of this study show that the multi-objective hypervolume-based ant colony optimization algorithm is not only competitive with the standard multi-objective algorithms when applied to the space trajectory test cases, but can also provide better Pareto fronts in terms of hypervolume values when applied to the complex solar sailing mission.
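
    Of the three named ingredients, nondominated sorting is the easiest to sketch in isolation. The version below follows the spirit of NSGA-II's fast nondominated sort on a matrix of objective values (minimization); it is illustrative and not MHACO's implementation.

        import numpy as np

        def nondominated_sort(F):
            """Partition the rows of objective matrix F into successive
            nondominated fronts, returned as lists of row indices."""
            n = len(F)
            dominates = [[] for _ in range(n)]      # indices dominated by i
            n_dominators = np.zeros(n, dtype=int)   # how many points dominate i
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                        dominates[i].append(j)
                        n_dominators[j] += 1
            fronts, current = [], [i for i in range(n) if n_dominators[i] == 0]
            while current:
                fronts.append(current)
                nxt = []
                for i in current:
                    for j in dominates[i]:
                        n_dominators[j] -= 1
                        if n_dominators[j] == 0:
                            nxt.append(j)
                current = nxt
            return fronts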

    Multi-objective optimization involving function approximators via Gaussian processes and hybrid algorithms employing direct hypervolume optimization

    Get PDF
    Advisor: Fernando José Von Zuben. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: The main purpose of this thesis is to bridge the gap between single-objective and multi-objective optimization and to show that connecting techniques from both ends can lead to improved results. To reach this goal, we provide contributions in three directions. First, we show the connection between the optimality of a mean loss and that of the hypervolume when evaluating a single solution, proving optimality bounds when the solution of one is applied to the other. Furthermore, an evaluation of the gradient of the hypervolume shows that it can be interpreted as a particular case of the weighted mean loss, where the weights increase as their associated losses increase. We hypothesize that this can help to train machine learning models, since samples with high error will also have high weight. An experiment with a neural network validates the hypothesis, showing improved performance. Second, we evaluate previous attempts at using gradient-based hypervolume optimization to solve multi-objective problems and analyze why they have failed. Based on this analysis, we propose a hybrid algorithm that combines gradient-based and evolutionary optimization. Experiments on the ZDT benchmark functions show improved performance and faster convergence compared with reference evolutionary algorithms. Finally, we prove necessary and sufficient conditions for a function to describe a valid Pareto frontier. Based on this result, we adapt a Gaussian process to penalize violations of the conditions and show that it provides better estimates than other approximation algorithms. In particular, it creates a curve that violates the constraints less than algorithms that do not consider the conditions, making it a more reliable performance indicator. We also show that a common optimization metric, when approximating functions with Gaussian processes, is a good indicator of the regions an algorithm should explore to find the Pareto frontier.
    Doutorado. Engenharia de Computação. Doutor em Engenharia Elétrica. 2015/09199-0. CAPES. FAPES
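
    The first contribution admits a compact illustration. For a single solution whose loss vector lies below a reference point, the hypervolume is the volume of the enclosed box, and each partial derivative is (minus) the product of the remaining box sides, so objectives whose losses sit closer to the reference receive larger relative weights; this is the weighted-mean-loss reading described above. The sketch below shows only this observation, not the thesis's algorithms.

        import numpy as np

        def single_solution_hv(losses, ref):
            """Hypervolume (minimization) of one solution: the volume of the
            box between its loss vector and the reference point (losses < ref)."""
            return float(np.prod(np.asarray(ref, float) - np.asarray(losses, float)))

        def hv_gradient(losses, ref):
            """d(HV)/d(losses): the j-th partial is -prod of the other sides,
            so higher losses (smaller sides) get relatively larger weights."""
            sides = np.asarray(ref, float) - np.asarray(losses, float)
            return -np.prod(sides) / sides   # component j: -prod_{k != j} sides[k]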