267 research outputs found

    Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

    Get PDF
    The hypervolume indicator is an increasingly popular set measure for comparing the quality of two Pareto sets. The basic ingredient of most hypervolume-indicator-based optimization algorithms is the calculation of the hypervolume contribution of single solutions with respect to a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard, while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1+ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for arbitrary given ε, δ > 0 it calculates a solution with contribution at most (1+ε) times the minimal contribution with probability at least 1 − δ. Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances that are intractable for exact algorithms (e.g., 10,000 solutions in 100 dimensions) within a few seconds. Comment: 22 pages, to appear in Theoretical Computer Science.
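    As a rough illustration of the kind of estimator such a result concerns, the following minimal Monte Carlo sketch estimates a single point's contribution by sampling the box it spans with the reference point. It is not the paper's algorithm, which adaptively allocates samples while racing the candidates for the least contributor; all names are illustrative.

```python
import math
import random

def mc_contribution(p, others, ref, eps=0.05, delta=0.05):
    """Estimate the hypervolume contribution of point p (minimization;
    every point must dominate the reference point ref).

    Illustrative sketch only: additive error eps * vol(box) holds with
    probability at least 1 - delta by a Chernoff-Hoeffding bound."""
    d = len(p)
    box_vol = 1.0
    for i in range(d):
        box_vol *= ref[i] - p[i]
    # sample size for the requested (eps, delta) guarantee
    n = int(math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2)))
    hits = 0
    for _ in range(n):
        # uniform sample from the box spanned by p and ref; p dominates it
        x = [p[i] + random.random() * (ref[i] - p[i]) for i in range(d)]
        # the sample lies in p's exclusive region iff no other point covers it
        if not any(all(q[i] <= x[i] for i in range(d)) for q in others):
            hits += 1
    return box_vol * hits / n
```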

    Bringing Order to Special Cases of Klee's Measure Problem

    Full text link
    Klee's Measure Problem (KMP) asks for the volume of the union of n axis-aligned boxes in d-space. Omitting logarithmic factors, the best known algorithm has runtime O*(n^{d/2}) [Overmars, Yap '91]. Faster algorithms are known for several special cases: Cube-KMP (where all boxes are cubes), Unitcube-KMP (where all boxes are cubes of equal side length), Hypervolume (where all boxes share a vertex), and k-Grounded (where the projection onto the first k dimensions is a Hypervolume instance). In this paper we bring some order to these special cases by providing reductions among them. In addition to the trivial inclusions, we establish Hypervolume as the easiest of these special cases and show that the runtimes of Unitcube-KMP and Cube-KMP are polynomially related. More importantly, we show that any algorithm for one of the special cases with runtime T(n, d) implies an algorithm for the general case with runtime T(n, 2d), yielding the first non-trivial relation between KMP and its special cases. This allows us to transfer W[1]-hardness of KMP to all special cases, proving that no n^{o(d)} algorithm exists for any of the special cases under reasonable complexity-theoretic assumptions. Furthermore, assuming that there is no improved algorithm for the general case of KMP (no algorithm with runtime O(n^{d/2 − ε})), this reduction shows that there is no algorithm with runtime O(n^{⌊d/2⌋/2 − ε}) for any of the special cases. Under the same assumption we show a tight lower bound for a recent algorithm for 2-Grounded [Yildiz, Suri '12]. Comment: 17 pages.
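    To make the problem statement concrete, here is a brute-force sketch of general KMP via coordinate compression. It runs in roughly O((2n)^d) time, nowhere near the O*(n^{d/2}) bound discussed above, and is purely illustrative.

```python
from itertools import product

def kmp_volume(boxes):
    """Exact volume of the union of axis-aligned boxes by coordinate
    compression -- a slow baseline sketch, not a competitive algorithm.

    boxes: list of (lo, hi) pairs, lo and hi being d-dimensional tuples."""
    d = len(boxes[0][0])
    # the distinct coordinates on each axis induce an irregular grid
    grids = [sorted({b[0][i] for b in boxes} | {b[1][i] for b in boxes})
             for i in range(d)]
    total = 0.0
    for idx in product(*(range(len(g) - 1) for g in grids)):
        lo = [grids[i][idx[i]] for i in range(d)]
        hi = [grids[i][idx[i] + 1] for i in range(d)]
        # a grid cell is either entirely inside some box or disjoint from all
        if any(all(b[0][i] <= lo[i] and hi[i] <= b[1][i] for i in range(d))
               for b in boxes):
            vol = 1.0
            for i in range(d):
                vol *= hi[i] - lo[i]
            total += vol
    return total
```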

    Automatic Camera Control: A Dynamic Multi-Objective Perspective

    Get PDF

    Rail accessibility in Germany: Changing regional disparities between 1990 and 2020

    Get PDF
    Transport accessibility is an important location factor for households and firms. In the last few decades, technological and social developments have contributed to a reinvigorated role for passenger rail transport. However, rail accessibility is unevenly distributed in space. The introduction of high-speed rail has furthermore promoted a polarisation of accessibility between metropolises and peripheral areas in some European countries. In this article we analyse the development of rail accessibility at the regional level in Germany between 1990 and 2020 for 266 functional city-regions. Our results show two different facets: the number of regions that are directly connected to one another has decreased, but at the same time the spatial disparities of accessibility have decreased, albeit to a small extent. This development was strongest in East Germany after German reunification and was thus largely a consequence of the renovation of the conventional rail infrastructure, not of high-speed rail. Nevertheless, it can be concluded that the introduction of high-speed rail in Germany did not lead to an increase in accessibility disparities. Instead, the accessibility effects of high-speed rail in Germany seem to break the traditional dichotomy between core and periphery.
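    As a hypothetical illustration of how such a disparity trend can be quantified (the article does not state which measure it uses), a Gini coefficient over regional accessibility values is one standard choice:

```python
def gini(values):
    """Gini coefficient of regional accessibility values: 0 means perfectly
    even accessibility, values near 1 mean strong polarisation.

    Hypothetical helper, not the article's method -- shown only as one
    common way to quantify whether disparities grew or shrank."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, for ascending x
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```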

    Preference Articulation by Means of the R2 Indicator

    Get PDF
    In multi-objective optimization, set-based performance indicators have become the state of the art for assessing the quality of Pareto front approximations. As a consequence, they are also more and more used within the design of multi-objective optimization algorithms. The R2 and the hypervolume (HV) indicator are two popular examples. In order to understand the behavior and the approximations preferred by these indicators and algorithms, comprehensive knowledge of the indicators' properties is required. Whereas this knowledge is available for the HV, we presented a first approach in this direction for the R2 indicator only recently. In this paper, we build upon this knowledge and extend the considerations with respect to the integration of preferences into the R2 indicator. More specifically, we analyze the effect of the reference point, the domain of the weights, and the distribution of weight vectors on the optimization of μ solutions with respect to the R2 indicator. By means of theoretical findings and empirical evidence, we show the potential of these three possibilities using the optimal distribution of μ solutions for exemplary setups.
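    For reference, a minimal sketch of the standard unary R2 definition with weighted-Chebyshev utility functions; the reference point and the weight set are exactly the levers the paper analyzes.

```python
def r2_indicator(A, W, z_star):
    """Unary R2 indicator with weighted-Chebyshev utilities (minimization;
    smaller values are better). A: objective vectors of the approximation
    set, W: weight vectors, z_star: the reference (utopian) point."""
    total = 0.0
    for w in W:
        # best scalarized value any member of A achieves for this weight
        total += min(max(wi * abs(ai - zi)
                         for wi, ai, zi in zip(w, a, z_star))
                     for a in A)
    return total / len(W)

# e.g., a uniform bi-objective weight set:
# W = [(k / 10.0, 1.0 - k / 10.0) for k in range(11)]
```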

    Multiobjective exploration of the StarCraft map space

    Get PDF
    This paper presents a search-based method for generating maps for the popular real-time strategy (RTS) game StarCraft. We devise a representation of StarCraft maps suitable for evolutionary search, along with a set of fitness functions based on the predicted entertainment value of those maps, as derived from theories of player experience. A multiobjective evolutionary algorithm is then used to evolve complete StarCraft maps based on the representation and selected fitness functions. The output of this algorithm is a Pareto front approximation visualizing the tradeoff between the several fitness functions used, where each point on the front represents a viable map. We argue that this method is useful for both automatic and machine-assisted map generation, and in particular that the Pareto fronts are excellent design support tools for human map designers. This research was supported in part by the Danish Research Agency, Ministry of Science, Technology and Innovation; project name: Adaptive Game Content Creation using Computational Intelligence (AGameComIn); project number: 274-09-0083. Peer-reviewed.
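    The design-support artifact handed to the map designer is simply the non-dominated subset of the evaluated maps. A generic sketch follows (maximization assumed; the StarCraft-specific representation and fitness functions are omitted):

```python
def pareto_front(points):
    """Non-dominated subset of a list of fitness vectors (maximization,
    matching entertainment-value objectives that are to be maximized)."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```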

    Force-based Cooperative Search Directions in Evolutionary Multi-objective Optimization

    Get PDF
    In order to approximate the set of Pareto optimal solutions, several evolutionary multi-objective optimization (EMO) algorithms transfer the multi-objective problem into several independent single-objective ones by means of scalarizing functions. The choice of the scalarizing functions' underlying search directions, however, is typically problem-dependent and therefore difficult if no information about the problem characteristics is known before the search process. The goal of this paper is to present new ideas of how these search directions can be computed adaptively during the search process in a cooperative manner. Based on the idea of Newton's law of universal gravitation, solutions attract and repel each other in the objective space. Several force-based EMO algorithms are proposed and compared experimentally on general bi-objective ρMNK landscapes with different objective correlations. It turns out that the new approach is easy to implement, fast, and competitive with respect to a (μ+λ)-SMS-EMOA variant, in particular if the objectives show strong positive or negative correlations.
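    A minimal sketch of the gravitation analogy under simplifying assumptions (pure repulsion only; the paper proposes and compares several attraction/repulsion variants, so the details here are illustrative):

```python
import math

def force_direction(i, objs):
    """Resultant inverse-square 'force' on solution i exerted by all other
    solutions in objective space (repulsion pushes i away from its
    neighbours); the unit vector returned can serve as the search
    direction of a scalarizing function."""
    fi = objs[i]
    force = [0.0] * len(fi)
    for j, fj in enumerate(objs):
        if j == i:
            continue
        diff = [a - b for a, b in zip(fi, fj)]
        dist = math.sqrt(sum(d * d for d in diff)) or 1e-12
        # magnitude 1/dist^2 along the unit vector (fi - fj) / dist
        force = [f + d / dist ** 3 for f, d in zip(force, diff)]
    norm = math.sqrt(sum(f * f for f in force)) or 1e-12
    return [f / norm for f in force]
```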

    Multi-Objective Optimization with an Adaptive Resonance Theory-Based Estimation of Distribution Algorithm: A Comparative Study

    Get PDF
    Proceedings of: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011. The introduction of learning into the search mechanisms of optimization algorithms has been nominated as one of the viable approaches when dealing with complex optimization problems, in particular with multi-objective ones. One of the forms of carrying out this hybridization process is by using multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performance. In this work we argue that error-based learning, the class of learning most commonly used in MOEDAs, is responsible for current MOEDA underachievement. We present adaptive resonance theory (ART) as a suitable alternative learning paradigm and present a novel algorithm called multi-objective ART-based EDA (MARTEDA) that uses a Gaussian ART neural network for model-building and a hypervolume-based selector as described for the HypE algorithm. In order to assess the improvement obtained by combining two cutting-edge approaches to optimization, an extensive set of experiments is carried out. These experiments also test the scalability of MARTEDA as the number of objective functions increases. This work was supported by projects CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, CAM CONTEXTS (S2009/TIC-1485) and DPS2008-07029-C02-02.
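    For orientation, here is a sketch of the conventional Gaussian model-building step that serves as the baseline in this line of work; the function name and its scalar-fitness argument are illustrative placeholders, and MARTEDA itself substitutes the Gaussian ART network and the HypE-style selection described above.

```python
import random

def gaussian_eda_step(population, scalar_fitness, n_offspring, trunc=0.5):
    """One model-building/sampling step of a plain diagonal-Gaussian EDA:
    truncation-select the best individuals (scalar_fitness: lower is
    better, e.g. a hypervolume-based rank), fit a normal model per
    variable, and sample new offspring from it."""
    k = max(1, int(len(population) * trunc))
    selected = sorted(population, key=scalar_fitness)[:k]
    d = len(selected[0])
    means = [sum(x[i] for x in selected) / k for i in range(d)]
    stds = [max(1e-9, (sum((x[i] - means[i]) ** 2 for x in selected) / k)
                ** 0.5) for i in range(d)]
    return [[random.gauss(means[i], stds[i]) for i in range(d)]
            for _ in range(n_offspring)]
```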

    Multiobjective evolutionary algorithm based on vector angle neighborhood

    Get PDF
    Selection is a major driving force behind evolution and is a key feature of multiobjective evolutionary algorithms. Selection aims at promoting the survival and reproduction of individuals that are most fitted to a given environment. In the presence of multiple objectives, the major challenges faced by this operator come from the need to address both population convergence and diversity, which are conflicting to a certain extent. This paper proposes a new selection scheme for evolutionary multiobjective optimization. Its distinctive feature is a similarity measure for estimating the population diversity, which is based on the angle between the objective vectors: the smaller the angle, the more similar the individuals. The concept of similarity is exploited during mating, by defining the neighborhood, and during replacement, by determining the most crowded region in which the worst individual is identified. The latter is performed on the basis of a convergence measure that plays a major role in guiding the population towards the Pareto optimal front. The proposed algorithm is intended to exploit the strengths of decomposition-based approaches in promoting diversity among the population while reducing the user's burden of specifying weight vectors before the search. The proposed approach is validated by computational experiments with state-of-the-art algorithms on problems with different characteristics. The obtained results indicate a highly competitive performance of the proposed approach. Significant advantages are revealed when dealing with problems posing substantial difficulties in keeping diversity, including many-objective problems. The relevance of the suggested similarity and convergence measures is shown. The validity of the approach is also demonstrated on engineering problems. This work was supported by the Portuguese Fundação para a Ciência e a Tecnologia under grant PEst-C/CTM/LA0025/2013 (Projecto Estratégico - LA 25 - 2013-2014 - Strategic Project - LA 25 - 2013-2014).
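    A minimal sketch of the angle-based similarity measure and the crowding test it enables, under the assumption that objective vectors have been translated by the ideal point (as such schemes typically require); names are illustrative.

```python
import math

def angle(u, v):
    """Angle between two objective vectors: the smaller the angle, the
    more similar the individuals (vectors assumed translated by the
    ideal point so that all components are non-negative)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-12
    nv = math.sqrt(sum(b * b for b in v)) or 1e-12
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def most_crowded_pair(objs):
    """The pair of individuals with minimum pairwise angle, i.e. the most
    crowded region; a replacement step would then discard the worse of
    the two according to a convergence measure."""
    best, pair = float("inf"), None
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            a = angle(objs[i], objs[j])
            if a < best:
                best, pair = a, (i, j)
    return pair
```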