47 research outputs found

    Lower Bounds for the Average and Smoothed Number of Pareto Optima

    Smoothed analysis of multiobjective 0-1 linear optimization has drawn considerable attention recently. The number of Pareto-optimal solutions (i.e., solutions with the property that no other solution is at least as good in all coordinates and better in at least one) for multiobjective optimization problems is the central object of study. In this paper, we prove several lower bounds for the expected number of Pareto optima. Our basic result is a lower bound of $\Omega_d(n^{d-1})$ for optimization problems with $d$ objectives and $n$ variables under fairly general conditions on the distributions of the linear objectives. Our proof relates the problem of lower bounding the number of Pareto optima to results in geometry connected to arrangements of hyperplanes. We use our basic result to derive (1) to our knowledge, the first lower bound for natural multiobjective optimization problems; we illustrate this for the maximum spanning tree problem with randomly chosen edge weights, and our technique is sufficiently flexible to yield such lower bounds for other standard objective functions studied in this setting (such as multiobjective shortest path, TSP tour, and matching); and (2) a smoothed lower bound of $\min\{\Omega_d(n^{d-1.5}\,\phi^{(d-\log d)(1-\Theta(1/\phi))}),\, 2^{\Theta(n)}\}$ for a version of the 0-1 knapsack problem with $d$ profits under $\phi$-semirandom distributions. This improves the recent lower bound of Brunsch and Röglin.
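
    To make the central object concrete, the following Python sketch (illustrative, not from the paper) enumerates the Pareto optima of $d$ random linear objectives over all 0-1 vectors by brute force; the count it produces is the quantity whose expectation the lower bounds address. Full enumeration is only feasible for small $n$.

```python
import itertools
import random

def pareto_optima(solutions, objectives):
    """Return the solutions not dominated by any other solution.

    A solution x dominates y (for maximization) if x is at least as
    good in every objective and strictly better in at least one.
    """
    values = [tuple(obj(x) for obj in objectives) for x in solutions]
    optima = []
    for i, vi in enumerate(values):
        dominated = any(
            all(a >= b for a, b in zip(vj, vi)) and vj != vi
            for j, vj in enumerate(values) if j != i
        )
        if not dominated:
            optima.append(solutions[i])
    return optima

# Count Pareto optima for d random linear objectives over {0,1}^n.
n, d = 10, 3
weights = [[random.random() for _ in range(n)] for _ in range(d)]
objectives = [
    (lambda x, w=w: sum(wi * xi for wi, xi in zip(w, x))) for w in weights
]
solutions = list(itertools.product([0, 1], repeat=n))
print(len(pareto_optima(solutions, objectives)))
```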

    The smoothed number of Pareto-optimal solutions in bicriteria integer optimization


    Probabilistic Analysis of Discrete Optimization Problems

    We investigate the performance of exact algorithms for hard optimization problems under random inputs. In particular, we prove various structural properties that lead to two general average-case analyses applicable to a large class of optimization problems.

    In the first part we study the size of the Pareto curve for binary optimization problems with two objective functions. Pareto-optimal solutions can be seen as trade-offs between multiple objectives. While in the worst case the cardinality of the Pareto curve is exponential in the number of variables, we prove polynomial upper bounds for the expected number of Pareto points when at least one objective function is linear and exhibits sufficient randomness. Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the coefficients of the objective function. We apply this result to the constrained shortest path problem and to the knapsack problem. There are algorithms for both problems that can enumerate all Pareto-optimal solutions very efficiently, so our polynomial upper bound on the size of the Pareto curve implies that the expected running time of these algorithms is polynomial as well. For example, we obtain a bound of $O(n^4)$ for uniformly random knapsack instances, where $n$ denotes the number of available items.

    In the second part we investigate the performance of knapsack core algorithms, the predominant algorithmic concept in practice. The idea is to fix most variables to the values prescribed by the optimal fractional solution. The reduced problem has only polylogarithmic size on average and is solved using the Nemhauser/Ullmann algorithm. Applying the analysis of the first part, we can prove an upper bound of $O(n\,\mathrm{polylog}(n))$ on the expected running time. Furthermore, we extend our analysis to a harder class of random input distributions. Finally, we present an experimental study of knapsack instances for various random input distributions. We investigate structural properties including the size of the Pareto curve and the integrality gap, and we compare the running times of different implementations of core algorithms.

    The last part of the thesis introduces a semi-random input model for constrained binary optimization problems, which enables us to perform a smoothed analysis for a large class of optimization problems while at the same time taking care of the combinatorial structure of individual problems. Our analysis is centered around structural properties, called winner, loser, and feasibility gaps. These gaps describe the sensitivity of the optimal solution to slight perturbations of the input and can be used to bound the necessary accuracy as well as the complexity of solving an instance. We exploit the gaps in the form of an adaptive rounding scheme that increases the accuracy of calculation until the optimal solution is found. The strength of our techniques is illustrated by applications to various NP-hard optimization problems, for which we obtain the first algorithms with polynomial average-case/smoothed complexity.
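
    The Nemhauser/Ullmann algorithm mentioned above is simple enough to sketch. The following Python version (an illustrative rendering, with a made-up three-item instance) maintains the Pareto-optimal (weight, profit) pairs while sweeping over the items; its running time is proportional to the total size of the intermediate Pareto curves, which is exactly why a polynomial bound on the Pareto curve yields a polynomial expected running time.

```python
def nemhauser_ullmann(items):
    """Enumerate Pareto-optimal (weight, profit) pairs of a knapsack instance.

    items: list of (weight, profit) pairs.
    Returns the Pareto front over all 2^n item subsets, i.e. the pairs
    for which no other subset has at most the weight and more profit.
    """
    pareto = [(0, 0)]  # the empty solution
    for w, p in items:
        shifted = [(pw + w, pp + p) for pw, pp in pareto]
        # Merge the two fronts and keep a pair only if its profit beats
        # everything of smaller or equal weight (prunes dominated pairs).
        merged = sorted(pareto + shifted, key=lambda t: (t[0], -t[1]))
        front, best_profit = [], -1
        for weight, profit in merged:
            if profit > best_profit:
                front.append((weight, profit))
                best_profit = profit
        pareto = front
    return pareto

print(nemhauser_ullmann([(3, 4), (2, 3), (4, 5)]))
```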

    Smoothed Analysis of Selected Optimization Problems and Algorithms

    Optimization problems arise in almost every field of economics, engineering, and science. Many of these problems are well understood in theory, and sophisticated algorithms exist to solve them efficiently in practice. Unfortunately, in many cases the theoretically most efficient algorithms perform poorly in practice. On the other hand, some algorithms are much faster than theory predicts. This discrepancy is a consequence of the pessimism inherent in the framework of worst-case analysis, the predominant analysis concept in theoretical computer science. We study selected optimization problems and algorithms in the framework of smoothed analysis in order to narrow the gap between theory and practice. In smoothed analysis, an adversary specifies the input, which is subsequently slightly perturbed at random.

    As one example we consider the successive shortest path algorithm for the minimum-cost flow problem. While in the worst case the successive shortest path algorithm takes exponentially many steps to compute a minimum-cost flow, we show that its running time is polynomial in the smoothed setting.

    Another problem studied in this thesis is makespan minimization for scheduling with related machines. It seems unlikely that fast algorithms exist to solve this problem exactly. This is why we consider three approximation algorithms: the jump algorithm, the lex-jump algorithm, and the list scheduling algorithm. In the worst case, the approximation guarantees of these algorithms depend on the number of machines. We show that there is no such dependence in smoothed analysis.

    We also apply smoothed analysis to multicriteria optimization problems. In particular, we consider integer optimization problems with several linear objectives that have to be minimized simultaneously. We derive a polynomial upper bound for the size of the set of Pareto-optimal solutions, contrasting with the exponential worst-case lower bound.

    As the icing on the cake, we find that the insights gained from our smoothed analysis of the running time of the successive shortest path algorithm lead to the design of a randomized algorithm for finding short paths between two given vertices of a polyhedron. We see this result as an indication that, in the future, smoothed analysis might also result in the development of fast algorithms.
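
    For context on the headline result, here is a minimal Python sketch of the successive shortest path algorithm (a textbook rendering, not code from the thesis): it repeatedly augments flow along a cheapest residual s-t path found with Bellman-Ford. It assumes integer capacities, no parallel or antiparallel edges, and nonnegative edge costs (so the residual network never contains a negative cycle).

```python
from collections import defaultdict

def successive_shortest_paths(n, edges, s, t, demand):
    """Min-cost flow via successive shortest paths.

    n: number of nodes (0..n-1); edges: list of (u, v, capacity, cost).
    Sends `demand` units from s to t (or as much as possible) at
    minimum total cost; returns (flow, cost).
    """
    cap = defaultdict(dict)
    cost = {}
    for u, v, c, w in edges:
        cap[u][v] = c
        cap[v].setdefault(u, 0)           # residual (reverse) edge
        cost[(u, v)], cost[(v, u)] = w, -w

    flow = total_cost = 0
    while flow < demand:
        # Bellman-Ford: cheapest residual path (residual costs may be negative).
        dist = [float("inf")] * n
        parent = {}
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for v, c in cap[u].items():
                    if c > 0 and dist[u] + cost[(u, v)] < dist[v]:
                        dist[v] = dist[u] + cost[(u, v)]
                        parent[v] = u
        if dist[t] == float("inf"):
            break                         # t unreachable: demand infeasible

        # Bottleneck capacity along the cheapest path, then augment.
        delta, v = demand - flow, t
        while v != s:
            delta = min(delta, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= delta
            cap[v][u] += delta
            total_cost += delta * cost[(u, v)]
            v = u
        flow += delta
    return flow, total_cost

edges = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 2, 1, 1), (1, 3, 1, 3), (2, 3, 2, 1)]
print(successive_shortest_paths(4, edges, 0, 3, 3))   # (3, 10)
```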

    Analysis of FPTASes for the Multi-Objective Shortest Path Problem

    We propose a new FPTAS for the multi-objective shortest path problem. The algorithm uses elements from both an exact labeling algorithm and an FPTAS proposed by Tsaggouris and Zaroliagis (2009). We analyze the running times of these three algorithms both from a theoretical and a computational point of view. Theoretically, we show that there are instances for which the new FPTAS runs arbitrarily many times faster than the other two algorithms. Furthermore, for the bi-objective case, the number of approximate solutions generated by the proposed FPTAS is at most the number of Pareto-optimal solutions multiplied by the number of nodes. By performing a set of computational tests, we show that the new FPTAS performs best in terms of running time.
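
    For reference, here is a minimal Python sketch of the kind of exact labeling algorithm the abstract compares against (an illustrative textbook version, not the paper's code): each node keeps the Pareto-optimal (cost1, cost2) labels of paths from the source, and dominated labels are discarded as they are generated.

```python
from collections import deque

def biobjective_labels(graph, source):
    """Exact label-correcting algorithm for bi-objective shortest paths.

    graph: {u: [(v, c1, c2), ...]} with nonnegative costs; every node
    must appear as a key. Returns {node: set of Pareto-optimal (c1, c2)
    path costs from source}.
    """
    def dominated(label, labels):
        # label is dominated if some other label is <= in both costs.
        return any(a <= label[0] and b <= label[1] and (a, b) != label
                   for a, b in labels)

    labels = {u: set() for u in graph}
    labels[source] = {(0, 0)}
    queue = deque([(source, (0, 0))])
    while queue:
        u, (c1, c2) = queue.popleft()
        if (c1, c2) not in labels[u]:      # label was pruned meanwhile
            continue
        for v, w1, w2 in graph[u]:
            new = (c1 + w1, c2 + w2)
            if new in labels[v] or dominated(new, labels[v]):
                continue
            # Insert the new label and drop labels it dominates.
            labels[v] = {l for l in labels[v] if not dominated(l, {new})} | {new}
            queue.append((v, new))
    return labels

g = {0: [(1, 1, 5), (2, 3, 1)], 1: [(3, 1, 1)], 2: [(3, 1, 1)], 3: []}
print(biobjective_labels(g, 0)[3])   # {(2, 6), (4, 2)}
```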

    Output-sensitive complexity of multiobjective combinatorial optimization

    We study output-sensitive algorithms and complexity for multiobjective combinatorial optimization (MOCO) problems. In this computational complexity framework, an algorithm for a general enumeration problem is regarded as efficient if it is output-sensitive, i.e., if its running time is bounded by a polynomial in the input and the output size. We provide both practical examples of MOCO problems for which such an efficient algorithm exists and problems for which no efficient algorithm exists under mild complexity-theoretic assumptions.

    Utilitarian Mechanism Design for Multiobjective Optimization

    In a classic optimization problem, the complete input data is assumed to be known to the algorithm. This assumption may no longer hold in optimization problems motivated by the Internet, where part of the input data is the private knowledge of independent selfish agents. The goal of algorithmic mechanism design is to provide (in polynomial time) a solution to the optimization problem and a set of incentives for the agents such that disclosing the input data is a dominant strategy for the agents. In the case of NP-hard problems, the solution computed should also be a good approximation of the optimum.

    In this paper we focus on mechanism design for multiobjective optimization problems. In this setting we are given a main objective function and a set of secondary objectives, which are modeled via budget constraints. Multiobjective optimization is a natural setting for mechanism design, as many economic choices ask for a compromise between different, partially conflicting goals.

    The main contribution of this paper is showing that two of the main tools for the design of approximation algorithms for multiobjective optimization problems, namely approximate Pareto sets and Lagrangian relaxation, can lead to truthful approximation schemes. By exploiting the method of approximate Pareto sets, we devise truthful deterministic and randomized multicriteria fully polynomial-time approximation schemes (FPTASs) for multiobjective optimization problems whose exact version admits a pseudopolynomial-time algorithm, such as the multibudgeted versions of minimum spanning tree, shortest path, maximum (perfect) matching, and matroid intersection. Our construction also applies to multidimensional knapsack and multiunit combinatorial auctions. Our FPTASs compute a $(1+\varepsilon)$-approximate solution violating each budget constraint by a factor $(1+\varepsilon)$. When feasible solutions induce an independence system, i.e., when subsets of feasible solutions are feasible as well, we present a PTAS (not violating any constraint) which combines the approach above with a novel monotone way to guess the heaviest elements in the optimum solution. Finally, we present a universally truthful Las Vegas PTAS for minimum spanning tree with a single budget constraint, where one wants to compute a minimum-cost spanning tree whose length is at most a given value $L$. This result is based on the Lagrangian relaxation method, in combination with our monotone guessing step and with a random perturbation step (ensuring low expected running time). This result can be derandomized in the case of integral lengths. All the mentioned results match the best known approximation ratios, which are, however, obtained by nontruthful algorithms.
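
    The approximate Pareto set tool the paper builds on can be made concrete with a short sketch. The Python function below (illustrative, not the paper's construction; the name `epsilon_pareto_set` is hypothetical) thins a set of positive objective vectors, to be minimized, down to a $(1+\varepsilon)$-approximate Pareto set by keeping one representative per cell of a geometric grid, in the style of Papadimitriou and Yannakakis.

```python
import math

def epsilon_pareto_set(points, eps):
    """Thin `points` (tuples of positive objective values, minimization)
    to an eps-approximate Pareto set: for every input point p there is a
    kept point q with q_i <= (1 + eps) * p_i in every coordinate.

    Geometric-grid rounding: index each point by the cell
    floor(log_{1+eps} value) per coordinate and keep one point per cell;
    any cell member (1+eps)-covers the whole cell, so the kept set is
    valid and its size is polynomial in 1/eps and the log of the range.
    """
    base = math.log(1 + eps)
    cells = {}
    for p in points:
        cell = tuple(math.floor(math.log(v) / base) for v in p)
        cells.setdefault(cell, p)          # keep the first point seen per cell
    return list(cells.values())

points = [(10.0, 40.0), (10.5, 41.0), (20.0, 20.0), (39.0, 11.0)]
print(epsilon_pareto_set(points, 0.1))    # (10.5, 41.0) falls in (10.0, 40.0)'s cell
```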