7 research outputs found

    Probabilistic Analysis of Discrete Optimization Problems

    We investigate the performance of exact algorithms for hard optimization problems under random inputs. In particular, we prove various structural properties that lead to two general average-case analyses applicable to a large class of optimization problems.

    In the first part we study the size of the Pareto curve for binary optimization problems with two objective functions. Pareto-optimal solutions can be seen as trade-offs between multiple objectives. While in the worst case the cardinality of the Pareto curve is exponential in the number of variables, we prove polynomial upper bounds for the expected number of Pareto points when at least one objective function is linear and exhibits sufficient randomness. Our analysis covers general probability distributions with finite mean and, in its most general form, can even handle different probability distributions for the coefficients of the objective function. We apply this result to the constrained shortest path problem and to the knapsack problem. For both problems there are algorithms that enumerate all Pareto-optimal solutions very efficiently, so our polynomial upper bound on the size of the Pareto curve implies that the expected running time of these algorithms is polynomial as well. For example, we obtain a bound of O(n^4) for uniformly random knapsack instances, where n denotes the number of available items.

    In the second part we investigate the performance of knapsack core algorithms, the predominant algorithmic concept in practice. The idea is to fix most variables to the values prescribed by the optimal fractional solution. The reduced problem has only polylogarithmic size on average and is solved using the Nemhauser/Ullmann algorithm. Applying the analysis of the first part, we prove an upper bound of O(n polylog n) on the expected running time. Furthermore, we extend our analysis to a harder class of random input distributions. Finally, we present an experimental study of knapsack instances for various random input distributions. We investigate structural properties, including the size of the Pareto curve and the integrality gap, and compare the running times of different implementations of core algorithms.

    The last part of the thesis introduces a semi-random input model for constrained binary optimization problems, which enables us to perform a smoothed analysis for a large class of optimization problems while at the same time taking the combinatorial structure of individual problems into account. Our analysis is centered around structural properties called winner, loser, and feasibility gaps. These gaps describe the sensitivity of the optimal solution to slight perturbations of the input and can be used to bound the accuracy, and hence the complexity, needed to solve an instance. We exploit the gaps in the form of an adaptive rounding scheme that increases the accuracy of calculation until the optimal solution is found. The strength of our techniques is illustrated by applications to various NP-hard optimization problems, for which we obtain the first algorithms with polynomial average-case/smoothed complexity.
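    A minimal sketch of the enumeration idea behind the Nemhauser/Ullmann algorithm is given below. It is our illustration rather than the thesis's implementation; the function name nemhauser_ullmann and the toy instance are ours. Each item doubles the candidate set, after which dominated (weight, profit) pairs are pruned, so the work per item is linear in the number of Pareto points maintained.

```python
from typing import List, Tuple

def nemhauser_ullmann(items: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Enumerate the Pareto-optimal (weight, profit) pairs of a knapsack instance.

    Items are processed one at a time; the running time is proportional to the
    number of Pareto points maintained, which is what makes a polynomial bound
    on the expected size of the Pareto curve algorithmically useful.
    """
    pareto = [(0.0, 0.0)]  # the empty solution is always Pareto-optimal
    for weight, profit in items:
        # Candidate set: old solutions plus old solutions extended by the new item.
        extended = [(w + weight, p + profit) for w, p in pareto]
        merged = sorted(pareto + extended, key=lambda t: (t[0], -t[1]))
        # Sweep by increasing weight, keeping only profit-improving points;
        # every other candidate is dominated.
        pareto, best = [], float("-inf")
        for w, p in merged:
            if p > best:
                pareto.append((w, p))
                best = p
    return pareto

# Toy instance with three (weight, profit) items.
print(nemhauser_ullmann([(2.0, 3.0), (3.0, 4.0), (4.0, 5.0)]))
```

    Given a knapsack capacity, the optimal value is the most profitable Pareto point that still fits, so the O(n^4) expected bound on the curve size translates directly into polynomial expected running time.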

    The smoothed number of Pareto-optimal solutions in bicriteria integer optimization


    Seventh Biennial Report: June 2003 – March 2005


    Smoothed Analysis of Selected Optimization Problems and Algorithms

    Optimization problems arise in almost every field of economics, engineering, and science. Many of these problems are well understood in theory, and sophisticated algorithms exist that solve them efficiently in practice. Unfortunately, in many cases the theoretically most efficient algorithms perform poorly in practice. On the other hand, some algorithms are much faster than theory predicts. This discrepancy is a consequence of the pessimism inherent in worst-case analysis, the predominant analysis concept in theoretical computer science. We study selected optimization problems and algorithms in the framework of smoothed analysis in order to narrow the gap between theory and practice. In smoothed analysis, an adversary specifies the input, which is subsequently slightly perturbed at random.

    As one example we consider the successive shortest path algorithm for the minimum-cost flow problem. While in the worst case the successive shortest path algorithm takes exponentially many steps to compute a minimum-cost flow, we show that its running time is polynomial in the smoothed setting.

    Another problem studied in this thesis is makespan minimization for scheduling with related machines. It seems unlikely that fast algorithms exist that solve this problem exactly, which is why we consider three approximation algorithms: the jump algorithm, the lex-jump algorithm, and the list scheduling algorithm. In the worst case, the approximation guarantees of these algorithms depend on the number of machines. We show that there is no such dependence in smoothed analysis.

    We also apply smoothed analysis to multicriteria optimization problems. In particular, we consider integer optimization problems with several linear objectives that have to be minimized simultaneously. We derive a polynomial upper bound for the size of the set of Pareto-optimal solutions, contrasting the exponential worst-case lower bound.

    As the icing on the cake, we find that the insights gained from our smoothed analysis of the running time of the successive shortest path algorithm lead to the design of a randomized algorithm for finding short paths between two given vertices of a polyhedron. We see this result as an indication that, in the future, smoothed analysis might also lead to the development of fast algorithms.
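    To make the algorithm under analysis concrete, here is a compact, textbook-style sketch of the successive shortest path algorithm (our own illustration, not code from the thesis; the graph representation, the function name min_cost_flow, and the toy network are ours). It repeatedly augments flow along a cheapest s-t path in the residual network; a Bellman-Ford search is used because residual reverse edges carry negative costs.

```python
def min_cost_flow(n, edges, s, t, target):
    """Successive shortest paths: repeatedly push flow along a cheapest
    augmenting s-t path in the residual network.

    n: number of vertices; edges: (u, v, capacity, cost) tuples.
    Returns (flow shipped, total cost).
    """
    graph = [[] for _ in range(n)]   # graph[u] = indices of edges leaving u
    to, cap, cost = [], [], []

    def add_edge(u, v, c, w):
        # Forward edge and residual reverse edge, stored as the pair 2k, 2k+1.
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)

    for u, v, c, w in edges:
        add_edge(u, v, c, w)

    flow, total = 0, 0
    while flow < target:
        # Bellman-Ford: cheapest residual path (reverse edges cost -w).
        INF = float("inf")
        dist, pred = [INF] * n, [-1] * n
        dist[s] = 0
        changed = True
        while changed:
            changed = False
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        pred[to[e]] = e
                        changed = True
        if dist[t] == INF:
            break  # no augmenting path left; target flow not reachable
        # Bottleneck capacity along the path, then augment.
        push, v = target - flow, t
        while v != s:
            push = min(push, cap[pred[v]])
            v = to[pred[v] ^ 1]   # tail of edge pred[v]
        v = t
        while v != s:
            cap[pred[v]] -= push
            cap[pred[v] ^ 1] += push
            v = to[pred[v] ^ 1]
        flow += push
        total += push * dist[t]
    return flow, total

# Toy network: ship 2 units from vertex 0 to vertex 3.
print(min_cost_flow(4, [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 3),
                        (2, 3, 2, 1), (1, 2, 1, 1)], 0, 3, 2))
```

    Each augmentation requires one shortest path computation; the smoothed analysis in the thesis bounds the expected number of such augmentation steps polynomially, whereas in the worst case it can be exponential.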
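    Of the three approximation algorithms for scheduling, list scheduling is the simplest to state. The following sketch is our formulation of the standard greedy rule for related machines (function name and example data are ours): each job, in list order, is placed on the machine that would complete it earliest.

```python
from typing import List

def list_scheduling(job_sizes: List[float], speeds: List[float]) -> float:
    """Greedy list scheduling on related machines.

    A job of size s takes s / speed time on a machine with the given speed.
    Jobs are assigned in list order to the machine that finishes them
    earliest; the function returns the resulting makespan.
    """
    loads = [0.0] * len(speeds)  # current completion time per machine
    for size in job_sizes:
        best = min(range(len(speeds)), key=lambda i: loads[i] + size / speeds[i])
        loads[best] += size / speeds[best]
    return max(loads)

# Four jobs on two machines, one twice as fast as the other.
print(list_scheduling([4.0, 3.0, 2.0, 2.0], [2.0, 1.0]))
```

    In the worst case the quality of the schedule produced this way degrades with the number of machines; the thesis shows that this dependence disappears once the instance is slightly perturbed at random.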