
    Lower Bounds for the Average and Smoothed Number of Pareto Optima

    Smoothed analysis of multiobjective 0-1 linear optimization has drawn considerable attention recently. The number of Pareto-optimal solutions (i.e., solutions with the property that no other solution is at least as good in all coordinates and better in at least one) for multiobjective optimization problems is the central object of study. In this paper, we prove several lower bounds for the expected number of Pareto optima. Our basic result is a lower bound of $\Omega_d(n^{d-1})$ for optimization problems with $d$ objectives and $n$ variables under fairly general conditions on the distributions of the linear objectives. Our proof relates the problem of lower-bounding the number of Pareto optima to results in geometry connected to arrangements of hyperplanes. We use our basic result to derive (1) the first lower bounds, to our knowledge, for natural multiobjective optimization problems; we illustrate this for the maximum spanning tree problem with randomly chosen edge weights, and our technique is sufficiently flexible to yield such lower bounds for other standard objective functions studied in this setting (such as multiobjective shortest path, TSP tour, and matching); and (2) a smoothed lower bound of $\min\{\Omega_d(n^{d-1.5}\,\phi^{(d-\log d)(1-\Theta(1/\phi))}),\, 2^{\Theta(n)}\}$ for a version of the 0-1 knapsack problem with $d$ profits under $\phi$-semirandom distributions. This improves the recent lower bound of Brunsch and Roeglin.
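
    To make the central object concrete, the following brute-force sketch (not from the paper) draws d random linear objectives over n binary variables and counts the Pareto-optimal 0-1 vectors by exhaustive enumeration; the uniform coefficients, the small n, and the function name are illustrative choices only.

```python
import itertools
import random

def pareto_count(n=10, d=2, seed=0):
    """Count Pareto-optimal 0-1 solutions for d random linear objectives.

    Brute force over all 2^n solutions, so only feasible for small n.
    A point dominates another if it is at least as good (here: as large)
    in every objective and, with continuous random weights, strictly
    better in at least one whenever the value vectors differ.
    """
    rng = random.Random(seed)
    # d x n matrix of objective coefficients, drawn uniformly from [0, 1).
    weights = [[rng.random() for _ in range(n)] for _ in range(d)]

    def values(x):
        return tuple(sum(w * xi for w, xi in zip(row, x)) for row in weights)

    solutions = [values(x) for x in itertools.product((0, 1), repeat=n)]

    def dominated(v):
        return any(
            u != v and all(uk >= vk for uk, vk in zip(u, v))
            for u in solutions
        )

    return sum(1 for v in solutions if not dominated(v))

print(pareto_count(n=10, d=2))
```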

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, such as P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (for bounded halting and tiling) and tractability results (binary optimization problems, graph coloring, satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models.
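
    For orientation, the smoothed measure that such complexity classes are typically built around is the standard Spielman-Teng one sketched below; the exact perturbation model and normalisation used in the paper may differ.

```latex
% Standard smoothed running-time measure (Spielman-Teng style):
% worst case over inputs x of size n, expectation over the random
% perturbation g scaled by sigma (smaller sigma = weaker perturbation).
\[
  T^{\mathrm{smoothed}}_A(n, \sigma)
    \;=\; \max_{x \,:\, |x| = n} \; \mathbb{E}_{g}\!\left[ T_A(x + \sigma g) \right]
\]
```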

    Towards explaining the speed of k-means

    The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization to Bregman divergences.
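
    As a reminder of the procedure whose practical speed is at issue, here is a minimal sketch of the standard k-means (Lloyd's) iteration; the random data, the choice of k, and the iteration cap are placeholders, and the smoothed analysis itself is not reproduced here.

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Plain Lloyd's iteration: alternate assignment and centroid steps.

    Each round assigns every point to its nearest center, then moves each
    center to the mean of its assigned points. The smoothed analysis bounds
    how many such rounds are needed on slightly perturbed inputs.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(max_iter):
        # Assignment step: nearest center by squared Euclidean distance.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep the old center if a cluster happens to be empty).
        new_centers = [
            tuple(sum(coords) / len(c) for coords in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        if new_centers == centers:  # converged: assignments will not change
            break
        centers = new_centers
    return centers

data = [(random.random(), random.random()) for _ in range(50)]
print(kmeans(data, k=3))
```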

    The smoothed number of Pareto-optimal solutions in bicriteria integer optimization


    Heterogeneity and learning with complete markets

    We study an endowment economy with complete markets and heterogeneous agents who do not have rational expectations, but form their beliefs using adaptive learning algorithms that may differ from one individual to another. We show that market completeness allows agents to smooth consumption across states of nature, but not across time, and that the initial wealth distribution is not enough to pin down the long-run equilibrium. Consequently, initial differences in beliefs create persistent consumption imbalances that are not grounded in fundamentals. In some cases these imbalances are eventually unsustainable: the debt of one of the agents would grow without bounds, and binding borrowing limits are necessary to prevent Ponzi schemes. Finally, we find that our slight departure from rational expectations affects efficiency properties of the competitive equilibrium: if the social welfare function attaches fixed Pareto weights to the different individuals, there are configurations of individual expectations under which society is better off with financial autarky than with complete markets. The first best can be restored by introducing a distortionary tax on borrowing, which transfers consumption from the more optimistic agent to the other.
    Keywords: learning, heterogeneous agents, complete markets.
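
    The belief-formation step can be pictured with a generic constant-gain learning rule of the kind common in this literature; the sketch below is purely illustrative, and the gain, priors, and endowment stream are placeholders rather than the paper's specification.

```python
def adaptive_learning(observations, initial_belief=0.0, gain=0.05):
    """Generic constant-gain learning: nudge the belief toward each new
    observation by a fixed fraction (the gain). Agents with different
    priors or gains generally hold different forecasts for a long time."""
    belief = initial_belief
    path = []
    for y in observations:
        belief += gain * (y - belief)  # forecast-error correction
        path.append(belief)
    return path

# Two agents observing the same stream but starting from different priors
# converge only gradually and keep disagreeing along the way.
stream = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]
print(adaptive_learning(stream, initial_belief=0.5))
print(adaptive_learning(stream, initial_belief=1.5))
```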

    Smoothed Analysis of Selected Optimization Problems and Algorithms

    Optimization problems arise in almost every field of economics, engineering, and science. Many of these problems are well understood in theory, and sophisticated algorithms exist to solve them efficiently in practice. Unfortunately, in many cases the theoretically most efficient algorithms perform poorly in practice. On the other hand, some algorithms are much faster than theory predicts. This discrepancy is a consequence of the pessimism inherent in the framework of worst-case analysis, the predominant analysis concept in theoretical computer science. We study selected optimization problems and algorithms in the framework of smoothed analysis in order to narrow the gap between theory and practice. In smoothed analysis, an adversary specifies the input, which is subsequently slightly perturbed at random. As one example we consider the successive shortest path algorithm for the minimum-cost flow problem. While in the worst case the successive shortest path algorithm takes exponentially many steps to compute a minimum-cost flow, we show that its running time is polynomial in the smoothed setting. Another problem studied in this thesis is makespan minimization for scheduling with related machines. It seems unlikely that fast algorithms exist to solve this problem exactly. This is why we consider three approximation algorithms: the jump algorithm, the lex-jump algorithm, and the list scheduling algorithm. In the worst case, the approximation guarantees of these algorithms depend on the number of machines. We show that there is no such dependence in smoothed analysis. We also apply smoothed analysis to multicriteria optimization problems. In particular, we consider integer optimization problems with several linear objectives that have to be simultaneously minimized. We derive a polynomial upper bound for the size of the set of Pareto-optimal solutions, contrasting the exponential worst-case lower bound. As the icing on the cake, we find that the insights gained from our smoothed analysis of the running time of the successive shortest path algorithm lead to the design of a randomized algorithm for finding short paths between two given vertices of a polyhedron. We see this result as an indication that, in future, smoothed analysis might also result in the development of fast algorithms.

    German abstract (translated): Optimization problems arise in all areas of economics, the natural sciences, and engineering. Many of these problems have been studied extensively and can be solved efficiently in practice. Unfortunately, in many cases the theoretically most efficient algorithms turn out to be unsuitable in practice. On the other hand, some algorithms are much faster than theory predicts. This apparent contradiction results from the pessimism inherent in worst-case analysis, the predominant analysis concept in theoretical computer science. To narrow the gap between theory and practice, we study selected optimization problems and algorithms on adversarially specified instances that are perturbed by slight random noise. We refer to such perturbed instances as semi-random inputs. As an example we consider the successive shortest path algorithm for the minimum-cost flow problem. While this algorithm requires exponentially many steps in the worst case to compute a minimum-cost flow, we show that its running time on semi-random inputs is polynomial. Another problem we study in this thesis is makespan minimization for scheduling on machines of different speeds. It appears that this problem cannot be solved efficiently, so we consider three approximation algorithms: the jump, the lex-jump, and the list scheduling algorithm. In the worst case, the approximation guarantees of these algorithms depend on the number of machines. We show that this is not the case on semi-random inputs. Furthermore, we consider integer optimization problems with several linear objective functions that are to be minimized simultaneously. We derive a polynomial upper bound on the size of the Pareto set on semi-random inputs, which stands in contrast to the exponential worst-case lower bound. Using the insights from the running-time analysis of the successive shortest path algorithm, we design a randomized algorithm for finding a short path between two given vertices of a polyhedron. We regard this result as an indication that, in the future, analyses on semi-random inputs might also lead to the development of fast algorithms.
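
    To fix ideas on one of the three heuristics mentioned above, the sketch below implements plain list scheduling on related machines (each job, in list order, goes to the machine on which it would finish earliest); the job sizes and machine speeds are placeholder values, and the jump and lex-jump local-search variants are not shown.

```python
def list_schedule(job_sizes, machine_speeds):
    """List scheduling on related machines: take jobs in the given order and
    put each one on the machine where it would complete earliest.
    Returns the resulting makespan and the per-machine finish times."""
    finish = [0.0] * len(machine_speeds)
    for size in job_sizes:
        # Completion time of this job on machine m if it were scheduled there now.
        m_best = min(range(len(machine_speeds)),
                     key=lambda m: finish[m] + size / machine_speeds[m])
        finish[m_best] += size / machine_speeds[m_best]
    return max(finish), finish

makespan, loads = list_schedule(job_sizes=[3, 1, 4, 1, 5, 9, 2, 6],
                                machine_speeds=[1.0, 2.0, 2.0])
print(makespan, loads)
```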

    The value of coordination in a two echelon supply chain: Sharing information, policies and parameters.

    We study a coordination scheme in a two-echelon supply chain. It involves sharing details of replenishment rules, lead times, and demand patterns, and tuning the replenishment rules to exploit the supply chain's cost structure. We examine four different coordination strategies: naïve operation, local optimisation, global optimisation, and altruistic behaviour on behalf of the retailer. We assume the retailer and the manufacturer use the Order-Up-To policy to determine replenishment orders and that end-consumer demand is a stationary i.i.d. random variable. We derive the variance of the retailer's order rate and inventory levels and the variance of the manufacturer's order rate and inventory levels. We initially assume that costs in the supply chain are directly proportional to these variances (and later the standard deviations) and investigate the options available to the supply chain members for minimising costs. Our results show that if the retailer takes responsibility for supply chain cost reduction and acts altruistically by dampening his order variability, then the performance enhancement is robust to both the actual costs in the supply chain and to a naïve or uncooperative manufacturer. Superior performance is achievable if firms coordinate their actions and if they find ways to re-allocate the supply chain gain.
    Keywords: Bullwhip; Global optimisation; Inventory variance; Local optimisation; Supply chains; Coordination.
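
    For readers unfamiliar with the replenishment rule, the sketch below simulates a plain single-echelon Order-Up-To policy (each period, order whatever restores the inventory position to a fixed level S); the demand stream, lead time, and S are placeholders, and the coordination strategies compared in the paper are not modelled.

```python
from collections import deque
import random

def order_up_to(demands, S=40, lead_time=2, initial_inventory=40):
    """Single-echelon Order-Up-To simulation.

    Each period: demand is realised, then an order is placed that restores
    the inventory position (on-hand + on-order - backlog) to the level S.
    Orders arrive lead_time periods after being placed. Returns the order
    and net-inventory histories, whose variances drive the costs discussed
    in the paper.
    """
    on_hand = initial_inventory
    pipeline = deque([0.0] * lead_time)  # orders placed but not yet received
    orders, inventory = [], []
    for d in demands:
        on_hand += pipeline.popleft()      # receive the oldest outstanding order
        on_hand -= d                       # satisfy demand (negative = backlog)
        position = on_hand + sum(pipeline)
        q = max(0.0, S - position)         # order up to S, never a negative order
        pipeline.append(q)
        orders.append(q)
        inventory.append(on_hand)
    return orders, inventory

random.seed(1)
demand_stream = [random.gauss(10, 2) for _ in range(100)]
orders, inv = order_up_to(demand_stream, S=40, lead_time=2, initial_inventory=40)
print(max(orders), min(inv))
```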