
    Worst case instances are fragile

    We describe three results in this thesis. The first is a heuristic improvement for a shortest path problem, which we term the single-source many-targets shortest path problem. In this problem, we need to compute a shortest path from a source node to a node that belongs to a designated target set. Dijkstra's algorithm can be used to solve this problem. We are interested in the single-source many-targets shortest path problem because matching algorithms solve it repeatedly in order to compute a maximum weighted matching in a bipartite graph. The heuristic is easy to implement and, as our experiments show, considerably reduces the running time of the matching algorithm. We provide an average case analysis which shows that a substantial fraction of queue operations is saved by Dijkstra's algorithm if the heuristic is used. (Corresponding paper: A heuristic for Dijkstra's algorithm with many targets and its use in weighted matching algorithms. Algorithmica, 36(1):75-88, 2003.)

    The second and third results concern the extension of smoothed complexity to the area of online algorithms. Smoothed complexity was introduced by Spielman and Teng to explain the behaviour of algorithms that perform well in practice despite having a poor worst case complexity. The idea is to add some noise to the initial input instances by perturbing the input values slightly at random and to analyze the performance of the algorithm on these perturbed instances. In this work, we apply this notion to two well-known online algorithms. The first is the multi-level feedback algorithm (MLF), which minimizes the average flow time of a sequence of jobs released over time when the processing times of these jobs are not known in advance. MLF is known to work very well in practice, though it has a poor competitive ratio. As it turns out, the smoothed competitive ratio of MLF improves exponentially with the amount of random noise that is added; on average, MLF even admits a constant competitive ratio. We also prove that our bound is asymptotically tight. (Corresponding paper: Average case and smoothed competitive analysis of the multi-level feedback algorithm. In Proceedings of the Forty-Fourth Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 462-471, 2003.)

    The second algorithm that we consider is the work function algorithm (WFA) for metrical task systems, a general framework for modelling online problems. It is known that WFA has a poor competitive ratio. Due to its generality, we believe it is interesting to analyze the smoothed competitive ratio of WFA. Our analysis reveals that the smoothed competitive ratio of WFA is much better than its worst case competitive ratio and that it depends on certain topological parameters of the underlying metric. We present asymptotic upper and matching lower bounds on the smoothed competitive ratio of WFA.

    This thesis presents three results. First, we describe a heuristic for a variant of the shortest path problem, which we call the single-source many-targets shortest path problem. We are given an undirected graph with non-negative edge costs, a source node s, and a set T of target nodes; the task is to compute a shortest path from the source node s to one of the target nodes in T. This problem is solved repeatedly by matching algorithms in order to compute a maximum weighted matching in bipartite graphs. Dijkstra's algorithm can be used to solve the single-source many-targets shortest path problem. Our heuristic is easy to implement and, as our experiments show, yields a significant improvement in the running time of the matching algorithm. In experiments on random graphs we observed running time improvements of up to a factor of 12. We present an average case analysis in which we show that, on random instances, the heuristic saves a considerable number of operations in the execution of Dijkstra's algorithm.

    In the second part of the thesis we extend smoothed complexity, recently introduced by Spielman and Teng, to the area of online algorithms. Smoothed complexity is a new complexity measure that attempts to represent the efficiency of an algorithm in practice more adequately. The basic idea is to perturb the input instances at random to a greater or lesser degree and to measure the efficiency of an algorithm by its expected running time on these perturbed instances. In general, the smoothed complexity of an algorithm is much lower than its worst case complexity whenever the worst case instances correspond to artificial or constructed instances that hardly ever occur in practice. Spielman and Teng introduced smoothed complexity with running time as the efficiency criterion; the underlying idea, however, can also be extended to other criteria.
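
    As a baseline for the first result, the sketch below shows Dijkstra's algorithm adapted to the single-source many-targets setting in the obvious way: the search starts at the source and stops as soon as any node of the target set is settled. This is only the unmodified starting point that the thesis's heuristic improves upon by pruning queue operations; the heuristic itself is not reproduced here, and the function name, adjacency-list representation, and example graph are assumptions made for illustration.

        import heapq

        def shortest_path_to_any_target(graph, source, targets):
            """Plain early-stopping Dijkstra. graph maps node -> list of (neighbor, nonnegative cost)."""
            dist = {source: 0}
            queue = [(0, source)]
            while queue:
                d, u = heapq.heappop(queue)
                if d > dist.get(u, float("inf")):
                    continue                       # stale queue entry, u was already settled cheaper
                if u in targets:
                    return d, u                    # the first settled target is a closest one
                for v, cost in graph.get(u, []):
                    nd = d + cost
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(queue, (nd, v))
            return float("inf"), None              # no target node is reachable

        if __name__ == "__main__":
            g = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t1", 5)], "b": [("t2", 1)]}
            print(shortest_path_to_any_target(g, "s", {"t1", "t2"}))  # -> (3, 't2')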

    Probabilistic alternatives for competitive analysis

    In the last 20 years competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.
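
    For context, the classical notion that these alternatives relax can be stated as follows (the standard textbook definition, not taken from the survey itself): for a minimization problem, a deterministic online algorithm ALG is c-competitive if there is a constant b such that, for every request sequence \sigma,

        \mathrm{ALG}(\sigma) \;\le\; c \cdot \mathrm{OPT}(\sigma) + b,

    where \mathrm{OPT}(\sigma) denotes the cost of an optimal offline solution; the competitive ratio of ALG is the smallest such c over worst-case sequences. The probabilistic alternatives surveyed here replace this worst-case quantification over \sigma by expectations or distributional assumptions.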

    05031 Abstracts Collection -- Algorithms for Optimization with Incomplete Information

    From 16.01.05 to 21.01.05, the Dagstuhl Seminar 05031 "Algorithms for Optimization with Incomplete Information" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Probabilistic analysis of Online Bin Coloring algorithms via Stochastic Comparison

    This paper proposes a new method for probabilistic analysis of online algorithms that is based on the notion of stochastic dominance. We develop the method for the Online Bin Coloring problem introduced by Krumke et al. Using methods for the stochastic comparison of Markov chains, we establish the strong result that the performance of the online algorithm GreedyFit is stochastically dominated by the performance of the algorithm OneBin for any number of items processed. This result gives a more realistic picture than competitive analysis and explains the behavior observed in simulations.
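
    To make the two strategies concrete, the following is a minimal, illustrative sketch assuming the standard Online Bin Coloring formulation of Krumke et al.: m bins are open at any time, each bin holds B items, a full bin is closed and replaced by an empty one, and an algorithm is judged by the maximum number of distinct colors that ever appear together in one bin. The function names, GreedyFit's tie-breaking rule, and the example input are illustrative assumptions, not taken from the paper.

        def max_colorfulness(items, m, B, choose_bin):
            """Process colored items online; choose_bin picks the index of an open bin."""
            bins = [set() for _ in range(m)]   # distinct colors currently in each open bin
            counts = [0] * m                   # number of items currently in each open bin
            worst = 0
            for color in items:
                i = choose_bin(bins, color)
                bins[i].add(color)
                counts[i] += 1
                worst = max(worst, len(bins[i]))
                if counts[i] == B:             # bin is full: close it and open a fresh one
                    bins[i], counts[i] = set(), 0
            return worst

        def greedy_fit(bins, color):
            """Prefer an open bin that already contains the color, else the least colorful bin."""
            for i, colors in enumerate(bins):
                if color in colors:
                    return i
            return min(range(len(bins)), key=lambda i: len(bins[i]))

        def one_bin(bins, color):
            """Ignore colors entirely and always use the same single bin."""
            return 0

        if __name__ == "__main__":
            sequence = ["red", "blue", "red", "green", "blue", "green", "red", "blue"]
            print(max_colorfulness(sequence, m=2, B=3, choose_bin=greedy_fit))
            print(max_colorfulness(sequence, m=2, B=3, choose_bin=one_bin))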

    08071 Abstracts Collection -- Scheduling

    From 10.02. to 15.02., the Dagstuhl Seminar 08071 "Scheduling" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
