
    On efficiency and reliability in computer science

    The efficiency of algorithms and their robustness against mistakes in their implementation or uncertainties in their input have always been of central interest in computer science. This thesis presents results for a number of problems related to this topic. Certifying algorithms enable reliable implementations by providing a certificate together with their answer. A simple program can check the answer using the certificate; if the checker accepts, the answer of the complex program is correct, so the user only has to trust the simple checker. We present a novel certifying algorithm for 3-edge-connectivity as well as a simplified certifying algorithm for 3-vertex-connectivity. Occasionally storing the state of a computation, so-called checkpointing, also helps with reliability, since we can recover from errors without having to restart the computation. In this thesis we show how to do checkpointing with bounded memory and present several strategies to minimize the worst-case recomputation. In theory, the input to a problem is accurate and well-defined; in practice, however, it often contains uncertainties, necessitating robust solutions. We consider a robust variant of the well-known k-median problem in which the clients are grouped into sets and we want to minimize the connection cost of the most expensive group. This makes the solution robust with respect to which group actually needs to be served. We show that this problem is hard to approximate, even on the line, and evaluate heuristic solutions.
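The certifying-algorithm idea described above can be made concrete with a deliberately small sketch. It uses the textbook example of bipartiteness testing via 2-coloring, not the thesis's 3-connectivity results: the algorithm outputs a 2-coloring as its certificate, and a much simpler checker verifies it edge by edge.

```python
from collections import deque

def two_color(adj):
    """The 'complex' algorithm: try to 2-color the graph via BFS.
    adj: dict mapping vertex -> list of neighbours (undirected graph).
    Returns a coloring (the certificate of bipartiteness) or None.
    A fully certifying version would emit an odd cycle in the None case."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # conflict: graph is not bipartite
    return color

def check_certificate(adj, color):
    """The simple checker the user actually has to trust: a 2-coloring
    certifies bipartiteness iff every edge joins differently colored ends."""
    return all(color[u] != color[v] for u in adj for v in adj[u])
```

Note that the checker is a one-liner whose correctness is easy to convince oneself of, whereas trusting `two_color` directly would require trusting the whole BFS implementation.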

    Online Checkpointing with Improved Worst-Case Guarantees

    In the online checkpointing problem, the task is to continuously maintain a set of k checkpoints that allow rewinding an ongoing computation faster than by a full restart. The only operation allowed is to remove an old checkpoint and to store the current state instead. Our aim is checkpoint placement strategies that minimize the rewinding cost, i.e., such that at all times T, when requested to rewind to some time t ≤ T, the number of computation steps that need to be redone to get to t from a checkpoint before t is as small as possible. In particular, we want the closest checkpoint earlier than t to be no further away from t than p_k times the ideal distance T/(k + 1), where p_k is a small constant. Improving over earlier work showing 1 + 1/k ≤ p_k ≤ 2, we show that p_k can be chosen less than 2 uniformly for all k. More precisely, we show the uniform bound p_k ≤ 1.7 for all k, and present algorithms with asymptotic performance p_k ≤ 1.59 + o(1) valid for all k and p_k ≤ ln(4) + o(1) ≤ 1.39 + o(1) valid for k being a power of two. For small values of k, we show how to use a linear programming approach to compute good checkpointing algorithms; this gives performances of less than 1.53 for k ≤ 10. On the more theoretical side, we show the first lower bound that is asymptotically more than one, namely p_k ≥ 1.30 − o(1). We also show that optimal algorithms (yielding the infimum performance) exist for all k.

    Online Checkpointing with Improved Worst-case Guarantees

    In the online checkpointing problem, the task is to continuously maintain a set of k checkpoints that allow rewinding an ongoing computation faster than by a full restart. The only operation allowed is to replace an old checkpoint by the current state. Our aim is checkpoint placement strategies that minimize the rewinding cost, i.e., such that at all times T, when requested to rewind to some time t ≤ T, the number of computation steps that need to be redone to get to t from a checkpoint before t is as small as possible. In particular, we want the closest checkpoint earlier than t to be no further away from t than q_k times the ideal distance T/(k + 1), where q_k is a small constant. Improving over earlier work showing 1 + 1/k ≤ q_k ≤ 2, we show that q_k can be chosen asymptotically less than 2. We present algorithms with asymptotic discrepancy q_k ≤ 1.59 + o(1) valid for all k and q_k ≤ ln(4) + o(1) ≤ 1.39 + o(1) valid for k being a power of two. Experiments indicate the uniform bound q_k ≤ 1.7 for all k. For small k, we show how to use a linear programming approach to compute good checkpointing algorithms; this gives discrepancies of less than 1.55 for all k < 60. We prove the first lower bound that is asymptotically more than one, namely q_k ≥ 1.30 − o(1). We also show that optimal algorithms (yielding the infimum discrepancy) exist for all k.

    Sampling from discrete distributions and computing Fréchet distances

    In the first part of this thesis, we study the fundamental problem of sampling from a discrete probability distribution. Specifically, given non-negative numbers p_1,...,p_n, the task is to draw i with probability proportional to p_i. We extend the classic solution to this problem, Walker's alias method, in various directions: we improve its space requirements, we solve the special case of sorted input, we study sampling natural distributions on a bounded-precision machine, and as an application we speed up sampling a model from physics. The second part of this thesis belongs to the area of computational geometry and deals with algorithms for the Fréchet distance, which is a popular measure of the similarity of two curves and can be computed in quadratic time (ignoring logarithmic factors). We provide the first conditional lower bound for this problem: no polynomial-factor improvement over the quadratic running time is possible unless the Strong Exponential Time Hypothesis fails. We also present an improved approximation algorithm for realistic input curves.
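The alias method mentioned above admits a compact implementation. The following sketch uses Vose's well-known variant of Walker's construction (an illustration of the classic method, not the thesis's extended versions): O(n) preprocessing builds two tables, after which each sample costs O(1).

```python
import random

def build_alias(weights):
    """Vose's variant of Walker's alias method: split the scaled
    weights into 'small' (< 1) and 'large' (>= 1) columns and pair
    each small column with a large donor column."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]  # the donor pays the deficit
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # numerical leftovers are full columns
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """O(1) sampling: pick a uniform column, then a biased coin flip."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each index i is drawn with probability prob[i]/n plus the aliased mass (1 − prob[j])/n from every column j with alias[j] = i, which together reconstruct the input distribution exactly.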
