29 research outputs found

    Proceedings of Workshop on Quantum Computing and Quantum Information


    On graph algorithms for large-scale graphs

    The algorithmic challenges have changed in the last decade due to the rapid growth of the data set sizes that need to be processed. New types of algorithms on large graphs such as social networks, computer networks, or state transition graphs have emerged to overcome the problem of ever-increasing data sets. In this thesis, we investigate two approaches to this problem. Implicit algorithms use lossless compression to reduce the size of the data and work directly on the compressed representation to solve optimization problems. In the case of graphs, we deal with the characteristic function of the edge set, which can be represented by an Ordered Binary Decision Diagram (OBDD), a well-known data structure for Boolean functions. We develop a new technique to prove upper and lower bounds on the size of OBDDs representing graphs and apply it to several graph classes, obtaining (almost) optimal bounds. A small input OBDD is essential for dealing with large graphs, but we also need algorithms that avoid large intermediate results during the computation. For this purpose, we design algorithms for specific graph classes that exploit the node encoding used for the results on the OBDD sizes. In addition, we lay the foundation for the theory of randomization in OBDD-based algorithms by investigating what kind of randomness is feasible and how to design algorithms with it. As a result, we present two randomized algorithms that outperform known deterministic algorithms on many input instances.
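    As a toy illustration of the implicit representation (a minimal Python sketch with invented names, built by exhaustive Shannon expansion, not the constructions or bounds of the thesis): the edge set of a graph on 2^n nodes is stored as its characteristic function over the bit encodings of node pairs, and a unique table merges isomorphic subgraphs into a reduced OBDD. For structured graphs such as the directed cycle below, the OBDD stays far smaller than the edge set.

        class OBDD:
            def __init__(self, num_vars):
                self.num_vars = num_vars
                self.unique = {}             # (var, low, high) -> node id

            def mk(self, var, low, high):
                if low == high:              # redundant test: skip the node
                    return low
                key = (var, low, high)
                if key not in self.unique:   # hash consing shares subgraphs
                    self.unique[key] = len(self.unique) + 2  # ids 0/1 = sinks
                return self.unique[key]

            def build(self, f, var=0, bits=()):
                # Shannon expansion on the next bit of the pair encoding.
                if var == self.num_vars:
                    return int(f(bits))
                low = self.build(f, var + 1, bits + (0,))
                high = self.build(f, var + 1, bits + (1,))
                return self.mk(var, low, high)

        n = 8                                # the graph has 2**n = 256 nodes

        def chi(bits):
            # Interleaved variable order x1 y1 x2 y2 ...; decode u and v.
            u = int(''.join(map(str, bits[0::2])), 2)
            v = int(''.join(map(str, bits[1::2])), 2)
            return v == (u + 1) % (2 ** n)   # edge set of the directed cycle

        g = OBDD(2 * n)
        root = g.build(chi)
        print(len(g.unique) + 2)             # node count grows with n, not 2**n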
    Streaming algorithms are another approach for dealing with large graphs. In this model, the graph is presented as a stream of edge insertions or deletions, and the algorithm is permitted to use only a limited amount of memory. The solution to a graph optimization problem can require space linear in the number of nodes, which implies a trivial lower bound on the space requirement of any streaming algorithm for such problems. Computing a matching, i.e., a subset of edges in which no two edges share a node, is an example that has recently attracted a lot of attention in the streaming setting. If we are only interested in the size (or, for weighted graphs, the weight) of a matching, it is possible to break this linear bound. We focus on so-called dynamic graph streams, where edges can be both inserted and deleted. We reduce the problem of estimating the weight of a maximum-weight matching to the problem of estimating the size of a maximum matching, at the cost of a small loss in the approximation factor. In addition, we present the first dynamic graph stream algorithm for estimating the size of a matching in locally sparse graphs. On the negative side, we prove a space lower bound for streaming algorithms that estimate the size of a maximum matching with a small approximation factor.
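    For intuition on the streaming side, a minimal sketch of the classic one-pass greedy algorithm for insertion-only streams (the baseline, not the dynamic-stream estimators of the thesis): it keeps a maximal matching in O(n) space, a 2-approximation of the maximum matching size; the size estimators above exist precisely to get below this linear space bound.

        def greedy_matching(stream):
            # One pass over an insertion-only edge stream: keep an edge iff
            # both endpoints are still free.  The resulting matching is
            # maximal, hence at least half the maximum size.
            matched, matching = set(), []
            for u, v in stream:
                if u not in matched and v not in matched:
                    matching.append((u, v))
                    matched.update((u, v))
            return matching

        # A path on 5 nodes: greedy keeps (0, 1) and (2, 3).
        print(greedy_matching([(0, 1), (1, 2), (2, 3), (3, 4)]))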

    Efficient local search for Pseudo Boolean Optimization

    Algorithms and the Foundations of Software Technology

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Algorithms for regression and classification

    Regression and classification are statistical techniques that may be used to extract rules and patterns from data sets. Analyzing the involved algorithms comprises interdisciplinary research that offers interesting problems for statisticians and computer scientists alike. The focus of this thesis is on robust regression and classification in genetic association studies. In the context of robust regression, new exact algorithms and results are presented for robust online scale estimation with the estimators Qn and Sn and for robust linear regression in the plane with the least quartile difference (LQD) estimator. Additionally, an evolutionary computation algorithm for robust regression with different estimators in higher dimensions is devised. These estimators include the widely used least median of squares (LMS) and least trimmed squares (LTS). For classification in genetic association studies, this thesis describes a Genetic Programming algorithm that outperforms the standard approaches on the considered data sets. It is able to identify interesting genetic factors not found before in a data set on sporadic breast cancer and to handle larger data sets than the compared methods. In addition, it is extensible to further application fields.
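    To make the robustness objective concrete, a brute-force sketch of the least median of squares fit for lines in the plane (illustration only; the exact and evolutionary algorithms of the thesis are far more efficient): because the objective is the median of the squared residuals, almost half of the points can be grossly corrupted without pulling the line away.

        from itertools import combinations
        from statistics import median

        def lms_line(points):
            # Try every 'elemental' line through two data points and keep
            # the one with the smallest median squared residual -- a common
            # brute-force approximation of LMS (O(n^3) time).
            best = None
            for (x1, y1), (x2, y2) in combinations(points, 2):
                if x1 == x2:
                    continue                 # skip vertical elemental fits
                a = (y2 - y1) / (x2 - x1)
                b = y1 - a * x1
                obj = median((y - (a * x + b)) ** 2 for x, y in points)
                if best is None or obj < best[0]:
                    best = (obj, a, b)
            return best[1], best[2]

        # Eight points on y = 2x plus two gross outliers: the slope stays 2.
        pts = [(x, 2 * x) for x in range(8)] + [(1, 50), (2, -40)]
        print(lms_line(pts))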

    Reasoning about LTL Synthesis over finite and infinite games

    In the last few years, research on formal methods for the analysis and verification of systems has grown considerably. A meaningful contribution in this area has been made by algorithmic methods developed in the context of synthesis. The basic idea is simple and appealing: instead of developing a system and verifying that it satisfies its specification, we look for an automated procedure that, given the specification, returns a system that is correct by construction. Synthesis of reactive systems is one of the most popular variants of this problem, in which we want to synthesize a system characterized by an ongoing interaction with the environment. In this setting, a large effort has been devoted to analyzing specifications given as formulas of linear temporal logic, i.e., LTL synthesis. Traditional approaches to LTL synthesis rely on transforming the LTL specification into deterministic parity automata, and then into parity games, for which a so-called winning region is computed. Computing such an automaton is, in the worst case, doubly exponential in the size of the LTL formula, and this becomes a computational bottleneck for using synthesis in practice.

    The first part of this thesis is devoted to improving the solution of parity games as they arise in LTL synthesis, aiming at techniques that are efficient in both running time and space consumption. We start with the study and implementation of an automata-theoretic technique to solve parity games. More precisely, we consider an algorithm introduced by Kupferman and Vardi that solves a parity game by solving the emptiness problem of a corresponding alternating parity automaton. Our empirical evaluation demonstrates that this algorithm outperforms other algorithms when the game has a small number of priorities relative to its size. In many concrete applications we do indeed end up with parity games whose number of priorities is relatively small, which makes the new algorithm quite useful in practice. We then provide a broad investigation of the symbolic approach to solving parity games. Specifically, we implement, in a fresh tool called SPGSolver, four symbolic algorithms for solving parity games and compare their performance to the corresponding explicit versions on different classes of games. By means of benchmarks, we show that for random games, even for constrained random games, explicit algorithms actually perform better than symbolic algorithms. The situation changes, however, for structured games, where symbolic algorithms seem to have the advantage. This suggests that when evaluating algorithms for parity-game solving, it would be useful to have real benchmarks and not only random ones, as has been common practice.
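    For reference, a compact sketch of what solving a parity game involves, using the classic recursive algorithm of Zielonka (not the Kupferman-Vardi emptiness-based algorithm or the symbolic algorithms evaluated in the thesis; it assumes every node has at least one successor): player 0 wins a play if the highest priority seen infinitely often is even, and the algorithm partitions the nodes into the two winning regions.

        def attractor(nodes, succ, owner, target, player):
            # Nodes from which `player` can force the token into `target`
            # within the subgame induced by `nodes`.
            attr = set(target)
            changed = True
            while changed:
                changed = False
                for v in nodes - attr:
                    out = succ[v] & nodes
                    if out and ((owner[v] == player and out & attr) or
                                (owner[v] != player and out <= attr)):
                        attr.add(v)
                        changed = True
            return attr

        def zielonka(nodes, succ, owner, priority):
            # Returns (W0, W1), the winning regions of players 0 and 1.
            if not nodes:
                return set(), set()
            p = max(priority[v] for v in nodes)
            i, j = p % 2, 1 - p % 2
            top = {v for v in nodes if priority[v] == p}
            a = attractor(nodes, succ, owner, top, i)
            w = zielonka(nodes - a, succ, owner, priority)
            if not w[j]:
                wi = w[i] | a                # player i wins everywhere
                return (wi, set()) if i == 0 else (set(), wi)
            b = attractor(nodes, succ, owner, w[j], j)
            w2 = zielonka(nodes - b, succ, owner, priority)
            wj = w2[j] | b
            return (w2[0], wj) if j == 1 else (wj, w2[1])

        # Two nodes cycling forever: the maximum priority seen infinitely
        # often is 2 (even), so player 0 wins from both nodes.
        nodes = {0, 1}
        succ = {0: {1}, 1: {0}}
        owner = {0: 0, 1: 1}
        priority = {0: 1, 1: 2}
        print(zielonka(nodes, succ, owner, priority))   # ({0, 1}, set())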
    LTL synthesis has also been investigated extensively in artificial intelligence, specifically in automated planning. Indeed, LTL synthesis corresponds to fully observable nondeterministic planning in which the domain is given compactly and the goal is an LTL formula, which in turn is related to two-player games with LTL goals; finding a strategy for these games amounts to synthesizing a plan for the planning problem. The last part of this thesis is dedicated to investigating LTL synthesis from this different viewpoint. In particular, we study a generalized form of planning under partial observability, in which we have multiple, possibly infinitely many, planning domains with the same actions and observations, and goals expressed over observations, which are possibly temporally extended. Building on work on two-player games with imperfect information in the formal methods literature, we devise a general technique, generalizing the belief-state construction, to remove partial observability. This reduces the planning problem to a game of perfect information with a tight correspondence between plans and strategies. We then instantiate the technique and solve some generalized planning problems.
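    The heart of the belief-state construction fits in a few lines (a toy sketch with hypothetical signatures; the thesis generalizes it to multiple, possibly infinitely many, domains and to temporally extended goals): the agent tracks the set of states consistent with its observations, and each action followed by an observation updates that set deterministically, yielding a perfect-information game over belief states.

        def belief_step(belief, action, trans, obs):
            # One move of the construction: apply `action` in every state the
            # agent considers possible, then split the successors by the
            # observation they produce.  `trans` and `obs` are hypothetical:
            # trans[(state, action)] is a set of successors, obs[state] an
            # observation.  Each value returned is a belief state of the
            # resulting perfect-information game.
            successors = set().union(*(trans[(s, action)] for s in belief))
            split = {}
            for s in successors:
                split.setdefault(obs[s], set()).add(s)
            return split

        # Tiny domain: acting in s0 may lead to s1 or s2, but they produce
        # different observations, so each updated belief is a singleton.
        trans = {('s0', 'go'): {'s1', 's2'}}
        obs = {'s1': 'left', 's2': 'right'}
        print(belief_step({'s0'}, 'go', trans, obs))
        # {'left': {'s1'}, 'right': {'s2'}}  (dict order may vary)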