
    OBDD-Based Representation of Interval Graphs

    A graph G = (V, E) can be described by the characteristic function of its edge set, χ_E, which maps a pair of binary-encoded nodes to 1 iff the nodes are adjacent. Using Ordered Binary Decision Diagrams (OBDDs) to store χ_E can lead to a compact representation. Given the OBDD as input, symbolic/implicit OBDD-based graph algorithms can solve optimization problems mainly using functional operations, e.g., quantification or binary synthesis. While the OBDD representation size cannot be small in general, it can be provably small for special graph classes and then also lead to fast algorithms. In this paper, we show that the OBDD size of unit interval graphs is O(|V|/log |V|) and the OBDD size of interval graphs is O(|V| log |V|), which both improve a known result of Nunkesser and Woelfel (2009). Furthermore, we show that, using our variable order and node labeling for interval graphs, the worst-case OBDD size is Ω(|V| log |V|). We use the structure of the adjacency matrices to prove these bounds. This method may be of independent interest and can be applied to other graph classes. We also develop a maximum matching algorithm on unit interval graphs using O(log |V|) operations and a coloring algorithm for unit and general interval graphs using O(log² |V|) operations, and we evaluate the algorithms empirically. Comment: 29 pages, accepted for the 39th International Workshop on Graph-Theoretic Concepts 201
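To make the encoding concrete, the following sketch evaluates the characteristic function χ_E for an interval graph on binary-encoded node labels. The intervals and the 2-bit encoding are illustrative assumptions, not the paper's construction:

```python
# Sketch: chi_E for an interval graph, evaluated on binary-encoded nodes.
# The interval list below is a made-up example.

def bits(v, k):
    """Binary encoding of node label v using k bits (most significant first)."""
    return tuple((v >> i) & 1 for i in reversed(range(k)))

def chi_E(intervals, k):
    """Return chi_E as a function on two k-bit node encodings:
    chi_E(x, y) = 1 iff the intervals of x and y overlap and x != y."""
    def overlap(u, v):
        (a1, b1), (a2, b2) = intervals[u], intervals[v]
        return u != v and a1 <= b2 and a2 <= b1
    def f(x, y):
        u = int("".join(map(str, x)), 2)
        v = int("".join(map(str, y)), 2)
        return int(overlap(u, v))
    return f

# Four intervals: 0-1, 1-2, and 2-3 overlap; 0 and 3 are disjoint.
intervals = [(0, 2), (1, 4), (3, 6), (5, 8)]
f = chi_E(intervals, k=2)
print(f(bits(0, 2), bits(1, 2)))  # 1: (0,2) and (1,4) overlap
print(f(bits(0, 2), bits(3, 2)))  # 0: (0,2) and (5,8) are disjoint
```

An OBDD would store this function as a decision diagram over the four encoding bits rather than as an explicit adjacency structure.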

    Representation of graphs by OBDDs

    Recently, it has been shown in a series of works that the representation of graphs by Ordered Binary Decision Diagrams (OBDDs) often leads to good algorithmic behavior. However, the question for which graph classes an OBDD representation is advantageous has not yet been investigated. In this paper, the space requirements for the OBDD representation of certain graph classes, specifically cographs, several types of graphs with few P4s, unit interval graphs, interval graphs, and bipartite graphs, are investigated. Upper and lower bounds are proven for all these graph classes, and it is shown that in most (but not all) cases a representation of the graphs by OBDDs is advantageous with respect to space requirements.
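The space question can be made tangible with a minimal reduced-OBDD construction from a truth table. This is an illustrative sketch (variable order = bit order of the table index), not the constructions analyzed in the paper:

```python
# Minimal reduced OBDD built from a truth table, counting internal nodes.
# Nodes are hash-consed triples (level, low, high); a node whose two
# children coincide is skipped (the standard reduction rules).

def obdd(tt, unique=None):
    """Build a reduced OBDD from truth table tt (tuple of 0/1, length 2**n).
    Returns (root, unique); `unique` is the table of internal nodes."""
    if unique is None:
        unique = {}
    if len(tt) == 1:
        return tt[0], unique              # terminal 0 or 1
    half = len(tt) // 2
    low, _ = obdd(tt[:half], unique)      # branch: current variable = 0
    high, _ = obdd(tt[half:], unique)     # branch: current variable = 1
    if low == high:                       # redundant test: skip this node
        return low, unique
    key = (len(tt), low, high)            # level identified by table size
    return unique.setdefault(key, key), unique

def obdd_size(tt):
    """Number of internal nodes in the reduced OBDD for tt."""
    _, unique = obdd(tt)
    return len(unique)

# x0 XOR x1 needs 3 internal nodes; a constant function needs none.
print(obdd_size((0, 1, 1, 0)))  # 3
print(obdd_size((1, 1, 1, 1)))  # 0
```

The upper and lower bounds in the abstract concern exactly this node count, for the characteristic functions of the listed graph classes under suitable variable orders.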

    Classification of OBDD Size for Monotone 2-CNFs

    We introduce a new graph parameter called linear upper maximum induced matching width (lu-mim width), denoted for a graph G by lu(G). We prove that the smallest size of the OBDD for φ, the monotone 2-CNF corresponding to G, is sandwiched between 2^{lu(G)} and n^{O(lu(G))}. The upper bound is based on a combinatorial statement that might be of independent interest. We show that the bounds in terms of this parameter are best possible. The new parameter is closely related to two existing parameters: linear maximum induced matching width (lmim width) and linear special induced matching width (lsim width). We prove that lu-mim width lies strictly in between these two parameters, being dominated by lsim width and dominating lmim width. We conclude that neither of the two existing parameters can be used instead of lu-mim width to characterize the size of OBDDs for monotone 2-CNFs, and this justifies the introduction of the new parameter.
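The graph-to-formula correspondence the abstract relies on is standard: a graph G yields the monotone 2-CNF with one clause (x_u ∨ x_v) per edge {u, v}, whose satisfying assignments are exactly the vertex covers of G. A brute-force sketch of that correspondence:

```python
# The monotone 2-CNF phi_G of a graph: one clause (x_u OR x_v) per edge.
# Its models are the vertex covers of G; here we count them by enumeration.
from itertools import product

def monotone_2cnf_models(n, edges):
    """Count satisfying assignments of phi_G over variables x_0..x_{n-1}."""
    count = 0
    for assignment in product((0, 1), repeat=n):
        if all(assignment[u] or assignment[v] for u, v in edges):
            count += 1
    return count

# Path on 3 nodes (edges {0,1}, {1,2}): vertex covers are {1}, {0,1},
# {1,2}, {0,2}, {0,1,2}, so 5 of the 8 assignments are models.
print(monotone_2cnf_models(3, [(0, 1), (1, 2)]))  # 5
```

The paper's parameter lu(G) then bounds how compactly an OBDD can represent this set of models.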

    On graph algorithms for large-scale graphs

    The algorithmic challenges have changed in the last decade due to the rapid growth of the data sets that need to be processed. New types of algorithms on large graphs such as social graphs, computer networks, or state transition graphs have emerged to overcome the problem of ever-increasing data sets. In this thesis, we investigate two approaches to this problem. Implicit algorithms utilize lossless compression of the data to reduce its size and work directly on this compressed representation to solve optimization problems. In the case of graphs, we deal with the characteristic function of the edge set, which can be represented by Ordered Binary Decision Diagrams (OBDDs), a well-known data structure for Boolean functions. We develop a new technique to prove upper and lower bounds on the size of OBDDs representing graphs and apply this technique to several graph classes to obtain (almost) optimal bounds. A small input OBDD is essential for dealing with large graphs, but we also need algorithms that avoid large intermediate results during the computation. For this purpose, we design algorithms for specific graph classes that exploit the node encoding used for the results on the OBDD sizes. In addition, we lay the foundation for the theory of randomization in OBDD-based algorithms by investigating what kind of randomness is feasible and how to design algorithms with it. As a result, we present two randomized algorithms that outperform known deterministic algorithms on many input instances.
    Streaming algorithms are another approach to dealing with large graphs. In this model, the graph is presented as a stream of edge insertions or deletions, and the algorithms are permitted to use only a limited amount of memory. Often, the solution to a graph optimization problem can require up to a linear amount of space with respect to the number of nodes, which implies a trivial lower bound on the space requirement of any streaming algorithm for these problems. Computing a matching, i.e., a subset of edges such that no two edges are incident to a common node, is an example that has recently attracted a lot of attention in the streaming setting. If we are only interested in the size (or the weight, in the case of weighted graphs) of a matching, it is possible to break this linear bound. We focus on dynamic graph streams, where edges can be inserted and deleted, and reduce the problem of estimating the weight of a maximum-weight matching to the problem of estimating the size of a maximum matching, with a small loss in the approximation factor. In addition, we present the first dynamic graph stream algorithm for estimating the size of a matching in locally sparse graphs. On the negative side, we prove a space lower bound for streaming algorithms that estimate the size of a maximum matching with a small approximation factor.
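As a point of reference for the streaming model, the classic greedy algorithm for insertion-only streams maintains a maximal matching, whose size is within a factor of 2 of the maximum. (The dynamic-stream estimators discussed above are sketch-based and considerably more involved; this is only the textbook baseline.)

```python
# Greedy maximal matching over a stream of edge insertions: keep an edge
# iff both of its endpoints are still unmatched. The result is a maximal
# matching, hence a 2-approximation of the maximum matching size.

def greedy_stream_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path 0-1-2-3: greedy takes (0,1), skips (1,2), takes (2,3).
print(greedy_stream_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```

Note that this baseline breaks down once deletions are allowed, which is precisely what motivates the dynamic graph stream algorithms of the thesis.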

    Algebraic model counting

    Weighted model counting (WMC) is a well-known inference task on knowledge bases, and the basis for some of the most efficient techniques for probabilistic inference in graphical models. We introduce algebraic model counting (AMC), a generalization of WMC to a semiring structure that provides a unified view on a range of tasks and existing results. We show that AMC generalizes many well-known tasks in a variety of domains such as probabilistic inference, soft constraints, and network and database analysis. Furthermore, we investigate AMC from a knowledge compilation perspective and show that all AMC tasks can be evaluated using sd-DNNF circuits, which are strictly more succinct, and thus more efficient to evaluate, than direct representations of sets of models. We identify further characteristics of AMC instances that allow for evaluation on even more succinct circuits.
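The semiring abstraction behind AMC can be shown with a brute-force sum-product template instantiated with different semirings. (AMC's whole point is to avoid this enumeration via compilation to circuits; this sketch only illustrates how swapping the semiring changes the task.)

```python
# Brute-force illustration of algebraic model counting: one sum-product
# template, parameterized by a semiring (plus, times, one) and per-literal
# labels given by weight(var, value).
from itertools import product

def amc(n, models, plus, times, one, weight):
    """Combine (via `plus`) over all models of `models` the product
    (via `times`) of the labels of the literals set by the model."""
    total = None
    for m in product((0, 1), repeat=n):
        if not models(m):
            continue
        w = one
        for var, val in enumerate(m):
            w = times(w, weight(var, val))
        total = w if total is None else plus(total, w)
    return total

# Formula: x0 OR x1.
sat = lambda m: m[0] or m[1]
# Counting semiring (all weights 1): the number of models, 3.
print(amc(2, sat, lambda a, b: a + b, lambda a, b: a * b, 1,
          lambda v, b: 1))  # 3
# Probability semiring with P(x0)=0.5, P(x1)=0.2: P(x0 OR x1) ~ 0.6.
p = [0.5, 0.2]
print(amc(2, sat, lambda a, b: a + b, lambda a, b: a * b, 1.0,
          lambda v, b: p[v] if b else 1 - p[v]))
```

Evaluating the same two semirings on an sd-DNNF circuit instead of the explicit model set would replace the exponential enumeration by a single bottom-up pass over the circuit.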