36 research outputs found

    Variable Compression in ProbLog

    Full text link
    In order to compute the probability of a query, ProbLog represents the proofs of the query as disjunctions of conjunctions, for which a Reduced Ordered Binary Decision Diagram (ROBDD) is computed. The paper identifies patterns of Boolean variables that occur in Boolean formulae, namely AND-clusters and OR-clusters. Our method compresses the variables in these clusters and thus reduces the size of the ROBDDs without affecting the probability. We give a polynomial algorithm that detects AND-clusters in disjunctive normal form (DNF) Boolean formulae, or OR-clusters in conjunctive normal form (CNF) Boolean formulae. We experimentally evaluate the effects of AND-cluster compression for a real application of ProbLog. With our prototype implementation we obtain a significant performance improvement (up to 87%) for the generation of ROBDDs. Moreover, compressing AND-clusters of Boolean variables in the DNFs makes it feasible to deal with ProbLog queries that give rise to larger DNFs. Acceptance rate: 38%. Status: published.
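
    As an illustration of the cluster idea, the sketch below assumes (one plausible reading of the abstract) that an AND-cluster is a set of Boolean variables occurring in exactly the same conjunctions of the DNF; such a cluster can then be replaced by a single fresh variable whose probability is the product of its members' probabilities, which leaves the query probability unchanged. The representation and names are illustrative, and this is not the paper's exact detection algorithm.

```python
from collections import defaultdict
from math import prod

def and_clusters(dnf):
    """Group variables that occur in exactly the same conjunctions of a DNF.
    `dnf` is a list of conjunctions, each given as a frozenset of variable
    names; variables with identical occurrence sets form an AND-cluster."""
    occurrence = defaultdict(set)
    for i, conjunction in enumerate(dnf):
        for v in conjunction:
            occurrence[v].add(i)
    groups = defaultdict(list)
    for v, occ in occurrence.items():
        groups[frozenset(occ)].append(v)
    return [vs for vs in groups.values() if len(vs) > 1]

def compress(dnf, probs):
    """Replace every AND-cluster by one fresh variable whose probability is
    the product of the members' probabilities (independent facts assumed)."""
    new_dnf = [set(c) for c in dnf]
    new_probs = dict(probs)
    for k, cluster in enumerate(and_clusters(dnf)):
        fresh = f"c{k}"                      # hypothetical fresh variable name
        new_probs[fresh] = prod(probs[v] for v in cluster)
        for v in cluster:
            del new_probs[v]
        for conjunction in new_dnf:
            if set(cluster) <= conjunction:
                conjunction.difference_update(cluster)
                conjunction.add(fresh)
    return [frozenset(c) for c in new_dnf], new_probs

# x and y always occur together, so they collapse into one variable of
# probability 0.5 * 0.4 = 0.2.
dnf = [frozenset({"x", "y", "z"}), frozenset({"x", "y", "w"})]
print(compress(dnf, {"x": 0.5, "y": 0.4, "z": 0.3, "w": 0.2}))
```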

    Advances in Functional Decomposition: Theory and Applications

    Get PDF
    Functional decomposition aims at finding efficient representations for Boolean functions. It is used in many applications, including multi-level logic synthesis, formal verification, and testing. This dissertation presents novel heuristic algorithms for functional decomposition. These algorithms take advantage of suitable representations of the Boolean functions in order to be efficient. The first two algorithms compute simple-disjoint and disjoint-support decompositions. They are based on representing the target function by a Reduced Ordered Binary Decision Diagram (BDD). Unlike other BDD-based algorithms, the presented ones can deal with larger target functions and produce more decompositions without requiring expensive manipulations of the representation, particularly BDD reordering. The third algorithm also finds disjoint-support decompositions, but it is based on a technique which integrates circuit graph analysis and BDD-based decomposition. The combination of the two approaches results in an algorithm that is more robust than a purely BDD-based one and that improves both the quality of the results and the running time. The fourth algorithm uses circuit graph analysis to obtain non-disjoint decompositions. We show that the problem of computing non-disjoint decompositions can be reduced to the problem of computing multiple-vertex dominators. We also prove that multiple-vertex dominators can be found in polynomial time. This result is important because there is no known polynomial-time algorithm for computing all non-disjoint decompositions of a Boolean function. The fifth algorithm provides an efficient means to decompose a function at the circuit graph level, using information derived from a BDD representation. This is done without the expensive circuit re-synthesis normally associated with BDD-based decomposition approaches. Finally, we present two publications that resulted from the many detours we have taken along the winding path of our research.
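
    As an illustration of what a simple disjoint decomposition is, the sketch below applies the classical Ashenhurst column-multiplicity test on an explicit truth table: f(X) can be written as h(g(A), B) for a disjoint partition (A, B) iff the decomposition chart has at most two distinct columns. This brute-force check is for illustration only; it is not one of the BDD- or circuit-based algorithms of the dissertation.

```python
from itertools import product

def has_simple_disjoint_decomposition(f, n, bound):
    """Ashenhurst check: f can be written as h(g(A), B), with A = `bound` and
    B the remaining variables, iff the decomposition chart built from the
    truth table has at most two distinct columns. `f` maps a tuple of n
    0/1 values to a truth value; `bound` lists the indices of the bound set."""
    free = [i for i in range(n) if i not in bound]
    columns = set()
    for a in product([0, 1], repeat=len(bound)):
        column = []
        for b in product([0, 1], repeat=len(free)):
            x = [0] * n
            for i, v in zip(bound, a):
                x[i] = v
            for i, v in zip(free, b):
                x[i] = v
            column.append(f(tuple(x)))
        columns.add(tuple(column))
    return len(columns) <= 2

# f(x0,x1,x2,x3) = (x0 xor x1) and (x2 or x3) decomposes with bound set {0,1}.
f = lambda x: (x[0] ^ x[1]) and (x[2] or x[3])
print(has_simple_disjoint_decomposition(f, 4, [0, 1]))   # True
print(has_simple_disjoint_decomposition(f, 4, [0, 2]))   # False
```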

    Anytime algorithms for ROBDD symmetry detection and approximation

    Get PDF
    Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory-efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. The state of the art in symmetry detection is Mishchenko's algorithm, which shows how to detect symmetries in ROBDDs without checking equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations, where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries, retaining the same asymptotic complexity for each type of generalised symmetry. Both the classical and the generalised symmetry detection algorithms are monolithic, in the sense that they only return a meaningful answer when they are left to run to completion. In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, which output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that, given sufficient time, they are guaranteed to find all symmetric pairs. Theoretically these algorithms reside in O(n³+n|G|+|G|³) and O(n³+n²|G|+|G|³) respectively, where n is the number of variables, so in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation, so the resulting algorithms are very efficient. The thesis continues by considering another class of anytime algorithms for ROBDDs, motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also to the number of operations applied. The number of operations is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used both to constrain the size of an ROBDD and to ensure that the number of times it is weakened is bounded by some given constant. The widening can be used to systematically approximate an ROBDD either from above (i.e. derive a weaker function) or from below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation.
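
    For reference, classical symmetry in a pair (x_i, x_j) means that exchanging the two variables never changes the function, i.e. the cofactors f[x_i=0, x_j=1] and f[x_i=1, x_j=0] coincide. The brute-force sketch below only illustrates this definition on a truth table; it is not Mishchenko's algorithm, nor the anytime ROBDD-based algorithms of the thesis.

```python
from itertools import product

def symmetric_pairs(f, n):
    """Naively list classically symmetric variable pairs of f: (x_i, x_j) is
    symmetric iff swapping the two variables never changes the value of f.
    Truth-table check, exponential in n, for illustration only."""
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            swap_invariant = True
            for x in product([0, 1], repeat=n):
                y = list(x)
                y[i], y[j] = x[j], x[i]
                if f(x) != f(tuple(y)):
                    swap_invariant = False
                    break
            if swap_invariant:
                pairs.append((i, j))
    return pairs

# x0 and x1 are symmetric in (x0 & x1) | x2, but x0 and x2 are not.
print(symmetric_pairs(lambda x: (x[0] & x[1]) | x[2], 3))   # [(0, 1)]
```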

    Symmetry detection for large Boolean functions using circuit representation, simulation, and satisfiability

    Get PDF

    Decision procedures for equality logic with uninterpreted functions

    Get PDF
    In this thesis we present a number of techniques for deciding satisfiability within decidable fragments of first-order logic with equality. The aim of the thesis is primarily to develop new techniques, rather than an efficient implementation for deciding satisfiability. As the general logical framework we use quantifier-free first-order predicate logic. We describe some basic procedures for deciding satisfiability of propositional formulae: the DP procedure, the DPLL procedure, and a technique based on BDDs. These techniques are really families of algorithms rather than single algorithms; their behaviour is determined by a number of choices made during execution. We give a formal description of resolution and analyse in detail the relation between resolution and DPLL. It is known that a DPLL proof of unsatisfiability (refutation) can be transformed directly into a resolution refutation of comparable length. This thesis introduces a transformation of such a DPLL proof into a resolution proof of the shortest possible length. We present GDPLL, a generalisation of the DPLL procedure, which is applicable to the satisfiability problem for decidable fragments of quantifier-free first-order logic. Sufficient properties are identified to prove soundness, termination and completeness of GDPLL. We describe methods for deciding satisfiability in the logic of equality with uninterpreted functions (EUF). This logic has been proposed for verifying abstract hardware designs; deciding satisfiability quickly in this logic is important for such verifications to succeed. In recent years several procedures have been proposed for deciding the satisfiability of such formulae. We describe a new approach for deciding satisfiability of equality-logic formulae given in conjunctive normal form. Central to this approach is a single proof rule called equality resolution, for which we prove soundness and completeness. Based on this rule we propose a complete procedure for deciding satisfiability of such formulae, and we prove its correctness. In addition, we present another new procedure for deciding satisfiability of EUF formulae, based on the GDPLL method. Finally, we extend BDDs for propositional logic to the logic with equality. We prove that all paths in these extended BDDs are satisfiable; thus it can be determined in constant time whether a formula is a tautology, a contradiction, or merely satisfiable.
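
    For readers unfamiliar with the procedure family mentioned above, the sketch below is a toy DPLL solver for propositional CNF (unit propagation plus case splitting). It is a minimal illustration only, not the GDPLL generalisation or the equality-resolution procedure developed in the thesis.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch for propositional CNF. Clauses are sets of
    non-zero integer literals (negative means negated). Returns a satisfying
    assignment as {variable: bool}, or None if the formula is unsatisfiable."""
    assignment = assignment or {}
    clauses = [set(c) for c in clauses]
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if not c:
                return None                       # empty clause: conflict
            if len(c) == 1:
                lit = next(iter(c))
                assignment[abs(lit)] = lit > 0
                clauses = [d - {-lit} for d in clauses if lit not in d]
                changed = True
                break
    if not clauses:
        return assignment                         # all clauses satisfied
    # Case split on a literal of the first remaining clause.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(clauses + [{choice}], dict(assignment))
        if result is not None:
            return result
    return None

# A satisfying assignment, e.g. {1: True, 2: True, 3: True}.
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))
```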

    Parameterized verification and repair of concurrent systems

    Get PDF
    In this thesis, we present novel approaches for model checking, repair and synthesis of systems that may be parameterized in their number of components. The parameterized model checking problem (PMCP) is in general undecidable, and therefore the focus is on restricted classes of parameterized concurrent systems where the problem is decidable. Under certain conditions, the problem is decidable for guarded protocols and for systems that communicate via token, pairwise, or broadcast synchronization. In this thesis we improve existing results for guarded protocols, and we show that the PMCP of guarded protocols and token-passing systems is decidable for specifications in Prompt-LTL, a logic that adds a quantitative aspect to LTL. Furthermore, we present, to our knowledge, the first parameterized repair algorithm. The parameterized repair problem is to find a refinement of a process implementation p such that the concurrent system with an arbitrary number of instances of p is correct. We show how this algorithm can be used on classes of systems that can be represented as well-structured transition systems (WSTS). Additionally, we present two safety synthesis algorithms that utilize a lazy approach. Given a faulty system, the algorithms first model check the system symbolically; the obtained error traces are then analyzed to synthesize a candidate that admits no such traces. Experimental results show that our algorithm solves a number of benchmarks that are intractable for existing tools. Furthermore, we introduce our tool AIGEN for generating random Boolean functions and transition systems in a symbolic representation.
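
    The lazy approach described above (model check, analyze error traces, refine) follows the familiar counterexample-guided pattern. The following is a purely schematic sketch of such a loop; the callbacks `model_check` and `refine` are hypothetical placeholders, and this is not the thesis' algorithms for guarded protocols or WSTS.

```python
def lazy_repair(system, spec, model_check, refine, max_rounds=100):
    """Schematic counterexample-guided loop: check the current candidate,
    and if error traces are found, refine the candidate to exclude them.
    `model_check(candidate, spec)` is assumed to return a (possibly empty)
    list of error traces; `refine(candidate, traces)` is assumed to return
    a new candidate ruling those traces out. Both names are hypothetical."""
    candidate = system
    for _ in range(max_rounds):
        error_traces = model_check(candidate, spec)   # [] means correct
        if not error_traces:
            return candidate
        candidate = refine(candidate, error_traces)   # rule out these traces
    raise RuntimeError("no correct candidate found within the round limit")
```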

    Aggressive aggregation

    Get PDF
    Among the first steps in a compilation pipeline is the construction of an Intermediate Representation (IR), an in-memory representation of the input program. Any attempt at program optimisation, in terms of both size and running time, has to operate on this structure. There may be one or multiple such IRs; however, most compilers internally use some form of Control Flow Graph (CFG). This representation clearly aims at general-purpose programming languages, for which it is well suited and allows for many classical program optimisations. On the other hand, a growing structural difference between the input program and the chosen IR can lose or obfuscate information that can be crucial for effective optimisation. With today's rise of a multitude of different programming languages, Domain-Specific Languages (DSLs), and computing platforms, the classical machine-oriented IR is reaching its limits and a broader variety of IRs is needed. This realisation yielded, e.g., the Multi-Level Intermediate Representation (MLIR), a compiler framework that facilitates the creation of a wide range of IRs and encourages their reuse among different programming languages and the corresponding compilers. In this modern spirit, this dissertation explores the potential of Algebraic Decision Diagrams (ADDs) as an IR for (domain-specific) program optimisation. The data structure has remained the state of the art for Boolean function representation for more than thirty years and is well known for its optimality in size and depth, i.e. running time. As such, it is ideally suited to represent the corresponding classes of programs in the role of an IR. We discuss its application in a variety of program domains, ranging from DSLs to machine-learned programs and even to general-purpose programming languages. Two representatives of DSLs, a graphical and a textual one, prove the adequacy of ADDs for the program optimisation of modelled decision services. The resulting DSLs facilitate experimentation with ADDs and provide valuable insight into their potential and limitations: input programs can be aggregated in a radical fashion, at the risk of occasional exponential growth. With the aggregation of large Random Forests into a single ADD, we bring this potential to a program domain of practical relevance. The results are impressive: both the running time and the size of the Random Forest program are reduced by multiple orders of magnitude. It turns out that this ADD-based aggregation can be generalised, even to general-purpose programming languages. The resulting method achieves impressive speedups for a seemingly optimal program: the iterative Fibonacci implementation. Altogether, ADDs facilitate effective program optimisation where the input programs allow for a natural transformation to the data structure. In these cases, they have proven to be an extremely powerful tool for the optimisation of a program's running time and, in some cases, of its size. The exploration of their potential as an IR has only started and deserves attention in future research.
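
    As a rough illustration of how tree-shaped predictors can be aggregated into a single decision diagram, the sketch below implements a tiny ADD with hash-consing and the standard apply operation, and uses it to sum two one-split "trees". It is a generic textbook construction with illustrative names, not the dissertation's aggregation method or tooling.

```python
# A tiny Algebraic Decision Diagram: terminals carry numbers, inner nodes test
# a variable (identified by its index in a fixed order) and branch low/high.
class ADD:
    __slots__ = ("var", "low", "high", "value")
    _unique = {}                                 # hash-consing table

    def __init__(self, var, low, high, value):
        self.var, self.low, self.high, self.value = var, low, high, value

    @classmethod
    def terminal(cls, value):
        return cls._node(None, None, None, value)

    @classmethod
    def node(cls, var, low, high):
        if low is high:
            return low                           # redundant test: reduce
        return cls._node(var, low, high, None)

    @classmethod
    def _node(cls, var, low, high, value):
        key = (var, id(low), id(high), value)
        if key not in cls._unique:
            cls._unique[key] = cls(var, low, high, value)
        return cls._unique[key]                  # canonical, shared nodes

def apply_add(f, g, op, cache=None):
    """Combine two ADDs pointwise with `op` (e.g. addition): the standard
    apply procedure, which can fold several tree-shaped predictors into one
    diagram. Assumes both ADDs respect the same variable order."""
    cache = {} if cache is None else cache
    key = (id(f), id(g))
    if key in cache:
        return cache[key]
    if f.var is None and g.var is None:
        result = ADD.terminal(op(f.value, g.value))
    else:
        var = min(v for v in (f.var, g.var) if v is not None)
        fl, fh = (f.low, f.high) if f.var == var else (f, f)
        gl, gh = (g.low, g.high) if g.var == var else (g, g)
        result = ADD.node(var, apply_add(fl, gl, op, cache),
                               apply_add(fh, gh, op, cache))
    cache[key] = result
    return result

# Aggregate two single-split "trees" over variable 0 into their sum.
t1 = ADD.node(0, ADD.terminal(0.0), ADD.terminal(1.0))
t2 = ADD.node(0, ADD.terminal(0.2), ADD.terminal(0.8))
forest = apply_add(t1, t2, lambda a, b: a + b)
print(forest.low.value, forest.high.value)       # 0.2 1.8
```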

    Automated hierarchical service level agreements

    Get PDF
    The present dissertation concerns the area of Service Computing. More specifically, it contributes to equipping IT service stacks with dependability, so that they can be used even more widely in pragmatic business environments and applications. The instrument used for this purpose is a Service Level Agreement (SLA). The main focus is on SLA hierarchies, which reflect corresponding service hierarchies. SLAs may be established manually, or automatically among software agents; it is mainly the latter case that is considered here. The thesis contributes a formal problem definition for the construction of SLA hierarchies using a translation process, a management architecture, a formal model for defining penalties, and a representation that facilitates the processing of SLAs. Using these tools, it is shown that automated SLA management in hierarchical setups is possible, through an application to Multi-Domain Infrastructure-as-a-Service. Within this specific technical area, different SLA-based resource capacity planning approaches are examined via simulation, both for online and for offline planning. The former case concerns normal runtime operations, and the thesis examines two greedy algorithms with regard to their energy-saving efficiency and their performance. In the latter case, a resource-scarce environment is simulated with the purpose of minimizing penalties from already established SLAs. This is achieved via formally defined combinatorial models, which are solved and compared to two greedy algorithms.
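
    As a loose illustration of greedy capacity planning in this setting, the sketch below packs resource requests onto as few hosts as possible so that idle hosts can be powered down. It is a generic first-fit-decreasing heuristic with invented names, not one of the two greedy algorithms examined in the dissertation.

```python
def first_fit_decreasing(requests, host_capacity):
    """Illustrative greedy placement: sort requests by decreasing demand and
    place each on the first host with enough remaining capacity, opening a
    new host only when necessary. Returns the placement and the number of
    hosts that had to be powered on."""
    hosts = []                                    # remaining capacity per host
    placement = {}
    for name, demand in sorted(requests.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[name] = i
                break
        else:
            hosts.append(host_capacity - demand)  # power on a new host
            placement[name] = len(hosts) - 1
    return placement, len(hosts)

# Four requests fit on two hosts of capacity 5.
print(first_fit_decreasing({"vm1": 4, "vm2": 3, "vm3": 2, "vm4": 1}, 5))
```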

    A Succinct Approach to Static Analysis and Model Checking

    Get PDF