7,139 research outputs found

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; the analysis is ultimately limited, however, by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. Since symbolic approaches are often much more efficient than explicit ones on practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
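    Both the explicit and the symbolic approach compute the same least fixpoint. Below is a minimal sketch of that loop in Python; state sets are plain sets here, whereas a symbolic tool would represent them as decision diagrams and compute the image as a relational product. The toy `image` function and the 2-bit counter model are illustrative assumptions, not taken from the paper.

```python
# Least-fixpoint reachability: explicit sets stand in for the decision
# diagrams a symbolic tool would use.

def reachable(initial, image):
    """All states reachable from `initial` under one-step function `image`."""
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        successors = image(frontier)       # one breadth-first step
        frontier = successors - reached    # keep only genuinely new states
        reached |= frontier
    return reached

# Toy model: a counter N over {0..3} that can increment (wrapping) or reset.
def image(states):
    succ = set()
    for n in states:
        succ.add((n + 1) % 4)  # increment
        succ.add(0)            # reset
    return succ

print(reachable({0}, image))   # {0, 1, 2, 3}; "can N become negative?" -> no
```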

    On the Error Resilience of Ordered Binary Decision Diagrams

    Ordered Binary Decision Diagrams (OBDDs) are a data structure used in a growing number of fields of Computer Science (e.g., logic synthesis, program verification, data mining, bioinformatics, and data protection) for representing and manipulating discrete structures and Boolean functions. The purpose of this paper is to study the error resilience of OBDDs and to design a resilient version of this data structure, i.e., a self-repairing OBDD. In particular, we describe some strategies that make reduced OBDDs resilient to errors in the indexes associated with the input variables, or in the pointers (i.e., OBDD edges) of the nodes. These strategies exploit the inherent redundancy of the data structure, as well as the redundancy introduced by its efficient implementations. The solutions we propose allow the exact restoration of the original OBDD and are suitable for application to the classical software packages for OBDD manipulation currently in use. Another result of the paper is the definition of a new canonical OBDD model, called Index-resilient Reduced OBDD, which guarantees that a node with a faulty index has a reconstruction cost of O(k), where k is the number of nodes with corrupted indexes.
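    To give a flavor of the redundancy argument, here is a hedged sketch of one such repair, assuming a quasi-reduced OBDD in which every edge connects adjacent levels; the paper's index-resilient model is more general, and the `Node`/`repair_index` names are hypothetical.

```python
# Hedged sketch: restore a corrupted variable index from the (assumed
# correct) children, valid only under the quasi-reduced assumption
# that every edge goes exactly one level down.

class Node:
    def __init__(self, index, low=None, high=None):
        self.index = index   # variable level; may be corrupted by a fault
        self.low = low       # 0-successor (None for terminals)
        self.high = high     # 1-successor

def repair_index(node, n_vars):
    """Reconstruct a faulty index using the children's indexes."""
    if node.low is None:     # terminal: index is n_vars by convention
        node.index = n_vars
        return node.index
    # In a quasi-reduced OBDD both children sit exactly one level below.
    node.index = min(node.low.index, node.high.index) - 1
    return node.index

t = Node(3)                  # terminal for a 3-variable OBDD
a = Node(2, t, t)            # level-2 node
b = Node(99, a, a)           # level-1 node whose index was corrupted
repair_index(b, n_vars=3)    # restores b.index to 1
```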

    Lex-Partitioning: A New Option for BDD Search

    For the exploration of large state spaces, symbolic search using binary decision diagrams (BDDs) can save huge amounts of memory and computation time. State sets are represented and modified by accessing and manipulating their characteristic functions. BDD partitioning is used to compute the image as the disjunction of smaller subimages. In this paper, we propose a novel BDD partitioning option. The partitioning is lexicographic in the binary representation of the states contained in the set represented by a BDD, and uniform with respect to the number of states represented. The motivation for controlling the state set sizes in the partitioning is to eventually bridge the gap between explicit and symbolic search. Let n be the size of the binary state vector. We propose an O(n) ranking and unranking scheme that supports negated edges and operates on top of precomputed satcount values. For the uniform split of a BDD, we then use unranking to provide the paths along which we partition the BDD. In a shared BDD representation the effort remains O(n). The algorithms are fully integrated in the CUDD library and evaluated on strongly solving general game playing benchmarks.
    Comment: In Proceedings GRAPHITE 2012, arXiv:1210.611
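    The ranking/unranking idea is easiest to see without negated edges. Below is a minimal sketch over a quasi-reduced BDD (every variable tested on every path); the paper's scheme additionally handles complement edges and shared BDDs, and all names here are illustrative. Each step descends one level, so both directions cost O(n).

```python
# Rank/unrank satisfying assignments of a quasi-reduced BDD using
# precomputed satcounts. Simplified sketch without negated edges.

TRUE, FALSE = "T", "F"

class Node:
    def __init__(self, low, high):
        self.low, self.high = low, high
        self.sat = satcount(low) + satcount(high)  # precomputed satcount

def satcount(node):
    if node == TRUE:  return 1
    if node == FALSE: return 0
    return node.sat

def unrank(node, r):
    """Return the r-th satisfying assignment (lexicographic, 0-based)."""
    bits = []
    while node not in (TRUE, FALSE):
        left = satcount(node.low)
        if r < left:
            bits.append(0); node = node.low
        else:
            bits.append(1); r -= left; node = node.high
    return bits

def rank(node, bits):
    """Inverse of unrank: position of `bits` among all solutions."""
    r = 0
    for b in bits:
        if b: r += satcount(node.low); node = node.high
        else: node = node.low
    return r

# f = x0 XOR x1: solutions 01 and 10.
leaf0 = Node(FALSE, TRUE)   # true iff x1 = 1
leaf1 = Node(TRUE, FALSE)   # true iff x1 = 0
root  = Node(leaf0, leaf1)  # tests x0
print(unrank(root, 0), unrank(root, 1))  # [0, 1] [1, 0]
print(rank(root, [1, 0]))                # 1
```

    With unrank in hand, a uniform p-way split of a set S simply takes the paths at ranks k*|S|/p for k = 1, ..., p-1 as partition boundaries.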

    Efficient parallel binary decision diagram construction using Cilk

    Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (leaves 44-45). By David B. Berman, S.B. and M.Eng.

    Multi-core Decision Diagrams

    Decision diagrams are fundamental data structures that revolutionized fields such as model checking, automated reasoning and decision processes. As performance gains in the current era mostly come from parallel processing, an ongoing challenge is to develop data structures and algorithms for modern multi-core architectures. This chapter describes the parallelization of decision diagram operations as implemented in the parallel decision diagram package Sylvan, which allows sequential algorithms that use decision diagrams to exploit the power of multi-core machines.
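    The parallelism comes from the recursive structure of the apply operation: the two cofactor subproblems are independent. The sketch below shows that structure sequentially in Python, with comments marking where a work-stealing runtime such as Sylvan's would fork tasks; the node layout and the cache are simplified assumptions, not Sylvan's actual API.

```python
# Recursive BDD apply (AND). In Sylvan the two recursive calls become
# SPAWN/SYNC tasks over lock-free hash tables; here they run in order.

from functools import lru_cache

TRUE, FALSE = "T", "F"

class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

@lru_cache(maxsize=None)          # plays the role of the computed table
def apply_and(f, g):
    if f == FALSE or g == FALSE: return FALSE
    if f == TRUE: return g
    if g == TRUE: return f
    v = min(f.var, g.var)         # topmost variable of the two operands
    f0, f1 = (f.low, f.high) if f.var == v else (f, f)
    g0, g1 = (g.low, g.high) if g.var == v else (g, g)
    low  = apply_and(f0, g0)      # SPAWN: could run as a parallel task
    high = apply_and(f1, g1)      # runs concurrently in a parallel runtime
    #                               SYNC: join both before building the node
    if low is high: return low    # reduction rule
    return Node(v, low, high)     # a real package would hash-cons here

x0, x1 = Node(0, FALSE, TRUE), Node(1, FALSE, TRUE)
g = apply_and(x0, x1)             # BDD for x0 AND x1
```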

    A Decision Diagram Operation for Reachability

    Saturation is considered the state-of-the-art method for computing fixpoints with decision diagrams. We present a relatively simple decision diagram operation called REACH that also computes fixpoints. In contrast to saturation, it does not require a partitioning of the transition relation. We give sequential algorithms implementing the new operation for both binary and multi-valued decision diagrams, and moreover provide parallel counterparts. We implement these algorithms and experimentally compare their performance against saturation on 692 model checking benchmarks in different languages. The results show that the REACH operation often outperforms saturation, especially on transition relations with low locality. In a comparison between parallelized versions of REACH and saturation, we find that REACH obtains comparable speedups up to 16 cores, although it falls behind saturation at 64 cores. Finally, in a comparison with the state-of-the-art model checking tool ITS-tools, we find that REACH outperforms ITS-tools on 29% of the models, suggesting that REACH can be useful as a complementary method in an ensemble tool.
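    The practical difference lies in what the caller must supply. A hedged contrast sketch (explicit sets and pair-relations stand in for the decision diagrams the real operations work on): a REACH-style fixpoint takes one monolithic relation, while a saturation-style schedule drives each sub-relation of a user-supplied partition to a local fixpoint.

```python
# Explicit-set stand-ins for the two styles of fixpoint computation.

def reach(initial, relation):
    """Fixpoint over one monolithic relation of (source, target) pairs."""
    reached, frontier = set(initial), set(initial)
    while frontier:
        frontier = {t for (s, t) in relation if s in frontier} - reached
        reached |= frontier
    return reached

def reach_partitioned(initial, relations):
    """Saturation-like schedule over a partition: drive each sub-relation
    to a local fixpoint; the real algorithm interleaves this with the
    levels of the decision diagram."""
    reached, changed = set(initial), True
    while changed:
        changed = False
        for rel in relations:
            while True:
                new = {t for (s, t) in rel if s in reached} - reached
                if not new:
                    break
                reached |= new
                changed = True
    return reached
```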

    Learning Task Specifications from Demonstrations

    Real-world applications often naturally decompose into several sub-tasks. In many settings (e.g., robotics), demonstrations provide a natural way to specify the sub-tasks. However, most methods for learning from demonstrations either do not provide guarantees that the artifacts learned for the sub-tasks can be safely recombined or limit the types of composition available. Motivated by this deficit, we consider the problem of inferring Boolean non-Markovian rewards (also known as logical trace properties or specifications) from demonstrations provided by an agent operating in an uncertain, stochastic environment. Crucially, specifications admit well-defined composition rules that are typically easy to interpret. In this paper, we formulate the specification inference task as a maximum a posteriori (MAP) probability inference problem, apply the principle of maximum entropy to derive an analytic demonstration likelihood model, and give an efficient approach to search for the most likely specification in a large candidate pool of specifications. In our experiments, we demonstrate how learning specifications can help avoid common problems that often arise due to ad-hoc reward composition.
    Comment: NIPS 201
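    Schematically, the inference reduces to scoring each candidate and keeping the best. The sketch below is a generic MAP search; the paper's contribution is the specific maximum-entropy likelihood, which the `likelihood` argument merely stands in for, and all names here are hypothetical.

```python
# Schematic MAP search over a candidate pool of specifications.

import math

def map_specification(demos, candidates, prior, likelihood):
    """Return the spec maximizing log prior + log likelihood of the demos."""
    best, best_score = None, -math.inf
    for spec in candidates:
        score = math.log(prior(spec)) + sum(
            math.log(likelihood(traj, spec)) for traj in demos)
        if score > best_score:
            best, best_score = spec, score
    return best
```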

    Predicting Memory Demands of BDD Operations using Maximum Graph Cuts (Extended Paper)

    The BDD package Adiar manipulates Binary Decision Diagrams (BDDs) in external memory. This enables handling big BDDs, but performance suffers when dealing with moderate-sized BDDs. This is mostly due to initializing expensive external memory data structures, even when their contents fit entirely inside internal memory. The contents of these auxiliary data structures always correspond to a graph cut in an input or output BDD. Specifically, these cuts respect the levels of the BDD. We formalise the shape of these cuts and prove sound upper bounds on their maximum size for each BDD operation. We have implemented these upper bounds within Adiar. With these bounds, it can predict whether a faster internal memory variant of the auxiliary data structures can be used. In practice, this improves Adiar's running time across the board. Specifically for moderate-sized BDDs, this results in an average reduction of the computation time by 86.1% (median of 89.7%). In some cases, the difference is even 99.9%. When checking equivalence of hardware circuits from the EPFL Benchmark Suite, for one of the instances the time was decreased by 52 hours.
    Comment: 25 pages, 11 figures, 2 tables. Extended version of paper published at ATVA 202
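    A level-respecting cut is easy to compute on an explicit node table. The sketch below counts, for every boundary between adjacent levels, how many edges cross it, and returns the maximum; the paper's bounds are more refined (e.g., treating arcs to terminals separately), and the node layout here is a hypothetical one, not Adiar's.

```python
# Hedged sketch: maximum level cut of a BDD. `nodes` maps a node id to
# (level, low, high); ids "T"/"F" denote terminals below the last level.

def max_level_cut(nodes, n_levels):
    cut = [0] * (n_levels + 1)   # cut[b]: edges crossing boundary b,
                                 # which separates level b-1 from level b
    for lvl, low, high in nodes.values():
        for child in (low, high):
            child_lvl = nodes[child][0] if child in nodes else n_levels
            for b in range(lvl + 1, child_lvl + 1):
                cut[b] += 1      # edge spans every boundary it crosses
    return max(cut)

# x0 AND x1: root "n0" at level 0, "n1" at level 1, terminals below.
nodes = {"n0": (0, "F", "n1"), "n1": (1, "F", "T")}
print(max_level_cut(nodes, 2))   # 3: n0->F spans both boundaries,
                                 # and n1's two edges cross boundary 2
```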

    JINC - A Multi-Threaded Library for Higher-Order Weighted Decision Diagram Manipulation

    Ordered Binary Decision Diagrams (OBDDs) have proven to be an efficient data structure for symbolic algorithms. The efficiency of the symbolic methods depends on the underlying OBDD library. Available OBDD libraries are based on the standard concepts and so far only differ in implementation details. This thesis introduces new techniques to increase the run-time and space-efficiency of an OBDD library. It introduces the framework of Higher-Order Weighted Decision Diagrams (HOWDDs) to combine the similarities of different OBDD variants. This framework pioneers the basis for the new variant Toggling Algebraic Decision Diagrams (TADDs), which has been shown to be a space-efficient HOWDD variant for symbolic matrix representation. The concept of HOWDDs has been used to implement the OBDD library JINC. The thesis also analyzes the usage of multi-threading techniques to speed up OBDD manipulations. A new reordering framework applies the advantages of multi-threading techniques to reordering algorithms. This approach uses an abstraction layer so that the original reordering algorithms are not touched. The challenge that arises from a straightforward algorithm is that the computed-tables and the garbage collection are not as efficient as in a single-threaded environment. We resolve this problem by developing a new multi-operand APPLY algorithm that eliminates the creation of temporary nodes which could occur during computation, and thus reduces the need for caching or garbage collection. The HOWDD framework leads to an efficient library design which has been shown to be more efficient than the established OBDD library CUDD. The HOWDD instance TADD reduces the needed number of nodes by a factor of two compared to ordinary ADDs. The new multi-threading approaches are more efficient than single-threaded approaches by several factors. In the case of the new reordering framework, the speedup almost equals the theoretical optimum. The novel multi-operand APPLY algorithm reduces the memory usage for the n-queens problem by a factor of 50, which enables the calculation of bigger problem instances compared to the traditional APPLY approach. The new approaches improve the performance and reduce the memory footprint. This leads to the conclusion that applications should be reviewed as to whether they could benefit from the new multi-threading and multi-operand approaches introduced and discussed in this thesis.
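    The point of a multi-operand APPLY is visible already in a toy implementation: folding a conjunction pairwise materializes each intermediate BDD, whereas recursing over all operands at once builds only nodes of the final result. A hedged sketch follows, with a simplified node layout and no computed table; this is not JINC's actual code.

```python
# Multi-operand AND: recurse on all operands simultaneously instead of
# folding apply_and(f, apply_and(g, h)), so no temporary intermediate
# diagrams are created.

TRUE, FALSE = "T", "F"

class Node:
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

def apply_and_many(fs):
    fs = [f for f in fs if f != TRUE]          # TRUE is the AND identity
    if any(f == FALSE for f in fs): return FALSE
    if not fs: return TRUE
    if len(fs) == 1: return fs[0]
    v = min(f.var for f in fs)                 # topmost variable
    lows  = [f.low  if f.var == v else f for f in fs]
    highs = [f.high if f.var == v else f for f in fs]
    low, high = apply_and_many(lows), apply_and_many(highs)
    if low is high: return low                 # reduction rule
    return Node(v, low, high)                  # real code would hash-cons

x0, x1, x2 = (Node(i, FALSE, TRUE) for i in range(3))
r = apply_and_many([x0, x1, x2])               # BDD for x0 AND x1 AND x2
```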