
    Component-wise incremental LTL model checking

    Efficient symbolic and explicit-state model checking approaches have been developed for the verification of linear-time temporal logic (LTL) properties. Several attempts have been made to combine the advantages of the various algorithms. Model checking LTL properties usually poses two challenges: one must compute the synchronous product of the state space and the automaton model of the desired property, then search for counterexamples, a task that reduces to finding strongly connected components (SCCs) in the state space of the product. For concurrent systems, where state space explosion often prevents successful verification, the so-called saturation algorithm has proved its efficiency in state space exploration. This paper proposes a new approach that leverages the saturation algorithm both as an iteration strategy that constructs the product directly and in a new fixed-point computation algorithm that finds strongly connected components on the fly by incrementally processing the components of the model. Complementing the search for SCCs, explicit techniques and component-wise abstractions are used to prove the absence of counterexamples. The resulting on-the-fly, incremental LTL model checking algorithm scales well with the size of models, as the evaluation on models of the Model Checking Contest suggests.
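
    As a hedged aside: the abstract names two steps, building the synchronous product of the model and the property automaton, and reducing counterexample search to finding accepting SCCs. The sketch below illustrates exactly those two steps in a plain explicit-state style with Tarjan's algorithm; it is not the paper's saturation-based symbolic approach, and all interface names (model_succ, buchi_succ, and so on) are illustrative assumptions.

```python
# Explicit-state sketch: product construction, then accepting-SCC search.
from itertools import count

def synchronous_product(model_init, model_succ, model_label,
                        buchi_init, buchi_succ):
    """On-the-fly product: states are (model state, automaton state) pairs."""
    init = [(s, q) for s in model_init for q in buchi_init]
    succ, stack, seen = {}, list(init), set(init)
    while stack:
        s, q = stack.pop()
        succ[(s, q)] = [(s2, q2)
                        for s2 in model_succ(s)
                        for q2 in buchi_succ(q, model_label(s2))]
        for v in succ[(s, q)]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return init, succ

def has_accepting_cycle(init, succ, accepting):
    """Tarjan's SCC algorithm: a counterexample exists iff some nontrivial
    SCC contains a state whose automaton component is accepting."""
    idx, low, on_stack, stack = {}, {}, set(), []
    counter, found = count(), [False]

    def visit(v):
        idx[v] = low[v] = next(counter)
        stack.append(v); on_stack.add(v)
        for w in succ[v]:
            if w not in idx:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], idx[w])
        if low[v] == idx[v]:                  # v roots an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            nontrivial = len(scc) > 1 or v in succ[v]
            if nontrivial and any(q in accepting for (_, q) in scc):
                found[0] = True

    for v in init:
        if v not in idx:
            visit(v)
    return found[0]
```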

    Witness generation in existential CTL model checking

    Hardware and software systems are widely used in applications where failure is prohibitively costly or even unacceptable. The main obstacle to making such systems more reliable and capable of more complex and sensitive tasks is our limited ability to design and implement them with a sufficiently high degree of confidence in their correctness under all circumstances. As an automated technique that verifies the system early in the design phase, model checking explores the state space of the system exhaustively and rigorously to determine whether the system satisfies the specifications, detecting fatal errors that may be missed by simulation and testing. One essential advantage of model checking is the capability to generate witnesses and counterexamples. They are simple and straightforward forms of evidence to prove an existential specification or falsify a universal specification. Besides enhancing the credibility of the model checker's conclusion, they either strengthen engineers' confidence in the system or provide hints to reveal potential defects. In this dissertation, we focus on symbolic model checking with specifications expressed in computation tree logic (CTL), which describes branching-time behaviors of the system, and investigate witness generation techniques for the existential fragment of CTL, i.e., ECTL, covering both decision-diagram-based and SAT-based approaches. Since witnesses provide important debugging information and may be inspected by engineers, smaller ones are always preferable to ease their interpretation and understanding. To the best of our knowledge, no existing witness generation technique guarantees minimality for a general ECTL formula with nested existential CTL operators. One contribution of this dissertation is to fill this gap with a minimality guarantee. With the help of the saturation algorithm, our approach computes the minimum witness size for the given ECTL formula in every state, stored in an additive edge-valued multiway decision diagram (EV+MDD), a variant of the well-known binary decision diagram (BDD), and then builds a minimum witness. Though computationally intensive, this has promising applications in reducing engineers' workload. SAT-based model checking, in particular bounded model checking, reduces a model checking problem to a satisfiability problem and leverages a SAT solver to solve it. Another contribution of this dissertation is to improve the translation of the bounded semantics of ECTL into propositional formulas. By realizing the possibility of path reuse, i.e., that a state may build its own witness by reusing its successor's, we may generate a significantly smaller formula, which is often easier for a SAT solver to answer, and thus boost the performance of bounded model checking.
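
    The dissertation stores, for every state, the minimum witness size of the ECTL formula in an EV+MDD and reads a minimum witness off that fixed point. As a deliberately explicit, non-symbolic analogue for a single operator, EF p: the minimum witness size at a state is the length of a shortest path to a p-state, computable by a backward breadth-first fixed point. All names below are illustrative, not the dissertation's symbolic machinery.

```python
# Explicit analogue of the minimum-witness fixed point, for EF p only.
from collections import deque

def min_ef_witness_size(states, succ, holds_p):
    """dist[s] = minimum number of states on a witness path for (EF p) from s,
    or infinity if no witness exists. Computed by backward BFS from p-states."""
    pred = {s: [] for s in states}
    for s in states:
        for t in succ[s]:
            pred[t].append(s)
    INF = float("inf")
    dist = {s: (1 if holds_p(s) else INF) for s in states}
    frontier = deque(s for s in states if holds_p(s))
    while frontier:
        t = frontier.popleft()
        for s in pred[t]:
            if dist[s] == INF:            # first time reached => shortest
                dist[s] = dist[t] + 1
                frontier.append(s)
    return dist

def build_min_witness(s, succ, dist):
    """Extract a minimum witness by greedily decreasing dist toward a p-state."""
    assert dist[s] != float("inf"), "no EF-witness from s"
    path = [s]
    while dist[s] > 1:
        s = min(succ[s], key=lambda t: dist[t])
        path.append(s)
    return path
```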

    BDD-Based Algorithm for SCC Decomposition of Edge-Coloured Graphs

    Edge-coloured directed graphs provide an essential structure for modelling and analysis of complex systems arising in many scientific disciplines (e.g. feature-oriented systems, gene regulatory networks, etc.). One of the fundamental problems for edge-coloured graphs is the detection of strongly connected components, or SCCs. The size of edge-coloured graphs appearing in practice can be enormous both in the number of vertices and colours. The large number of vertices prevents us from analysing such graphs using explicit SCC detection algorithms, such as Tarjan's, which motivates the use of a symbolic approach. However, the large number of colours also renders existing symbolic SCC detection algorithms impractical. This paper proposes a novel algorithm that symbolically computes all the monochromatic strongly connected components of an edge-coloured graph. In the worst case, the algorithm performs $O(p \cdot n \cdot \log n)$ symbolic steps, where $p$ is the number of colours and $n$ is the number of vertices. We evaluate the algorithm using an experimental implementation based on binary decision diagrams (BDDs). Specifically, we use our implementation to explore the SCCs of a large collection of coloured graphs (up to $2^{48}$) obtained from Boolean networks -- a modelling framework commonly appearing in systems biology.
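
    For orientation, symbolic SCC algorithms of this kind build on the classic forward/backward decomposition scheme (in the style of Xie and Beerel): intersecting the forward and backward reachable sets of a pivot yields its SCC, and the residues are processed recursively. The sketch below uses Python sets where a real implementation would use BDDs, so post and pre stand in for single symbolic image and preimage operations; the paper's actual contribution, keeping the per-colour work within the $O(p \cdot n \cdot \log n)$ bound, is not reproduced here.

```python
def post(S, E):
    """Image of S: one step forward (a single symbolic operation on BDDs)."""
    return {t for (s, t) in E if s in S}

def pre(S, E):
    """Preimage of S: one step backward."""
    return {s for (s, t) in E if t in S}

def reachable(S, step, E):
    """Least fixed point: all states reachable from S via repeated `step`."""
    R, frontier = set(S), set(S)
    while frontier:
        frontier = step(frontier, E) - R
        R |= frontier
    return R

def sccs(V, E):
    """Forward/backward decomposition: the pivot's SCC is fwd & bwd; every
    other SCC lies entirely inside fwd - scc or entirely outside fwd."""
    if not V:
        return
    E = {(s, t) for (s, t) in E if s in V and t in V}
    pivot = next(iter(V))
    fwd = reachable({pivot}, post, E)
    bwd = reachable({pivot}, pre, E)
    scc = fwd & bwd
    yield scc
    yield from sccs(fwd - scc, E)
    yield from sccs(V - fwd, E)

# Toy usage: list(sccs({1, 2, 3}, {(1, 2), (2, 1), (2, 3)})) -> {1, 2}, {3}
```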

    Algorithms to Compute Characteristic Classes

    In this thesis we develop several new algorithms to compute characteristic classes in a variety of settings. In addition to algorithms for the computation of the Euler characteristic, a classical topological invariant, we also give algorithms to compute the Segre class and Chern-Schwartz-MacPherson (CSM) class. These invariants can in turn be used to compute other common invariants such as the Chern-Fulton class (or the Chern class in smooth cases). We begin with subschemes of a projective space over an algebraically closed field of characteristic zero. In this setting we give effective algorithms to compute the CSM class, Segre class and the Euler characteristic. The algorithms can be implemented using either symbolic or numerical methods. The algorithms are based on a new method for calculating the projective degrees of a rational map defined by a homogeneous ideal. Running time bounds are given for these algorithms, and the algorithms are found to perform favourably compared to other applicable algorithms. Relations between our algorithms and other existing algorithms are explored. In the special case of a complete intersection subscheme we develop a second algorithm to compute CSM classes and Euler characteristics in a more direct and efficient manner. Each of these algorithms is generalized to subschemes of a product of projective spaces. Running time bounds for the generalized algorithms to compute the CSM class, Segre class and the Euler characteristic are given. Our Segre class algorithm is tested in comparison to another applicable algorithm and is found to perform favourably. To the best of our knowledge there are no other algorithms in the literature which compute the CSM class and Euler characteristic in the multi-projective setting. For complete simplicial toric varieties defined by a fan we give a strictly combinatorial algorithm to compute the CSM class and Euler characteristic, and a second combinatorial algorithm with reduced running time to compute only the Euler characteristic. We also prove several Bézout-type bounds in multi-projective space. An application of these bounds to obtain a sharper degree bound on a certain system with a natural bi-projective structure is demonstrated.
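
    The fan-based algorithm itself is not spelled out in the abstract; a classical combinatorial fact of the kind such a computation can exploit (Ehlers' formula, also treated by Barthel, Brasselet, and Fieseler, and not necessarily the thesis's own route) expresses the CSM class of a complete toric variety as the sum of its orbit-closure classes, whose degree-zero part counts the maximal cones:

```latex
% For a complete toric variety X_\Sigma with fan \Sigma in a rank-n lattice:
c_{\mathrm{SM}}(X_\Sigma) \;=\; \sum_{\sigma \in \Sigma} \bigl[ V(\sigma) \bigr],
\qquad
\chi(X_\Sigma) \;=\; \#\,\Sigma(n),
% where \Sigma(n) denotes the set of n-dimensional (maximal) cones,
% since each torus orbit of positive dimension has Euler characteristic zero.
```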

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Incremental Dead State Detection in Logarithmic Time

    Identifying live and dead states in an abstract transition system is a recurring problem in formal verification; for example, it arises in our recent work on efficiently deciding regex constraints in SMT. However, state-of-the-art graph algorithms for maintaining reachability information incrementally (that is, as states are visited and before the entire state space is explored) assume that new edges can be added from any state at any time, whereas in many applications, outgoing edges are added from each state as it is explored. To formalize the latter situation, we propose guided incremental digraphs (GIDs), incremental graphs which support labeling closed states (states which will not receive further outgoing edges). Our main result is that dead state detection in GIDs is solvable in $O(\log m)$ amortized time per edge for $m$ edges, improving upon $O(\sqrt{m})$ per edge due to Bender, Fineman, Gilbert, and Tarjan (BFGT) for general incremental directed graphs. We introduce two algorithms for GIDs: one establishing the logarithmic time bound, and a second algorithm to explore a lazy heuristics-based approach. To enable an apples-to-apples experimental comparison, we implemented both algorithms, two simpler baselines, and the state-of-the-art BFGT baseline using a common directed graph interface in Rust. Our evaluation shows 110-530x speedups over BFGT for the largest input graphs over a range of graph classes, random graphs, and graphs arising from regex benchmarks.
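
    To make the object concrete, here is a naive reference semantics for a GID, under the assumption (hedged; the paper's exact interface may differ) that some states are marked live, e.g. accepting regex states, and that a state counts as dead once neither a live state nor an open state, which could still acquire outgoing edges, is reachable from it. Recomputing this from scratch costs O(m) per query; it is this kind of baseline that the amortized O(log m) algorithm improves on.

```python
class NaiveGID:
    """Naive reference semantics for a guided incremental digraph (GID)."""

    def __init__(self):
        self.succ = {}        # state -> set of successor states
        self.open = set()     # states that may still receive outgoing edges
        self.live = set()     # e.g. accepting states in the regex application

    def add_state(self, v, live=False):
        self.succ.setdefault(v, set())
        self.open.add(v)
        if live:
            self.live.add(v)

    def add_edge(self, u, v):
        assert u in self.open, "closed states receive no further outgoing edges"
        if v not in self.succ:
            self.add_state(v)
        self.succ[u].add(v)

    def close(self, v):
        self.open.discard(v)  # v is now fully explored

    def is_dead(self, v):
        """Dead iff neither a live state nor an open state is reachable:
        an O(m) scratch recomputation, the baseline the paper improves on."""
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            if u in self.live or u in self.open:
                return False
            for w in self.succ[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return True
```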

    Hardware optimizations of dense binary hyperdimensional computing: Rematerialization of hypervectors, binarized bundling, and combinational associative memory

    Brain-inspired hyperdimensional (HD) computing models neural activity patterns of the very size of the brain's circuits with points of a hyperdimensional space, that is, with hypervectors. Hypervectors are D-dimensional (pseudo)random vectors with independent and identically distributed (i.i.d.) components constituting ultra-wide holographic words: D = 10,000 bits, for instance. At its very core, HD computing manipulates a set of seed hypervectors to build composite hypervectors representing objects of interest. It demands memory optimizations with simple operations for an efficient hardware realization. In this article, we propose hardware techniques for optimizations of HD computing, in a synthesizable open-source VHDL library, to enable co-located implementation of both learning and classification tasks on only a small portion of Xilinx UltraScale FPGAs: (1) We propose simple logical operations to rematerialize the hypervectors on the fly rather than loading them from memory. These operations massively reduce the memory footprint by directly computing the composite hypervectors whose individual seed hypervectors do not need to be stored in memory. (2) Bundling a series of hypervectors over time requires a multibit counter per hypervector component. We instead propose a binarized back-to-back bundling that requires no counters. This truly enables on-chip learning with minimal resources, as every hypervector component remains binary over the course of training, avoiding otherwise multibit components. (3) For every classification event, an associative memory is in charge of finding the closest match between a set of learned hypervectors and a query hypervector by using a distance metric. The cost of this operation is proportional to the hypervector dimension (D), and hence may take O(D) cycles per classification event. Accordingly, we significantly improve the throughput of classification by proposing associative memories that steadily reduce the latency of classification to the extreme of a single cycle. (4) We perform a design space exploration incorporating the proposed techniques on FPGAs for a wearable biosignal processing application as a case study. Our techniques achieve up to 2.39× area savings, or 2,337× throughput improvement. The Pareto-optimal HD architecture is mapped on only 18,340 configurable logic blocks (CLBs) to learn and classify five hand gestures using four electromyography sensors.
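
    For readers new to HD computing, the sketch below shows, in plain numpy, the software-level operations the article accelerates in hardware: i.i.d. random seed hypervectors, XOR binding, majority-vote bundling, and a Hamming-distance associative memory. It is a minimal functional model under those assumptions, not the article's VHDL library, and all names are illustrative.

```python
import numpy as np

D = 10_000                                    # hypervector dimensionality
rng = np.random.default_rng(0)

def seed_hv():
    """An i.i.d. (pseudo)random dense binary hypervector."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Binding: component-wise XOR of two hypervectors."""
    return a ^ b

def bundle(hvs):
    """Bundling: component-wise strict majority vote over a list of hypervectors."""
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def classify(query, prototypes):
    """Associative memory: label of the learned prototype with the smallest
    Hamming distance to the query hypervector."""
    return min(prototypes,
               key=lambda label: int(np.count_nonzero(prototypes[label] ^ query)))

# Toy usage: a bundled prototype stays close to its constituents
# (expected Hamming distance ~D/4, versus ~D/2 to an unrelated hypervector).
a, b, c = seed_hv(), seed_hv(), seed_hv()
mem = {"gesture_A": bundle([a, b, c]), "gesture_B": seed_hv()}
print(classify(a, mem))                       # almost surely 'gesture_A'
```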