
    Polynomial-Time Verification and Testing of Implementations of the Snapshot Data Structure

    We analyze the correctness of snapshot data structure implementations in terms of linearizability. We show that such implementations can be verified in polynomial time. Additionally, we identify a set of representative executions for testing and show that the correctness of each of these executions can be validated in linear time. These results represent a significant speedup, given that verifying linearizability of concurrent data structure implementations is, in general, EXPSPACE-complete in the number of program states, and testing linearizability is NP-complete in the length of the tested execution. The crux of our approach is identifying a class of executions, which we call simple, such that a snapshot implementation is linearizable if and only if all of its simple executions are linearizable. We then divide all possible non-linearizable simple executions into three categories and construct a small automaton that recognizes each category. We describe two implementations (one for verification and one for testing) of the resulting automata-based approach and an evaluation that demonstrates significant improvements over existing tools. For verification, we show that restricting a state-of-the-art tool to analyzing only simple executions saves resources and allows the analysis of more complex cases: restricting attention to simple executions finds bugs in 27 of the 30 instances we examined, whereas without this restriction we were only able to find 14. We also show that our technique accelerates testing performance significantly: our implementation solves the complete set of 900 problems we generated, whereas the state-of-the-art linearizability testing tool solves only 554.
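
    As a rough illustration of the automata-based idea, the sketch below is our own hypothetical example; the event names, the `violates` helper, and the encoded pattern are not taken from the paper. A violation category is encoded as a small DFA over trace events, so validating one execution against it takes a single pass, linear in the trace length. The pattern sketched here is a new-old-inversion-style anomaly, in which a later scan misses an update that an earlier scan already observed.

```python
# Minimal sketch of the automata-based check (hypothetical event names
# and violation pattern; not the paper's actual automata).

def violates(trace, start, step, accepting):
    """Run a DFA over the trace; reaching an accepting state means the
    execution matches one non-linearizability category."""
    state = start
    for event in trace:
        state = step(state, event)
        if state in accepting:
            return True
    return False

def step(state, event):
    # New-old inversion: a later scan misses an update that an
    # earlier scan already observed.
    if state == "init" and event == "scan_sees_update":
        return "seen"
    if state == "seen" and event == "later_scan_misses_update":
        return "violation"
    return state

print(violates(["scan_sees_update", "later_scan_misses_update"],
               "init", step, {"violation"}))  # True: not linearizable
print(violates(["scan_sees_update", "later_scan_sees_update"],
               "init", step, {"violation"}))  # False
```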

    Affine Disjunctive Invariant Generation with Farkas' Lemma

    Invariant generation is the classical problem of automatically generating assertions that over-approximate the set of reachable program states. We consider the problem of generating affine invariants over affine while loops (i.e., loops with affine loop guards, conditional branches, and assignment statements), and explore the automated generation of disjunctive affine invariants. Disjunctive invariants are an important class of invariants that capture disjunctive features in programs, such as multiple phases or transitions between different modes, and are typically more precise than conjunctive invariants over programs with these features. To generate tight affine invariants, existing constraint-solving approaches have applied Farkas' Lemma to conjunctive affine invariant generation, but none of them considers disjunctive affine invariants.
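
    For context, the standard affine form of Farkas' Lemma that such constraint-solving approaches build on (a textbook statement, not quoted from the paper) reads as follows.

```latex
% Farkas' Lemma, affine form: assume the polyhedron
% P = { x : A x <= b } is non-empty. Then the affine inequality
% c^T x <= d holds for every x in P if and only if it is a
% non-negative combination of the constraints defining P:
\[
\forall x.\, \bigl(A x \le b \implies c^{\mathsf T} x \le d\bigr)
\;\iff\;
\exists \lambda \ge 0.\;
  \lambda^{\mathsf T} A = c^{\mathsf T}
  \;\wedge\;
  \lambda^{\mathsf T} b \le d .
\]
```

    In invariant generation, the unknown invariant coefficients play the role of c and d, and the equivalence turns a universally quantified implication over program states into an existentially quantified constraint system that a solver can discharge.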

    Mailbox Abstractions for Static Analysis of Actor Programs

    Properties such as the absence of errors or bounds on mailbox sizes are hard to deduce statically for actor-based programs. This is because actor-based programs exhibit several sources of unboundedness, in addition to the non-determinism that is inherent to the concurrent execution of actors. We developed a static technique based on abstract interpretation to soundly reason, in a finite amount of time, about the possible executions of an actor-based program. We use our technique to statically verify the absence of errors in actor-based programs, and to compute upper bounds on the sizes of the actors' mailboxes. Sound abstraction of these mailboxes is crucial to the precision of any such technique. We provide several mailbox abstractions and categorize them according to the extent to which they preserve message ordering and the multiplicity of messages in a mailbox. We formally prove the soundness of each mailbox abstraction and empirically evaluate their precision and performance trade-offs on a corpus of benchmark programs. The results show that our technique can statically verify the absence of errors for more benchmark programs than the state-of-the-art analysis.
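
    As a rough sketch of this design space (hypothetical class names and API, not the paper's implementation), the two abstractions below differ in what they preserve: a set abstraction keeps neither ordering nor multiplicity, while a bounded-sequence abstraction keeps ordering for up to k pending messages and falls back to imprecision on overflow.

```python
class SetMailbox:
    """Abstracts a mailbox as the set of messages it may contain:
    ordering and multiplicity are both lost."""
    def __init__(self, msgs=frozenset()):
        self.msgs = msgs

    def enqueue(self, m):
        return SetMailbox(self.msgs | {m})

    def dequeue(self):
        # Any contained message may be delivered next; since counts
        # are abstracted away, the message also stays in the mailbox.
        return {(m, self) for m in self.msgs}

class BoundedListMailbox:
    """Keeps the first k pending messages in order; enqueueing past
    the bound sets an overflow flag, losing precision but staying
    sound in this sketch's intended reading."""
    def __init__(self, k, msgs=(), overflow=False):
        self.k, self.msgs, self.overflow = k, msgs, overflow

    def enqueue(self, m):
        if len(self.msgs) < self.k:
            return BoundedListMailbox(self.k, self.msgs + (m,),
                                      self.overflow)
        return BoundedListMailbox(self.k, self.msgs, True)

    def dequeue(self):
        if self.msgs:
            head, rest = self.msgs[0], self.msgs[1:]
            return {(head,
                     BoundedListMailbox(self.k, rest, self.overflow))}
        return set()
```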

    Partial Model Checking and Partial Model Synthesis in LTL Using a Tableau-Based Approach

    In the process of designing a computer system S and checking whether an abstract model M of S satisfies a given specification property φ, one might have only partial knowledge of the model, either because M has not yet been completely defined (constructed) by the designer, or because it is not completely observable by the verifier. This leads to new verification problems, subsuming satisfiability and model checking as special cases. We state and discuss these problems in the case of LTL specifications, and develop a uniform tableau-based approach for their solution.
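
    One natural way to phrase the resulting problems (our formalization, an assumption rather than a quotation from the paper) is over the set Comp(M_p) of full models that complete a partial model M_p:

```latex
\[
\textit{partial model synthesis:}\quad
  \exists M \in \mathit{Comp}(M_p).\; M \models \varphi
\qquad
\textit{partial model checking:}\quad
  \forall M \in \mathit{Comp}(M_p).\; M \models \varphi
\]
```

    If M_p is already fully defined, both questions collapse to ordinary model checking; if M_p is completely unconstrained, the existential question collapses to LTL satisfiability, which is how the new problems subsume the two classical ones.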

    The WCET Tool Challenge 2011

    Following the successful WCET Tool Challenges in 2006 and 2008, the third event in this series was organized in 2011, again with support from the ARTIST DESIGN Network of Excellence. Following the practice established in the previous Challenges, the WCET Tool Challenge 2011 (WCC'11) defined two kinds of problems to be solved by the Challenge participants with their tools: WCET problems, which ask for bounds on the execution time, and flow-analysis problems, which ask for bounds on the number of times certain parts of the code can be executed. The benchmarks used in WCC'11 were debie1, PapaBench, and an industrial-strength application from the automotive domain provided by Daimler AG. Two default execution platforms were suggested to the participants, the ARM7 as a "simple target" and the MPC5553/5554 as a "complex target," but participants were free to use other platforms as well. Ten tools participated in WCC'11: aiT, Astrée, Bound-T, FORTAS, METAMOC, OTAWA, SWEET, TimeWeaver, TuBound, and WCA.

    Modular Abstract Definitional Interpreters for WebAssembly

    Even though static analyses can improve performance and secure programs against vulnerabilities, no static whole-program analyses exist for WebAssembly (Wasm) to date. Part of the reason is that Wasm has many complex language concerns, and it is not obvious how to adapt existing analysis frameworks to these features. This paper explores how abstract definitional interpretation can be used to develop sophisticated analyses for Wasm and other complex languages efficiently. In particular, we show that the semantics of Wasm can be decomposed into 19 language-independent components that abstract different aspects of Wasm. We have written a highly configurable definitional interpreter for full Wasm 1.0 in 1628 LOC against these components. Analysis developers can instantiate this interpreter with different value and effect abstractions to obtain abstract definitional interpreters that compute inter-procedural control- and data-flow information. This way, we develop the first whole-program dead code, constant propagation, and taint analyses for Wasm, each in less than 210 LOC. We evaluate our analyses on 1458 Wasm binaries collected by others in the wild. Our implementation is based on a novel framework for definitional abstract interpretation in Scala that eliminates scalability issues of prior work.
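
    To make the instantiation idea concrete, here is a deliberately tiny sketch of our own (the paper's framework is a Scala implementation over 19 components; none of the names below come from it): one definitional interpreter over a two-instruction stack language, instantiated with either a concrete domain or a constant-propagation domain by swapping the value abstraction.

```python
# Sketch of an abstract definitional interpreter with a pluggable
# value abstraction (illustrative only).

class Concrete:
    def const(self, n): return n
    def add(self, a, b): return (a + b) & 0xFFFFFFFF  # i32 wraparound

TOP = "unknown"  # constant-propagation lattice top

class ConstProp:
    def const(self, n): return n
    def add(self, a, b):
        return TOP if TOP in (a, b) else (a + b) & 0xFFFFFFFF

def interpret(instrs, dom):
    """Evaluate a straight-line stack program over the given domain."""
    stack = []
    for op, *args in instrs:
        if op == "i32.const":
            stack.append(dom.const(args[0]))
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append(dom.add(a, b))
    return stack

prog = [("i32.const", 2), ("i32.const", 3), ("i32.add",)]
print(interpret(prog, Concrete()))   # [5]: concrete execution
print(interpret(prog, ConstProp()))  # [5]: folded at analysis time
```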

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented, together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers are organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems and runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Scalable Context-Sensitive Pointer Analysis for LLVM

    Pointer analysis is indispensable for effectively verifying heap-manipulating programs. Even though it has been studied extensively, there are no publicly available pointer analyses for low-level languages that are moderately precise while scalable to large real-world programs. In this thesis, we show that existing context-sensitive unification-based pointer analyses suffer from the problem of oversharing: propagating too many abstract objects across the analysis of different procedures, which prevents them from scaling to large programs. We present a new pointer analysis for LLVM, called TeaDsa, in which such oversharing is significantly reduced. We show how to further improve the precision and speed of TeaDsa with extra contextual information, such as flow-sensitivity at call and return sites, and type information about memory accesses. We evaluate TeaDsa on the verification problem of detecting unsafe memory accesses and compare it against two state-of-the-art pointer analyses: SVF and SeaDsa. We show that TeaDsa is one order of magnitude faster than either SVF or SeaDsa, strictly more precise than SeaDsa, and, surprisingly, sometimes more precise than SVF.
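
    For background, the sketch below shows the classic unification-based (Steensgaard-style) scheme that such analyses build on; it is a generic illustration, not TeaDsa's algorithm. Assignments unify equivalence classes via union-find, which is nearly linear-time but can merge, i.e., overshare, abstract objects that a more precise analysis would keep apart.

```python
# Steensgaard-style unification via union-find (generic sketch).

class Node:
    def __init__(self, name):
        self.name, self.parent, self.pointee = name, self, None

def find(n):
    while n.parent is not n:
        n.parent = n.parent.parent  # path compression
        n = n.parent
    return n

def union(a, b):
    ra, rb = find(a), find(b)
    if ra is rb:
        return ra
    rb.parent = ra
    # Unify pointees recursively so the classes stay consistent.
    if ra.pointee and rb.pointee:
        union(ra.pointee, rb.pointee)
    elif rb.pointee:
        ra.pointee = rb.pointee
    return ra

nodes = {v: Node(v) for v in ("p", "q", "a", "b")}
nodes["p"].pointee = nodes["a"]   # p = &a
nodes["q"].pointee = nodes["b"]   # q = &b
union(nodes["p"], nodes["q"])     # p = q  =>  {a, b} get merged
print(find(nodes["a"]) is find(nodes["b"]))  # True: a and b now share
```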