
    LIPIcs

    Synchronous programs are easy to specify because the side effects of an operation are finished by the time the invocation of the operation returns to the caller. Asynchronous programs, on the other hand, are difficult to specify because there are side effects due to pending computation scheduled as a result of the invocation of an operation. They are also difficult to verify because of the large number of possible interleavings of concurrent computation threads. We present synchronization, a new proof rule that simplifies the verification of asynchronous programs by introducing the fiction, for proof purposes, that asynchronous operations complete synchronously. Synchronization summarizes an asynchronous computation as an immediate atomic effect. Modular verification is enabled via pending asynchronous calls in atomic summaries, and a complementary proof rule that eliminates pending asynchronous calls when components and their specifications are composed. We evaluate synchronization in the context of a multi-layer refinement verification methodology on a collection of benchmark programs.
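
    To make the synchronous fiction concrete, here is a minimal Python sketch (not CIVL or Boogie syntax; async_add and add_summary are invented names): an asynchronous increment leaves a pending task behind, its atomic summary applies the same effect immediately, and because increments commute, every way of draining the pending queue agrees with the summary.

    # Illustrative sketch only: an asynchronous increment whose pending effect
    # is summarized as an immediate atomic action.
    from itertools import permutations

    state = {"x": 0}
    pending = []            # tasks scheduled asynchronously, not yet executed

    def async_add(n):
        # the caller returns immediately; the side effect is still pending
        pending.append(lambda: state.update(x=state["x"] + n))

    def add_summary(n):
        # "synchronized" summary: the same effect, applied atomically now
        state["x"] += n

    # Because increments commute, every order of draining the pending queue
    # reaches exactly the state predicted by the atomic summaries.
    async_add(1); async_add(2)
    outcomes = set()
    for order in permutations(pending):
        state["x"] = 0
        for task in order:
            task()
        outcomes.add(state["x"])

    state["x"] = 0
    add_summary(1); add_summary(2)
    assert outcomes == {state["x"]}   # the synchronous fiction is faithful here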

    On the Completeness of Verifying Message Passing Programs Under Bounded Asynchrony

    We address the problem of verifying message passing programs, defined as a set of processes communicating through unbounded FIFO buffers. We introduce a bounded analysis that explores a special type of computations, called k-synchronous. These computations can be viewed as (unbounded) sequences of interaction phases, each phase allowing at most k send actions (by different processes), followed by a sequence of receives corresponding to sends in the same phase. We give a procedure for deciding k-synchronizability of a program, i.e., whether every computation is equivalent (has the same happens-before relation) to one of its k-synchronous computations. We show that reachability over k-synchronous computations and checking k-synchronizability are both PSPACE-complete.
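
    As an informal illustration of the phase structure described above (an assumed event-tuple encoding, not the paper's formal model, and ignoring the FIFO-buffer semantics), the Python sketch below checks that each phase of a candidate decomposition has at most k sends by distinct processes, listed before the receives, and that every receive matches a message sent in the same phase.

    # Illustrative sketch only. Events are tuples (kind, sender, receiver, msg).
    def is_k_synchronous(phases, k):
        for phase in phases:
            sends = [e for e in phase if e[0] == "send"]
            recvs = [e for e in phase if e[0] == "recv"]
            senders = {src for (_, src, _dst, _msg) in sends}
            # at most k sends per phase, by different processes, listed first
            if len(sends) > k or len(senders) != len(sends) or phase[:len(sends)] != sends:
                return False
            sent_msgs = {msg for (_, _src, _dst, msg) in sends}
            # every receive matches a message sent within the same phase
            if any(msg not in sent_msgs for (_, _src, _dst, msg) in recvs):
                return False
        return True

    # A 2-synchronous exchange: two sends by different processes, then their receives.
    trace = [[("send", "p", "q", "m1"), ("send", "q", "p", "m2"),
              ("recv", "p", "q", "m1"), ("recv", "q", "p", "m2")]]
    assert is_k_synchronous(trace, k=2)
    assert not is_k_synchronous(trace, k=1)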

    Verifying distributed programs via canonical sequentialization

    We introduce canonical sequentialization, a new approach to verifying unbounded, asynchronous, message-passing programs at compile-time. Our approach builds upon the following observation: due to the combinatorial explosion in complexity, programmers do not reason about their systems by case-splitting over all the possible execution orders. Instead, correct programs tend to be well-structured so that the programmer can reason about a small number of representative executions, which we call the program’s canonical sequentialization. We have implemented our approach in a tool called Brisk that synthesizes canonical sequentializations for programs written in Haskell, and evaluated it on a wide variety of distributed systems including benchmarks from the literature and implementations of MapReduce, two-phase commit, and a version of the Disco distributed file-system. We show that unlike model checking, which gets prohibitively slow with just 10 processes, Brisk verifies the unbounded versions of the benchmarks in tens of milliseconds, yielding the first concurrency verification tool that is fast enough to be integrated into a design-implement-check cycle.
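
    The following Python sketch (a hypothetical scatter/gather round, not Brisk's Haskell input) illustrates the intuition: however the workers are scheduled, the round is equivalent to one representative execution in which each send is immediately followed by its matching receive, i.e., a simple sequential loop.

    # Illustrative sketch only: a master scatters work to n workers and gathers replies.
    import random

    def concurrent_round(inputs):
        # asynchronous version: messages travel through per-process mailboxes,
        # and workers run in an arbitrary order
        to_worker = {w: x for w, x in enumerate(inputs)}    # master scatters
        to_master = {}
        for w in random.sample(range(len(inputs)), len(inputs)):
            to_master[w] = to_worker[w] * 2                 # each worker replies
        return [to_master[w] for w in range(len(inputs))]   # master gathers

    def canonical_sequentialization(inputs):
        # representative execution: each send is immediately matched by its
        # receive, so the whole round collapses to a sequential loop
        return [x * 2 for x in inputs]

    inputs = [3, 1, 4]
    assert concurrent_round(inputs) == canonical_sequentialization(inputs)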

    IST Austria Thesis

    Get PDF
    Designing and verifying concurrent programs is a notoriously challenging, time-consuming, and error-prone task, even for experts. This is due to the sheer number of possible interleavings of a concurrent program, all of which have to be tracked and accounted for in a formal proof. Inventing an inductive invariant that captures all interleavings of a low-level implementation is theoretically possible, but practically intractable. We develop a refinement-based verification framework that provides mechanisms to simplify proof construction by decomposing the verification task into smaller subtasks.

    In a first line of work, we present a foundation for refinement reasoning over structured concurrent programs. We introduce layered concurrent programs as a compact notation to represent multi-layer refinement proofs. A layered concurrent program specifies a sequence of connected concurrent programs, from most concrete to most abstract, such that common parts of different programs are written exactly once. Each program in this sequence is expressed as a structured concurrent program, i.e., a program over (potentially recursive) procedures, imperative control flow, gated atomic actions, structured parallelism, and asynchronous concurrency. This is in contrast to existing refinement-based verifiers, which represent concurrent systems as flat transition relations. We present a powerful refinement proof rule that decomposes refinement checking over structured programs into modular verification conditions. Refinement checking is supported by a new form of modular, parameterized invariants, called yield invariants, and a linear permission system to enhance local reasoning.

    In a second line of work, we present two new reduction-based program transformations that target asynchronous programs. These transformations reduce the number of interleavings that need to be considered, thus reducing the complexity of invariants. Synchronization simplifies the verification of asynchronous programs by introducing the fiction, for proof purposes, that asynchronous operations complete synchronously. Synchronization summarizes an asynchronous computation as an immediate atomic effect. Inductive sequentialization establishes sequential reductions that capture every behavior of the original program up to reordering of coarse-grained commutative actions. A sequential reduction of a concurrent program is easy to reason about since it corresponds to a simple execution of the program in an idealized synchronous environment, where processes act in a fixed order and at the same speed.

    Our approach is implemented in the CIVL verifier, which has been successfully used for the verification of several complex concurrent programs. In our methodology, the overall correctness of a program is established piecemeal by focusing on the invariant required for each refinement step separately. While the programmer does the creative work of specifying the chain of programs and the inductive invariant justifying each link in the chain, the tool automatically constructs the verification conditions underlying each refinement step.
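
    As a rough illustration of the layered view (plain Python, not the Boogie-based input language of CIVL), the sketch below shows the same counter at two layers: a lower layer where an increment is a sequence of fine-grained steps under a lock, and a higher layer where it is summarized as a single atomic action. A refinement proof would justify replacing the former with the latter.

    # Illustrative sketch only: two layers of the same counter.
    import threading

    class ConcreteCounter:
        # lower layer: the increment is a sequence of fine-grained steps
        def __init__(self):
            self.lock = threading.Lock()
            self.x = 0
        def incr(self):
            with self.lock:          # acquire; read; write; release
                tmp = self.x
                self.x = tmp + 1

    class AbstractCounter:
        # higher layer: the whole increment is one atomic action
        def __init__(self):
            self.x = 0
        def incr(self):
            self.x += 1              # summary: happens in a single atomic step

    # A refinement proof would show that every interleaving of ConcreteCounter.incr
    # is simulated by AbstractCounter.incr; here we only exercise one schedule.
    c, a = ConcreteCounter(), AbstractCounter()
    threads = [threading.Thread(target=c.incr) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    for _ in range(4): a.incr()
    assert c.x == a.x == 4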