
    Using Flow Specifications of Parameterized Cache Coherence Protocols for Verifying Deadlock Freedom

    We consider the problem of verifying deadlock freedom for symmetric cache coherence protocols. In particular, we focus on a specific form of deadlock which is useful for the cache coherence protocol domain and consistent with the internal definition of deadlock in the Murphi model checker: we refer to this deadlock as a system-wide deadlock (s-deadlock). In s-deadlock, the entire system gets blocked and is unable to make any transition. Cache coherence protocols consist of N symmetric cache agents, where N is an unbounded parameter; thus the verification of s-deadlock freedom is naturally a parameterized verification problem. Parameterized verification techniques work by using sound abstractions to reduce the unbounded model to a bounded model. Efficient abstractions which work well for industrial-scale protocols typically bound the model by replacing the state of most of the agents with an abstract environment, while keeping just one or two agents as is. However, leveraging such efficient abstractions becomes a challenge for s-deadlock: a violation of s-deadlock is a state in which no transition of any of the unbounded number of agents can occur, so a simple abstraction like the one above will not preserve the violation. In this work we address this challenge by presenting a technique which leverages high-level information about the protocols, in the form of message sequence diagrams referred to as flows, to construct invariants that are collectively stronger than s-deadlock. Efficient abstractions can be constructed to verify these invariants. We successfully verify the German and Flash protocols using our technique.
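
    Operationally, the s-deadlock condition is easy to check on any finite instance: search for a reachable state in which no guard of any rule of any agent holds. Below is a minimal explicit-state sketch of that check in Python, assuming a hypothetical Transition record with guard and apply callables; it enumerates one finite instance and is emphatically not the paper's flow-based technique, which exists precisely because such enumeration cannot cover unbounded N.

```python
from collections import deque, namedtuple

# Hypothetical transition record: guard(state) -> bool, apply(state) -> state
Transition = namedtuple("Transition", ["guard", "apply"])

def find_s_deadlock(initial_state, transitions):
    """Breadth-first search for an s-deadlock state; returns it, or None.

    An s-deadlock state is one in which no transition of any agent is
    enabled (Murphi's internal notion of deadlock). States must be
    hashable, e.g. tuples.
    """
    seen, frontier = {initial_state}, deque([initial_state])
    while frontier:
        s = frontier.popleft()
        enabled = [t for t in transitions if t.guard(s)]
        if not enabled:          # the entire system is blocked
            return s
        for t in enabled:
            s2 = t.apply(s)
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return None                  # this finite instance is s-deadlock free
```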

    Memory consistency directed cache coherence protocols for scalable multiprocessors

    The memory consistency model, which formally specifies the behavior of the memory system, is used by programmers to reason about parallel programs. From a hardware design perspective, weaker consistency models permit various optimizations in a multiprocessor system: this thesis focuses on designing and optimizing the cache coherence protocol for a given target memory consistency model. Traditional directory coherence protocols are designed to be compatible with the strictest memory consistency model, sequential consistency (SC). When they are used for chip multiprocessors (CMPs) that provide more relaxed memory consistency models, such protocols turn out to be unnecessarily strict. Usually, this strictness comes at the cost of scalability in terms of per-core storage for sharer tracking, which poses a problem with the increasing number of cores in today's CMPs, most of which are no longer sequentially consistent. The recent convergence towards programming-language-based relaxed memory consistency models has sparked renewed interest in lazy cache coherence protocols. These protocols exploit synchronization information by enforcing coherence only at synchronization boundaries via self-invalidation. As a result, such protocols do not require sharer tracking, which benefits scalability. On the downside, such protocols are only readily applicable to a restricted set of consistency models, such as Release Consistency (RC), which expose synchronization information explicitly. In particular, existing architectures with stricter consistency models (such as x86) cannot readily make use of lazy coherence protocols without either adapting the protocol to satisfy the stricter consistency model, or changing the architecture's consistency model to (a variant of) RC, typically at the expense of backward compatibility. The first part of this thesis explores both these options, with a focus on a practical approach satisfying backward compatibility.

    Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and SPARC processors, and the existing parallel programs written for these architectures, we first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency model. TSO-CC does not track sharers and instead relies on self-invalidation and detection of potential acquires (in the absence of explicit synchronization) using per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency model. Our results show that TSO-CC achieves, on average, performance comparable to a MESI directory protocol, while TSO-CC's storage overhead per cache line scales logarithmically with increasing core count.

    Next, we propose an approach for the x86-64 architecture which is a compromise between retaining the original consistency model and using a more storage-efficient lazy coherence protocol. First, we propose a mechanism to convey synchronization information via a simple ISA extension, while retaining backward compatibility with legacy codes and older microarchitectures. Second, we propose RC3 (based on TSO-CC), a scalable cache coherence protocol for RCtso, the resulting memory consistency model. RC3 does not track sharers and relies on self-invalidation on acquires. To satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1 timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving performance comparable to a MESI directory protocol for RC-optimized programs. RC3's storage overhead per cache line scales logarithmically with increasing core count and reduces on-chip coherence storage overheads by 45% compared to TSO-CC.

    Finally, it is imperative that hardware adheres to the promised memory consistency model. Consistency-directed coherence protocols cannot be verified against conventional coherence definitions (e.g. SWMR), and few existing verification methodologies apply. Furthermore, as the full consistency model is used as the specification, the protocols' interaction with other components of a system (e.g. the pipeline) must not be neglected in the verification process. Therefore, verifying a system with such protocols in the context of interacting components is even more important than before. One common way to do this is by executing tests, where specific threads of instruction sequences are generated and their executions are checked for adherence to the consistency model. It would be extremely beneficial to execute such tests under simulation, i.e. while the functional design implementation of the hardware is being prototyped. Most prior verification methodologies, however, target post-silicon environments, which when used for simulation-based memory consistency verification would be too slow. We propose McVerSi, a test generation framework for fast memory consistency verification of a full-system design implementation under simulation. Our primary contribution is a Genetic Programming (GP) based approach to memory consistency test generation, which relies on a novel crossover function that prioritizes memory operations contributing to non-determinism, thereby increasing the probability of uncovering memory consistency bugs. To guide tests towards exercising as much logic as possible, the simulator's reported coverage is used as the fitness function. Furthermore, we increase test throughput by making the test workload simulation-aware. We evaluate our proposed framework using the Gem5 cycle-accurate simulator in full-system mode with Ruby (with configurations that use Gem5's MESI protocol, and our proposed TSO-CC together with an out-of-order pipeline). We discover 2 new bugs in the MESI protocol due to the faulty interaction of the pipeline and the cache coherence protocol, highlighting that even conventional protocols should be verified rigorously in the context of a full system. Crucially, these bugs would not have been discovered through individual verification of the pipeline or the coherence protocol. We study 11 bugs in total. Our GP-based test generation approach finds all bugs consistently, therefore providing much higher guarantees compared to alternative approaches (pseudo-random test generation and litmus tests).
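
    As a rough illustration of the self-invalidation idea that lets lazy protocols such as TSO-CC drop sharer tracking entirely, the Python sketch below keeps a timestamp per cache line and re-fetches stale lines after a potential acquire. The class names and the shared.fetch interface are our own simplification, not the thesis's actual protocol state machine.

```python
class CacheLine:
    def __init__(self, data, timestamp):
        self.data = data
        self.timestamp = timestamp    # logical time of the last known write

class L1Cache:
    """Toy lazy-coherence L1: no directory, no sharer tracking."""

    def __init__(self, shared):
        self.lines = {}               # addr -> CacheLine
        self.shared = shared          # stand-in for L2/memory: fetch(addr) -> (data, ts)
        self.acquire_ts = 0           # timestamp of the last potential acquire

    def on_potential_acquire(self, ts):
        # A read that may act as an acquire establishes a synchronization
        # point; lines written before it are candidates for self-invalidation.
        self.acquire_ts = max(self.acquire_ts, ts)

    def read(self, addr):
        line = self.lines.get(addr)
        if line is None or line.timestamp < self.acquire_ts:
            # Self-invalidate stale data and lazily re-fetch: correctness is
            # enforced at the reader, so no sharer list is kept anywhere.
            data, ts = self.shared.fetch(addr)
            line = self.lines[addr] = CacheLine(data, ts)
        return line.data
```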

    Predicate Abstraction with Indexed Predicates

    Predicate abstraction provides a powerful tool for verifying properties of infinite-state systems using a combination of a decision procedure for a subset of first-order logic and symbolic methods originally developed for finite-state model checking. We consider models containing first-order state variables, where the system state includes mutable functions and predicates. Such a model can describe systems containing arbitrarily large memories, buffers, and arrays of identical processes. We describe a form of predicate abstraction that constructs a formula over a set of universally quantified variables to describe invariant properties of the first-order state variables. We provide a formal justification of the soundness of our approach and describe how it has been used to verify several hardware and software designs, including a directory-based cache coherence protocol. (Comment: 27 pages, 4 figures, 1 table; a short version appeared in the International Conference on Verification, Model Checking and Abstract Interpretation (VMCAI'04), LNCS 2937, pages 267--28.)
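
    The abstract describes invariants built from predicates indexed by universally quantified variables. In our own notation (reconstructed from the description above, not the paper's exact formalism), the invariants produced have the following shape, illustrated with a plausible directory-protocol example:

```latex
% Shape of an indexed-predicate invariant (illustrative notation only).
% Each p_i(j) is a predicate over the first-order state, parameterized by
% an index variable j ranging over processes or addresses; chi is the
% boolean combination discovered by the abstraction.
\[
  \mathit{Inv} \;\equiv\; \forall j.\; \chi\bigl(p_1(j), \dots, p_k(j)\bigr)
\]
% A plausible instance for a directory-based protocol, "every process not
% on the sharer list holds the line in state I":
\[
  \forall j.\; \neg\,\mathit{sharer}(j) \;\rightarrow\; \mathit{state}(j) = \mathrm{I}
\]
```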

    Backward Reachability of Array-based Systems by SMT solving: Termination and Invariant Synthesis

    The safety of infinite-state systems can be checked by a backward reachability procedure. For certain classes of systems, it is possible to prove the termination of the procedure and hence conclude the decidability of the safety problem. Although backward reachability is property-directed, it can unnecessarily explore (large) portions of the state space of a system which are not required to verify the safety property under consideration. To avoid this, invariants can be used to dramatically prune the search space; the problem, of course, is how to guess appropriate invariants. In this paper, we present a fully declarative and symbolic approach to the mechanization of backward reachability of infinite-state systems manipulating arrays by Satisfiability Modulo Theories solving. Theories are used to specify the topology and the data manipulated by the system. We identify sufficient conditions on the theories to ensure the termination of backward reachability, and we show the completeness of a method for invariant synthesis (obtained as the dual of backward reachability), again under suitable hypotheses on the theories. We also present a pragmatic approach to interleaving invariant synthesis and backward reachability so that a fix-point for the set of backward reachable states is more easily obtained. Finally, we discuss heuristics that allow us to derive an implementation of the techniques in the model checker MCMT, showing remarkable speed-ups on a significant set of safety problems extracted from a variety of sources. (Comment: accepted for publication in Logical Methods in Computer Science.)
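
    The control structure of backward reachability is compact; the hard work lies in the theory reasoning. Below is a minimal Python sketch of the loop using Z3 as the SMT backend, assuming the caller supplies the formulas init and unsafe and a symbolic preimage operator pre (MCMT itself is far more sophisticated, handling quantified array formulas and their instantiation). Synthesized invariants would prune the search simply by being conjoined onto the frontier at each step.

```python
from z3 import Solver, And, Or, Not, sat, unsat

def entails(phi, psi):
    """phi => psi  iff  phi & ~psi is unsatisfiable."""
    s = Solver()
    s.add(And(phi, Not(psi)))
    return s.check() == unsat

def intersects(phi, psi):
    s = Solver()
    s.add(And(phi, psi))
    return s.check() == sat

def backward_reachability(init, unsafe, pre):
    """Generic backward-reachability loop over symbolic state formulas."""
    reached, frontier = unsafe, unsafe
    while True:
        if intersects(frontier, init):
            return "unsafe"      # some initial state can reach `unsafe`
        frontier = pre(frontier)
        if entails(frontier, reached):
            return "safe"        # fix-point: nothing new is backward reachable
        reached = Or(reached, frontier)
```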

    A general compositional approach to verifying hierarchical cache coherence protocols

    Technical report. Modern chip multiprocessor (CMP) cache coherence protocols are extremely complex and error-prone to design. Modern symbolic methods are unable to provide much leverage for this class of examples. In [1], we presented a method to verify hierarchical and inclusive versions of these protocols using explicit state enumeration tools. We circumvented state explosion by employing a meta-circular assume/guarantee technique in which a designer can model check abstracted versions of the original protocol and claim that the real protocol is correct. The abstractions were justified in the same framework (hence the meta-circular approach). In this paper, we present how our work can be extended to hierarchical non-inclusive protocols, which are inherently much harder to verify, both because they have more corner cases and because higher levels of the protocol hierarchy carry insufficient information to imply the sharing states of cache lines at lower levels. Two methods are proposed. The first requires more manual effort but allows our technique from [1] to be applied unchanged, barring a guard-strengthening expression that is computed based on state residing outside the cluster being abstracted. The second requires less manual effort, can scale to deeper hierarchies of protocol implementations, and uses history variables which are computed much more modularly; this method also relies on the meta-circular definition framework. A non-inclusive protocol that could not be completely model checked even after visiting 1.5 billion states was verified using two model checks of roughly 0.25 billion states each.
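
    As a toy illustration of the guard-strengthening step mentioned above (our own Python simplification; the actual framework is built on Murphi rules), an abstracted rule's guard is conjoined with an invariant about state outside the abstracted cluster, and that invariant is in turn discharged by a separate model check in the same framework, which is what makes the argument meta-circular:

```python
from collections import namedtuple

# Hypothetical rule record: guard(state) -> bool, action(state) -> state
Rule = namedtuple("Rule", ["name", "guard", "action"])

def strengthen(rule, invariant):
    """Conjoin a strengthening predicate onto an abstracted rule's guard.

    `invariant` captures assumed facts about the abstracted cluster; using
    it here is sound only because it is separately guaranteed by another
    model check of the abstract protocol.
    """
    return Rule(
        name=rule.name + "_strengthened",
        guard=lambda state: rule.guard(state) and invariant(state),
        action=rule.action,
    )
```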

    An interface aware guided search method for error-trace justification in large protocols

    Technical report. Many complex concurrent protocols that cannot be formally verified due to state explosion can often be formally verified by initially creating a collection of abstractions (overapproximations) and subsequently refining the overapproximated protocol in response to spurious counterexample traces. Such an approach crucially depends on the ability to check whether a given error trace in the abstract protocol corresponds to a concrete trace in the original protocol. Unfortunately, this checking step alone can be as hard as verifying the original protocol directly without abstractions, which is infeasible. Our approach tracks the behavior at the interfaces erected by our abstractions and employs a few heuristic search methods based on a classification of the abstract system generating these traces. Together, these heuristic search methods form a tailor-made guided search strategy that works very efficiently in practice on three realistic multicore hierarchical cache coherence protocols. It could correctly analyze ?? ?? spurious error traces and genuine error scenarios, each within seconds. Also, on ?? of the ?? ?? spurious errors, our approach can precisely report which transition in the abstract protocol is over-approximated in a way that leads to the spurious error.
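
    A rough sketch of what such a guided trace-justification search could look like, entirely our own reconstruction from the abstract: a best-first search over the concrete protocol that prioritizes states matching the next interface event of the abstract counterexample, rather than blind breadth-first exploration.

```python
import heapq

def justify_trace(initial, successors, events, matches):
    """Best-first search for a concrete run exhibiting `events` in order.

    initial:    hashable concrete start state
    successors: successors(state) -> iterable of successor states
    events:     interface events observed along the abstract error trace
    matches:    matches(state, event) -> bool
    Returns True if the abstract trace is concretizable (a genuine error),
    False if it is spurious with respect to the explored state space.
    """
    tie = 0                         # tie-breaker keeps heap entries comparable
    frontier = [(len(events), tie, 0, initial)]
    seen = set()
    while frontier:
        _, _, matched, s = heapq.heappop(frontier)
        if matched == len(events):
            return True
        if (s, matched) in seen:
            continue
        seen.add((s, matched))
        for s2 in successors(s):
            m2 = matched + (1 if matches(s2, events[matched]) else 0)
            tie += 1
            # Fewer remaining events to match = smaller key = higher priority
            heapq.heappush(frontier, (len(events) - m2, tie, m2, s2))
    return False
```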

    Design tradeoffs for simplicity and efficient verification in the Execution Migration Machine

    As transistor technology continues to scale, the architecture community has experienced exponential growth in design complexity and significantly increasing implementation and verification costs. Moreover, Moore's law has led to a ubiquitous trend of an increasing number of cores on a single chip. Often, these large-core-count chips provide a shared memory abstraction via directories and coherence protocols, which have become notoriously error-prone and difficult to verify because of subtle data races and state space explosion. Although a very simple hardware shared memory implementation can be achieved by simply not allowing ad hoc data replication and relying on remote accesses for remotely cached data (i.e., requiring no directories or coherence protocols), such remote-access-based directoryless architectures cannot take advantage of any data locality, and therefore suffer in both performance and energy. Our recently taped-out 110-core shared-memory processor, the Execution Migration Machine (EM²), establishes a new design point. On the one hand, EM² supports shared memory but does not automatically replicate data, and thus preserves the simplicity of directoryless architectures. On the other hand, it significantly improves performance and energy over remote-access-only designs by exploiting data locality at remote cores via fast hardware-level thread migration. In this paper, we describe the design choices made in the EM² chip as well as our choice of design methodology, and discuss how they combine to achieve design simplicity and verification efficiency. Even though EM² is a fairly large design (110 cores using a total of 357 million transistors), the entire chip design and implementation process (RTL, verification, physical design, tapeout) took only 18 man-months.
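
    A toy model of the decision that separates EM² from remote-access-only directoryless designs (an illustrative software heuristic only; in the real chip this decision is made in hardware): on an access to an address homed at another core, either perform a one-off remote access or migrate the executing thread to the data.

```python
class Thread:
    def __init__(self, core):
        self.core = core       # core the thread currently executes on
        self.streak = {}       # home core -> count of recent remote accesses

def handle_access(thread, home_core, migrate_threshold=3):
    """Decide between a remote access and a thread migration.

    `migrate_threshold` is a made-up locality heuristic: after enough
    accesses to data homed at one remote core, migrating the thread there
    beats paying the network round-trip on every access.
    """
    if home_core == thread.core:
        return "local"                # data is homed here: ordinary access
    thread.streak[home_core] = thread.streak.get(home_core, 0) + 1
    if thread.streak[home_core] >= migrate_threshold:
        thread.core = home_core       # fast hardware-level thread migration
        thread.streak.clear()
        return "migrate"
    return "remote_access"            # one access over the network, no caching
```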