    On the decidability of shared memory consistency verification

    Technical report: We view shared memories as structures which define relations over the set of programs and their executions. An implementation is modeled by a transducer, where the relation it realizes is its language. This approach allows us to cast shared memory verification as language inclusion. We show that a specification can be approximated by an infinite hierarchy of finite-state transducers, called the memory model machines. Also, checking whether an execution is generated by a sequentially consistent memory is approached through a constraint satisfaction formulation. It is proved that if a memory implementation generates a non-interleaved sequential and unambiguous execution, it necessarily generates one such execution of bounded size. Our paper summarizes the key results from the first author's dissertation, and may help a practitioner understand with clarity what "sequential consistency checking is undecidable" means.
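    The constraint-satisfaction view of checking a single execution lends itself to a direct, if exponential, implementation. The sketch below (Python; the operation encoding is hypothetical and this is not the paper's transducer construction) decides whether one finite execution admits an interleaving that respects program order and makes every read return the most recent write:

        # Brute-force check of one finite execution against sequential
        # consistency: search for an interleaving that respects each
        # thread's program order and makes every read return the value
        # of the most recent write (unwritten locations read as 0).
        def is_sequentially_consistent(threads):
            n = len(threads)

            def search(positions, mem):
                if all(p == len(t) for p, t in zip(positions, threads)):
                    return True  # every operation was legally scheduled
                for i in range(n):
                    p = positions[i]
                    if p == len(threads[i]):
                        continue
                    kind, loc, val = threads[i][p]
                    if kind == "r" and mem.get(loc, 0) != val:
                        continue  # this read cannot be scheduled here
                    new_mem = dict(mem)
                    if kind == "w":
                        new_mem[loc] = val
                    if search(positions[:i] + (p + 1,) + positions[i + 1:], new_mem):
                        return True
                return False

            return search((0,) * n, {})

        # The store-buffering outcome r(y)=0, r(x)=0 has no such interleaving:
        print(is_sequentially_consistent([
            [("w", "x", 1), ("r", "y", 0)],
            [("w", "y", 1), ("r", "x", 0)],
        ]))  # False

    The search is exponential in the number of operations, which is in keeping with the hardness results these papers discuss.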

    On Execution-Based Formalization for Sequential Consistency Verification

    Abstract: The adequacy of execution-based formalizations in modeling and verifying shared memory systems is discussed. It is argued that … not comparable. We claim that the decidability of sequential consistency for finite-state systems is an open problem, in contrast with the result o…

    Memory Consistency Models

    Abstract: The memory consistency model for a shared-memory multiprocessor specifies the behaviour of memory with respect to read and write operations from multiple processors. Relaxed models that impose fewer memory ordering constraints offer the potential for higher performance by allowing hardware and software to overlap and reorder memory operations. Many of the previously proposed models either fail to provide reasonable programming semantics or are biased toward programming ease at the cost of sacrificing performance. The optimizations enabled by relaxed models are extremely effective in hiding virtually the full latency of writes in architectures with blocking reads. We evaluate the consistency models, comparing in particular the weak consistency and release consistency models, and quantify the performance benefits of exploiting relaxed models based on detailed simulations of realistic parallel applications. We believe that the combined benefits in hardware and software will make relaxed models universal in future multiprocessors, as is already evidenced by their adoption in several commercial systems.
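    To make the reordering point concrete, a small experiment (a Python sketch; the encoding is hypothetical) enumerates every SC interleaving of the classic store-buffering litmus test and collects the value pairs the two reads can return. The pair (0, 0) never appears under SC, yet a relaxed model that lets a write linger in a store buffer past the same thread's later read permits it:

        from itertools import permutations

        # (thread, kind, location, value) -- value is None for reads.
        PROGRAM = [
            (0, "w", "x", 1), (0, "r", "y", None),
            (1, "w", "y", 1), (1, "r", "x", None),
        ]

        def sc_outcomes(program):
            outcomes = set()
            for order in permutations(range(len(program))):
                # Discard interleavings that violate a thread's program order.
                last_seen, ok = {}, True
                for idx in order:
                    t = program[idx][0]
                    if last_seen.get(t, -1) > idx:
                        ok = False
                        break
                    last_seen[t] = idx
                if not ok:
                    continue
                mem, result = {}, {}
                for idx in order:
                    _, kind, loc, val = program[idx]
                    if kind == "w":
                        mem[loc] = val
                    else:
                        result[idx] = mem.get(loc, 0)
                outcomes.add((result[1], result[3]))  # (thread 0's read, thread 1's read)
            return outcomes

        print(sc_outcomes(PROGRAM))  # {(0, 1), (1, 0), (1, 1)} -- never (0, 0)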

    Store Atomicity for Transactional Memory

    Abstract: We extend the notion of Store Atomicity [Arvind and Jan-Willem Maessen. Memory model = instruction reordering + store atomicity. In ISCA '06: Proceedings of the 33rd annual International Symposium on Computer Architecture, 2006] to a system with atomic transactional memory. This gives a fine-grained graph-based framework for defining and reasoning about transactional memory consistency. The memory model is defined in terms of thread-local Instruction Reordering axioms and Store Atomicity, which describes inter-thread communication via memory. A memory model with Store Atomicity is serializable: there is a unique global interleaving of all operations which respects the reordering rules and serializes all the operations in a transaction together. We extend Store Atomicity to capture this ordering requirement by requiring dependencies which cross a transaction boundary to point into the initiating instruction or out from the committing instruction. We sketch a weaker definition of transactional serialization which accounts for the ability to interleave transactional operations which touch disjoint memory. We give a procedure for enumerating the behaviors of a transactional program, noting that a safe enumeration procedure permits only one transaction to read from memory at a time. We show that more realistic models of transactional execution require speculative execution. We define the conditions under which speculation must be rolled back, and give criteria to identify which instructions must be rolled back in these cases.
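    At the granularity of whole transactions, the serialization requirement in such a graph-based setting reduces to a familiar acyclicity test: build the dependency graph over transactions and look for a topological order. A minimal Python sketch (the conflict edges are assumed given; this is the standard conflict-serializability check, not the paper's exact formalism):

        from graphlib import TopologicalSorter, CycleError

        def serialization_order(deps):
            """deps[t] = set of transactions whose operations must precede t."""
            try:
                return list(TopologicalSorter(deps).static_order())
            except CycleError:
                return None  # dependency cycle: no global serialization exists

        # T1 and T2 each depend on the other (dependencies cross both
        # transaction boundaries in both directions): not serializable.
        print(serialization_order({"T1": {"T2"}, "T2": {"T1"}}))  # None
        print(serialization_order({"T1": set(), "T2": {"T1"}}))   # ['T1', 'T2']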

    On model checking data-independent systems with arrays without reset

    A system is data-independent with respect to a data type X iff the operations it can perform on values of type X are restricted to just equality testing. The system may also store, input and output values of type X. We study model checking of systems which are data-independent with respect to two distinct type variables X and Y, and may in addition use arrays with indices from X and values from Y. Our main interest is the following parameterised model-checking problem: whether a given program satisfies a given temporal-logic formula for all non-empty finite instances of X and Y. Initially, we consider instead the abstraction where X and Y are infinite and where partial functions with finite domains are used to model arrays. Using a translation to data-independent systems without arrays, we show that the μ-calculus model-checking problem is decidable for these systems. From this result, we can deduce properties of all systems with finite instances of X and Y. We show that there is a procedure for the above parameterised model-checking problem of the universal fragment of the μ-calculus, such that it always terminates but may give false negatives. We also deduce that the parameterised model-checking problem of the universal disjunction-free fragment of the μ-calculus is decidable. Practical motivations for model checking data-independent systems with arrays include verification of memory and cache systems, where X is the type of memory addresses, and Y the type of storable values. As an example we verify a fault-tolerant memory interface over a set of unreliable memories. (Comment: Appeared in Theory and Practice of Logic Programming, vol. 4, no. 5&6, 2004.)
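    The array abstraction here is concrete enough to sketch. A data-independent memory may only store values of Y, return them, and test them for equality; modeling the array as a partial function with a finite domain is then just a dictionary, and the payoff of data-independence is that behavior is preserved under injective renaming of X and of Y, which is why small instances suffice. A minimal illustration in Python (the encoding is hypothetical):

        def run(ops):
            """ops: ('w', addr, val) or ('r', addr). Reads of addresses outside
            the partial function's domain return the symbolic default None."""
            mem, outputs = {}, []
            for op in ops:
                if op[0] == "w":
                    mem[op[1]] = op[2]
                else:
                    outputs.append(mem.get(op[1]))
            return outputs

        def rename(ops, rx, ry):
            """Apply injective renamings: rx on addresses (X), ry on values (Y)."""
            out = []
            for op in ops:
                if op[0] == "w":
                    out.append(("w", rx[op[1]], ry[op[2]]))
                else:
                    out.append(("r", rx[op[1]]))
            return out

        ops = [("w", "a0", "v1"), ("r", "a0"), ("r", "a1")]
        print(run(ops))  # ['v1', None]
        print(run(rename(ops, {"a0": "b7", "a1": "b9"}, {"v1": "u3"})))
        # ['u3', None] -- the same behavior, up to the renaming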

    How Hard is Weak-Memory Testing?

    Weak-memory models are standard formal specifications of concurrency across hardware, programming languages, and distributed systems. A fundamental computational problem is consistency testing: is the observed execution of a concurrent program in alignment with the specification of the underlying system? The problem has been studied extensively across Sequential Consistency (SC) and weak memory, and proven to be NP-complete when some aspect of the input (e.g., number of threads/memory locations) is unbounded. This unboundedness has left a natural question open: are there efficient parameterized algorithms for testing? The main contribution of this paper is a deep hardness result for consistency testing under many popular weak-memory models: the problem remains NP-complete even in its bounded setting, where candidate executions contain a bounded number of threads, memory locations, and values. This hardness spreads across several Release-Acquire variants of C11, a popular variant of its Relaxed fragment, popular Causal Consistency models, and the POWER architecture. To our knowledge, this is the first result that fully exposes the hardness of weak-memory testing and proves that the problem admits no parameterization under standard input parameters. It also yields a computational separation of these models from SC, x86-TSO, PSO, and Relaxed, for which bounded consistency testing is either known (for SC), or shown here (for the rest), to be in polynomial time.
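    The NP-completeness concerns the full testing problem; simple necessary conditions remain polynomial. For SC, x86-TSO, PSO, Release-Acquire, and the causal models (though not for models that permit load buffering), an execution can only be consistent if program order united with reads-from is acyclic. A hypothetical Python sketch of that cheap pre-check:

        from graphlib import TopologicalSorter, CycleError

        def po_rf_acyclic(num_events, po, rf):
            """po, rf: lists of (earlier, later) event-id pairs over 0..num_events-1."""
            preds = {e: set() for e in range(num_events)}
            for a, b in po + rf:
                preds[b].add(a)
            try:
                list(TopologicalSorter(preds).static_order())
                return True
            except CycleError:
                return False  # po-union-rf cycle: reject without further search

        # Event 0 is program-ordered before event 1 but reads from it:
        print(po_rf_acyclic(2, po=[(0, 1)], rf=[(1, 0)]))  # False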