
    Modelling concurrent objects running on the TSO and ARMv8 memory models

    Get PDF
    Hardware weak memory models, such as TSO and ARM, are used to increase the performance of concurrent programs by allowing program instructions to be executed on the hardware in a different order from that specified by the software. This poses a challenge for the verification of concurrent objects used in these programs, since the variations in the executions need to be considered. Many approaches exist for verifying concurrent objects, along with associated tool support. In particular, we focus on a thread-local approach to checking linearizability, the standard correctness condition for concurrent objects, using a model checker. This approach, like most others, does not support weak memory models. In order to reuse this existing approach, therefore, we show how to use the semantics of a weak memory model to directly derive a transition system of concurrent objects running under it. We do this for both TSO and the latest version of ARM, ARMv8. Since there is a straightforward implementation of TSO, we reflect this in our transition system, which includes a buffer of writes to memory mirroring the store buffer of TSO. We illustrate linearizability checking using model checking on a transition system generated by this approach. The implementation of the significantly more complex ARMv8 architecture is less obvious. We derive our transition system in this case from an existing operational semantics that is consistent with the results of thousands of litmus tests run on ARM hardware.
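    The store-buffer construction described in this abstract can be sketched directly as executable code. The following Python fragment is a minimal sketch under assumed names (TSOState, write, read, flush), not the paper's formal definitions: each thread has a FIFO store buffer, writes are buffered, reads forward from the thread's own buffer before falling back to shared memory, and a separate flush step commits the oldest buffered write.

```python
from collections import deque

# Minimal sketch of a TSO-style state with per-thread FIFO store buffers.
# A model checker would interleave program steps with flush steps to build
# the full transition system; only the state-update rules are shown here.
class TSOState:
    def __init__(self, shared, nthreads):
        self.shared = dict(shared)                          # global memory
        self.buffers = [deque() for _ in range(nthreads)]   # one FIFO buffer per thread

    def write(self, tid, var, val):
        self.buffers[tid].append((var, val))                # buffered, not yet globally visible

    def read(self, tid, var):
        for v, val in reversed(self.buffers[tid]):          # forward from own pending writes
            if v == var:
                return val
        return self.shared[var]                             # otherwise read shared memory

    def can_flush(self, tid):
        return bool(self.buffers[tid])

    def flush(self, tid):
        var, val = self.buffers[tid].popleft()              # oldest write reaches memory
        self.shared[var] = val

# The classic store-buffering outcome, impossible under sequential consistency:
s = TSOState({"x": 0, "y": 0}, nthreads=2)
s.write(0, "x", 1)
s.write(1, "y", 1)
print(s.read(0, "y"), s.read(1, "x"))   # prints "0 0" because neither write has been flushed yet
```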

    Defining correctness conditions for concurrent objects in multicore architectures

    Get PDF
    Correctness of concurrent objects is defined in terms of conditions that determine allowable relationships between histories of a concurrent object and those of the corresponding sequential object. Numerous correctness conditions have been proposed over the years, and more have been proposed recently as the algorithms implementing concurrent objects have been adapted to cope with multicore processors with relaxed memory architectures. We present a formal framework for defining correctness conditions for multicore architectures, covering both standard conditions for totally ordered memory and newer conditions for relaxed memory, which allows them to be expressed in a uniform manner, simplifying comparison. Our framework distinguishes between order and commitment properties, which in turn enables a hierarchy of correctness conditions to be established. We consider the Total Store Order (TSO) memory model in detail, formalise known conditions for TSO using our framework, and develop sequentially consistent variations of these. We present a work-stealing deque for TSO memory that is not linearizable but is correct with respect to these new conditions. Using our framework, we identify a new non-blocking compositional condition, fence consistency, which lies between known conditions for TSO and aims to capture the intention of a programmer-specified fence.

    Verifying linearizability on TSO architectures

    Get PDF
    Linearizability is the standard correctness criterion for fine-grained, non-atomic concurrent algorithms, and a variety of methods for verifying linearizability have been developed. However, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we define linearizability on a weak memory model: the TSO (Total Store Order) memory model, which is implemented in the x86 multicore architecture. We also show how a simulation-based proof method can be adapted to verify linearizability for algorithms running on TSO architectures. We demonstrate our approach on a typical concurrent algorithm, spinlock, and prove it linearizable using our simulation-based approach. Previous approaches to proving linearizability on TSO architectures have required a modification to the algorithm's natural abstract specification. Our proof method is the first, to our knowledge, for proving correctness without the need for such modification.
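    For orientation, a test-and-set spinlock of the kind typically analysed in this setting can be sketched as follows. The class name and the use of a Python lock to stand in for the hardware's atomic test-and-set are assumptions of the sketch, and the TSO-specific behaviour (the releasing store sitting in the store buffer) is only indicated in comments.

```python
import threading

class SpinLock:
    # Sketch of a test-and-set spinlock of the kind verified in this line of work.
    # The Python lock below only emulates an atomic test-and-set instruction;
    # it is not part of the algorithm being modelled.
    def __init__(self):
        self._flag = 0                        # 0 = free, 1 = held
        self._tas = threading.Lock()

    def _test_and_set(self):
        with self._tas:                       # hardware: a single atomic instruction
            old, self._flag = self._flag, 1
            return old

    def acquire(self):
        while self._test_and_set() == 1:      # spin until the lock was observed free
            pass

    def release(self):
        # A plain store; under TSO this write may sit in the releasing core's
        # store buffer for a while before other threads observe the lock as free.
        self._flag = 0
```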

    Admit your weakness: Verifying correctness on TSO architectures

    Get PDF
    Linearizability has become the standard correctness criterion for fine-grained, non-atomic concurrent algorithms; however, most approaches assume a sequentially consistent memory model, which is not always realised in practice. In this paper we study the correctness of concurrent algorithms on a weak memory model: the TSO (Total Store Order) memory model, which is commonly implemented by multicore architectures. Here, linearizability is often too strict, and hence we prove a weaker criterion, quiescent consistency, instead. Like linearizability, quiescent consistency is compositional, making it an ideal correctness criterion in a component-based context. We demonstrate how to model a typical concurrent algorithm, seqlock, and prove it quiescent consistent using a simulation-based approach. Previous approaches to proving correctness on TSO architectures have been based on linearizability, which makes it necessary to modify the algorithm's high-level requirements. Our approach is the first, to our knowledge, for proving correctness without the need for such a modification.
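    As a reference point, the shape of the seqlock case study can be sketched as below. This shows only the algorithm's structure under illustrative names (SeqLock, read, write) and a single-writer assumption; it does not reproduce the paper's TSO-level model or its quiescent-consistency argument.

```python
class SeqLock:
    # Sketch of a sequence lock protecting a pair of variables, assuming a
    # single writer. The counter is odd while a write is in progress; readers
    # retry if they observe an odd or changed counter.
    def __init__(self):
        self.seq = 0
        self.x = 0
        self.y = 0

    def write(self, x, y):          # single writer only
        self.seq += 1               # counter becomes odd: write in progress
        self.x, self.y = x, y
        self.seq += 1               # counter becomes even: write complete

    def read(self):
        while True:
            before = self.seq
            if before % 2 == 1:     # a write is in progress, retry
                continue
            x, y = self.x, self.y
            if self.seq == before:  # no write overlapped the two reads
                return x, y
```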

    Maintaining consistency in distributed systems

    Get PDF
    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications to combine virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

    Reasoning algebraically about refinement on TSO architectures

    Get PDF
    The Total Store Order memory model is widely implemented by modern multicore architectures such as x86, where local buffers are used for optimisation, allowing limited forms of instruction reordering. The presence of buffers and hardware-controlled buffer flushes increases the level of non-determinism beyond that specified by a program, complicating the already difficult task of concurrent programming. This paper presents a new notion of refinement for weak memory models, based on the observation that pending writes to a process's local variables may be treated as if the effect of the update has already occurred in shared memory. We develop an interval-based model with algebraic rules for various programming constructs. In this framework, several decomposition rules for our new notion of refinement are developed. We apply our approach to verify the spinlock algorithm from the literature.

    Concurrent Library Correctness on the TSO Memory Model

    Full text link
    Linearizability is a commonly accepted notion of correctness for libraries of concurrent algorithms. Unfortunately, it is only appropriate for sequentially consistent memory models, while the hardware and software platforms that algorithms run on provide weaker consistency guarantees. In this paper, we present the first definition of linearizability on a weak memory model, Total Store Order (TSO), implemented by x86 processors. We establish that our definition is a correct one in the following sense: while proving a property of a client of a concurrent library, we can soundly replace the library by its abstract implementation related to the original one by our generalisation of linearizability. This allows abstracting from the details of the library implementation while reasoning about the client. We have developed a tool for systematically testing concurrent libraries against our definition and applied it to several challenging algorithms.

    Finding linearization violations in lock-free concurrent data structures

    Get PDF
    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (page 31). Finding bugs in lock-free concurrent programs is hard. This is due in part to the difficulty of reasoning about the correctness of concurrent algorithms and the timing-sensitive nature of concurrent programs. One of the most widely used tools for reasoning about the correctness of concurrent algorithms is the linearization property. This thesis presents a tool for automatic dynamic checking of concurrent programs under the Total-Store-Order (TSO) memory model and a methodology for finding linearization violations automatically with the tool. By Sebastien Alberto Dabdoub, M. Eng.
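    A brute-force version of the kind of check such a tool automates can be sketched as follows. This is an illustrative reconstruction, not the thesis' tool: given a history of completed operations with invocation and response times and a sequential specification, search for a permutation that respects real-time order and is legal for the specification; if none exists, the history is a linearization violation.

```python
from itertools import permutations

def is_linearizable(history, apply_op, init_state):
    # history: list of (invoke_time, return_time, op, arg, result) for completed calls.
    n = len(history)
    for order in permutations(range(n)):
        pos = {op_id: i for i, op_id in enumerate(order)}
        # Real-time order: if a returned before b was invoked, a must precede b.
        if any(history[a][1] < history[b][0] and pos[a] > pos[b]
               for a in range(n) for b in range(n) if a != b):
            continue
        # Replay the candidate linearization against the sequential specification.
        state, legal = init_state, True
        for i in order:
            _, _, op, arg, res = history[i]
            state, expected = apply_op(state, op, arg)
            if expected != res:
                legal = False
                break
        if legal:
            return True
    return False

# Sequential specification of a counter: inc returns nothing, read returns the value.
def counter_spec(state, op, arg):
    return (state + 1, None) if op == "inc" else (state, state)

# Two increments complete before a read that returns 1: a linearization violation.
history = [(0, 2, "inc", None, None),
           (1, 3, "inc", None, None),
           (4, 5, "read", None, 1)]
print(is_linearizable(history, counter_spec, 0))   # False
```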

    Convergent types for shared memory

    Get PDF
    Master's dissertation in Computer Science. It is well known that consistency in shared-memory concurrent programming comes at the price of degraded performance and scalability. Some of the existing solutions to this problem end up with high complexity and are not programmer friendly. We present a simple and well-defined approach to obtaining meaningful results in shared-memory environments by relaxing synchronization. For that, we look into Mergeable Data Types, data structures analogous to Conflict-Free Replicated Data Types but designed to perform in shared memory. CRDTs were the first formal approach to develop a solid theoretical study of eventual consistency in distributed systems, responding to the problem posed by the CAP Theorem and providing high availability. With CRDTs, updates are unsynchronized, and replicas eventually converge to a correct common state. However, CRDTs are not designed to perform in shared memory. In large-scale distributed systems the merge cost is negligible when compared to network-mediated synchronization. Therefore, we migrate the concept by developing the existing Mergeable Data Types, formally defining a programming model that we name the Global-Local View. Furthermore, we create a portfolio of MDTs and demonstrate that in the appropriate scenarios we can benefit largely from the model.
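    To give a flavour of the Global-Local View style described above, a grow-only counter in that spirit might look like the sketch below. The class and method names (MergeableCounter, inc, merge, weak_value) are illustrative assumptions rather than the dissertation's own API.

```python
import threading

class MergeableCounter:
    # Sketch of a mergeable, grow-only counter: each thread accumulates
    # increments in a private local view and only occasionally merges them
    # into the shared global state, so the fast path needs no synchronisation.
    def __init__(self):
        self._global = 0
        self._merge_lock = threading.Lock()   # taken only on merge
        self._local = threading.local()

    def _pending(self):
        return getattr(self._local, "pending", 0)

    def inc(self):
        self._local.pending = self._pending() + 1   # thread-local, unsynchronised

    def merge(self):
        with self._merge_lock:                      # rare, amortised synchronisation
            self._global += self._pending()
        self._local.pending = 0

    def weak_value(self):
        # Local view: a (possibly stale) global snapshot plus this thread's
        # unmerged increments; other threads' unmerged work is not visible.
        return self._global + self._pending()
```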