
    Robustness Against Transactional Causal Consistency

    Distributed storage systems and databases are widely used by various types of applications. Transactional access to these storage systems is an important abstraction, allowing application programmers to consider blocks of actions (i.e., transactions) as executing atomically. For performance reasons, the consistency models implemented by modern databases are weaker than the standard serializability model, which corresponds to the atomicity abstraction of transactions executing over a sequentially consistent memory. Causal consistency, for instance, is one such model that is widely used in practice. In this paper, we investigate application-specific relationships between several variations of causal consistency, and we address the issue of automatically verifying whether a given transactional program is robust against causal consistency, i.e., whether all its behaviors when executed over an arbitrary causally consistent database are serializable. We show that programs without write-write races have the same set of behaviors under all these variations, and that checking robustness is polynomial-time reducible to a state reachability problem for transactional programs over a sequentially consistent shared memory. A surprising corollary of the latter result is that causal consistency variations which admit incomparable sets of behaviors admit comparable sets of robust programs. This reduction also opens the door to leveraging existing methods and tools for the verification of concurrent programs (assuming sequential consistency) to reason about programs running over causally consistent databases. Furthermore, it allows us to establish that the problem of checking robustness is decidable when the programs executed at different sites are finite-state.
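    To make the robustness question concrete, the classic write-skew program is a standard example of non-robustness against causal consistency: it has no write-write race, yet admits a causally consistent behavior that no serial order can produce. The following minimal Python sketch (illustrative only; the transaction names and encoding are not from the paper) replays that behavior:

```python
# Write skew: T1 and T2 write different keys, so there is no write-write
# race, yet the causally consistent outcome below is not serializable.

def t1(snapshot):
    # T1: if x == 0 then y := 1
    return {"y": 1} if snapshot["x"] == 0 else {}

def t2(snapshot):
    # T2: if y == 0 then x := 1
    return {"x": 1} if snapshot["y"] == 0 else {}

initial = {"x": 0, "y": 0}

# Causal consistency allows both transactions to run against the initial
# snapshot on their own replica before the replicas exchange updates.
causal_final = {**initial, **t1(initial), **t2(initial)}

# Under serializability, one transaction must observe the other's writes.
after_t1 = {**initial, **t1(initial)}
serial_t1_t2 = {**after_t1, **t2(after_t1)}
after_t2 = {**initial, **t2(initial)}
serial_t2_t1 = {**after_t2, **t1(after_t2)}

print(causal_final)                 # {'x': 1, 'y': 1}
print(serial_t1_t2, serial_t2_t1)   # {'x': 0, 'y': 1} {'x': 1, 'y': 0}
```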

    Robustness against Consistency Models with Atomic Visibility

    To achieve scalability, modern Internet services often rely on distributed databases that provide consistency models for transactions weaker than serializability. At present, application programmers often lack techniques to ensure that the weakness of these consistency models does not violate application correctness. We present criteria to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability. When this is the case, the application programmer can reap the scalability benefits of weak consistency while being able to easily check the desired correctness properties. Our results handle several recently proposed weak consistency models systematically and uniformly, as well as a mechanism for strengthening consistency in parts of an application.
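    Criteria of this kind typically reduce robustness to the absence of certain cycles among transaction dependencies. As a minimal sketch of that shape of check (the concrete relations and side conditions in the paper are more refined), one can test a dependency graph over transactions for acyclicity:

```python
# Minimal sketch of an acyclicity-style robustness check: a history is
# serializable iff its dependency graph over transactions has no cycle.

from collections import defaultdict

def has_cycle(nodes, edges):
    """Detect a cycle in a directed graph by three-color DFS."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# Example: T1 -WR-> T2 -RW-> T1 is a dependency cycle, so the history is
# not serializable and the program that produced it is not robust.
print(has_cycle({"T1", "T2"}, [("T1", "T2"), ("T2", "T1")]))  # True
```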

    Data consistency in transactional storage systems: a centralised approach.

    We introduce an interleaving operational semantics for describing the client-observable behaviour of atomic transactions on distributed key-value stores. Our semantics builds on abstract states comprising centralised, global key-value stores and partial client views. We provide operational definitions of consistency models for our key-value stores, which are shown to be equivalent to the well-known declarative definitions of consistency models over execution graphs. We explore two immediate applications of our semantics: specific protocols of geo-replicated databases (e.g. COPS) and partitioned databases (e.g. Clock-SI) can be shown correct for a specific consistency model by embedding them in our centralised semantics; and programs can be directly shown to enjoy invariant properties such as robustness against a weak consistency model.
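    A toy rendering of the centralised abstract states may help fix intuitions: a global store keeps every version of every key, while each client reads through a partial view. All names below are assumptions for illustration, not the paper's formal definitions:

```python
# Toy centralised abstract state: a global, versioned key-value store plus
# per-client partial views selecting which versions each client may read.

from dataclasses import dataclass, field

@dataclass
class Version:
    value: int
    writer: str          # identifier of the transaction that wrote it

@dataclass
class CentralisedStore:
    store: dict = field(default_factory=dict)   # key -> [Version, ...]
    views: dict = field(default_factory=dict)   # client -> {(key, index)}

    def write(self, tx, key, value):
        self.store.setdefault(key, []).append(Version(value, tx))

    def extend_view(self, client, key, index):
        # A consistency model constrains which view extensions are legal,
        # e.g. causal consistency requires views closed under causality.
        self.views.setdefault(client, set()).add((key, index))

    def read(self, client, key):
        # Read the newest version of `key` inside the client's view.
        visible = [i for (k, i) in self.views.get(client, set()) if k == key]
        return self.store[key][max(visible)].value

kv = CentralisedStore()
kv.write("t0", "x", 0)
kv.write("t1", "x", 42)
kv.extend_view("alice", "x", 0)
print(kv.read("alice", "x"))  # 0: alice's view does not yet include t1
```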

    A Fast Implementation of Parallel Snapshot Isolation

    Undergraduate thesis (Grado en Ingeniería Informática), Facultad de Informática UCM, Departamento de Arquitectura de Computadores y Automática, 2019/2020. Most distributed database systems offer weak consistency models in order to avoid the performance penalty of coordinating replicas. Ideally, distributed databases would offer strong consistency models, like serialisability, since they make it easy to verify application invariants and free programmers from worrying about concurrency. However, implementing and scaling systems with strong consistency is difficult, since it usually requires global communication. Weak models, while easier to scale, impose on programmers the need to reason about possible anomalies and to implement conflict resolution mechanisms in application code. Recently proposed consistency models, like Parallel Snapshot Isolation (PSI) and Non-Monotonic Snapshot Isolation (NMSI), represent the strongest models that still allow building scalable systems without global communication. They offer performance and abort rates comparable to previous, weaker models. However, both models still provide weaker guarantees than serialisability and may prove difficult to use in applications. This work presents an approach to bridge the gap between PSI, NMSI, and strong consistency models like serialisability. It introduces and implements fastPSI, a consistency protocol that allows the user to selectively enforce serialisability for certain executions while retaining the scalability properties of weaker consistency models like PSI and NMSI. In addition, it features a comprehensive evaluation of fastPSI against other consistency protocols, both weak and strong, showing that fastPSI offers better performance than serialisability while retaining the scalability of weaker protocols.
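    Protocols in the PSI/NMSI family build on snapshot reads plus a commit-time write-write conflict check (first-committer-wins). The following simplified Python sketch (assumed structure; fastPSI's actual protocol is more involved and, notably, adds selective serialisability on top) illustrates that baseline mechanism:

```python
# Simplified snapshot store: transactions read from the snapshot taken at
# begin() and commit only if no transaction that committed after that
# snapshot wrote a key in their write set (first-committer-wins).

class SnapshotStore:
    def __init__(self):
        self.committed = []          # list of (commit_ts, {key: value})
        self.clock = 0

    def begin(self):
        return self.clock            # snapshot timestamp

    def read(self, snap_ts, key, default=None):
        value = default
        for ts, writes in self.committed:
            if ts <= snap_ts and key in writes:
                value = writes[key]  # newest write within the snapshot
        return value

    def try_commit(self, snap_ts, write_set):
        # Abort on a write-write conflict with any transaction that
        # committed after our snapshot was taken.
        for ts, writes in self.committed:
            if ts > snap_ts and writes.keys() & write_set.keys():
                return False
        self.clock += 1
        self.committed.append((self.clock, dict(write_set)))
        return True

db = SnapshotStore()
s1, s2 = db.begin(), db.begin()
print(db.try_commit(s1, {"x": 1}))   # True: first committer wins
print(db.try_commit(s2, {"x": 2}))   # False: concurrent write-write conflict
```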

    Algebraic Laws for Weak Consistency

    Modern distributed systems often rely on so-called weakly consistent databases, which achieve scalability by weakening the consistency guarantees of distributed transaction processing. The semantics of such databases have been formalised in two different styles, one based on abstract executions and the other based on dependency graphs. The choice between these styles has been made according to the intended application: the former has been used for specifying and verifying the implementation of the databases, while the latter for proving properties of client programs of the databases. In this paper, we present a set of novel algebraic laws (inequalities) that connect these two styles of specification. The laws relate binary relations used in a specification based on abstract executions to those used in a specification based on dependency graphs. We then show that this algebraic connection gives rise to so-called robustness criteria: conditions which ensure that a client program of a weakly consistent database does not exhibit anomalous behaviours due to weak consistency. These criteria make it easy to reason about these client programs, and may become a basis for dynamic or static program analyses. For a certain class of consistency model specifications, we prove a full abstraction result that connects the two styles of specifications.
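    The flavor of such laws can be sketched with two standard soundness facts relating dependency-graph relations (WR, WW) to abstract-execution relations (visibility VIS and arbitration AR); these illustrate the style only and are not the paper's exact inequalities:

```latex
% Illustrative laws only; the paper's inequalities are more refined.
% WR, WW are dependency-graph relations; VIS, AR come from abstract
% executions, with AR a total order extending VIS.
\begin{align*}
  \mathsf{WR} &\subseteq \mathsf{VIS}
    && \text{a transaction reads only from transactions visible to it} \\
  \mathsf{WW} &\subseteq \mathsf{AR}
    && \text{conflicting writes are resolved by the arbitration order}
\end{align*}
```

    Laws of this shape yield robustness criteria: if every dependency edge of a client program embeds into AR, then any dependency cycle would force a cycle in the total order AR, which is impossible, so every behaviour is serializable.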

    Dynamic Partial Order Reduction for Checking Correctness Against Transaction Isolation Levels

    Modern applications, such as social networking systems and e-commerce platforms, are centered around using large-scale databases for storing and retrieving data. Accesses to the database are typically enclosed in transactions that allow computations on shared data to be isolated from other concurrent computations and resilient to failures. Modern databases trade isolation for performance: the weaker the isolation level, the more behaviors a database is allowed to exhibit, and it is up to the developer to ensure that their application can tolerate those behaviors. In this work, we propose stateless model checking algorithms, based on dynamic partial order reduction, for checking the correctness of such applications. These algorithms work for a number of widely used weak isolation levels, including Read Committed, Causal Consistency, Snapshot Isolation, and Serializability. We show that they are complete, sound, and optimal, and run with polynomial memory consumption in all cases. We report on an implementation of these algorithms in the context of Java Pathfinder, applied to a number of challenging applications drawn from the literature on distributed systems and databases.
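    The stateless model checking setting can be illustrated with a naive explorer that re-executes the program from its initial state once per transaction ordering; the point of dynamic partial order reduction is precisely to avoid this exhaustive enumeration by visiting one representative per equivalence class of interleavings. The sketch below is illustrative only and is not the paper's optimal algorithm:

```python
# Naive stateless explorer: replay every transaction order from scratch.
# An optimal DPOR would prune orders that are equivalent up to commuting
# independent transactions instead of enumerating them all as done here.

from itertools import permutations

def explore(transactions, initial):
    """Enumerate outcomes by replaying every transaction order,
    re-executing from the initial state each time (stateless search)."""
    outcomes = set()
    for order in permutations(range(len(transactions))):
        state = dict(initial)
        for i in order:
            state.update(transactions[i](state))
        outcomes.add(frozenset(state.items()))
    return outcomes

# Two transactions incrementing a shared counter commute, so a DPOR
# would explore a single representative instead of both orders.
inc = lambda s: {"c": s["c"] + 1}
print(explore([inc, inc], {"c": 0}))  # {frozenset({('c', 2)})}
```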