
    Timed consistency: unifying model of consistency protocols in distributed systems

    Ordering and timeliness are two different aspects of the consistency of shared objects in distributed systems. Timed consistency [12] is an approach that considers these two elements simultaneously, according to the needs of the system. Hence, most well-known consistency protocols are candidates to be unified under the timed consistency approach simply by changing some of their time or order parameters. Red de Universidades con Carreras en Informática (RedUNCI)
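    To make the idea of tuning a time parameter concrete, the sketch below shows a toy replicated register whose reads must reflect every write older than a timeliness bound delta; with delta = 0 it approximates atomic behavior, while larger values weaken synchronization. The class, its fields, and the buffering scheme are illustrative assumptions, not the protocol unification described in the paper.

```python
import time

class TimedRegister:
    """Illustrative sketch (not the paper's protocol): a replicated register
    whose reads must reflect every remote write that is at least `delta`
    seconds old. Varying delta is the kind of time parameter that a timed
    consistency model tunes."""

    def __init__(self, delta):
        self.delta = delta        # timeliness bound in seconds (hypothetical parameter)
        self.value = None         # last applied value
        self.pending = []         # (write_time, value) pairs received but not yet applied

    def remote_write(self, write_time, value):
        # Buffer a write received from another replica until it becomes "due".
        self.pending.append((write_time, value))

    def read(self):
        # Timed consistency idea: any write older than delta must be visible now,
        # so apply all due writes in timestamp order before answering.
        now = time.time()
        due = sorted((tv for tv in self.pending if now - tv[0] >= self.delta),
                     key=lambda tv: tv[0])
        for _, value in due:
            self.value = value
        self.pending = [tv for tv in self.pending if now - tv[0] < self.delta]
        return self.value
```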

    On the Consistency Conditions of Transactional Memories

    The aim of a Software Transactional Memory (STM) is to discharge programmers from the management of synchronization in multiprocess programs that access concurrent objects. To that end, an STM system provides the programmer with the concept of a transaction: each sequential process is decomposed into transactions, where a transaction encapsulates a piece of sequential code accessing concurrent objects. A transaction contains no explicit synchronization statement and appears as if it has been executed atomically. Due to the underlying concurrency management, a transaction commits or aborts. Up to now, few papers have focused on the definition of consistency conditions suited to STM systems. One of them has recently proposed the opacity consistency condition. Opacity involves all the transactions (i.e., the committed plus the aborted transactions). It requires that (1) until it aborts (if ever it does) a transaction sees a consistent global state of the concurrent objects, and (2) the execution is linearizable (i.e., it could have been produced by a sequential execution, of the same transactions, that respects the real-time order on the non-concurrent transactions). This paper is on consistency conditions for transactional memories. It first presents a framework that allows defining a space of consistency conditions whose extreme endpoints are serializability and opacity. It then extracts from this framework a new consistency condition that we call virtual world consistency. This condition ensures that (1) each transaction (committed or aborted) reads values from a consistent global state, (2) the consistent global states read by committed transactions are mutually consistent, but (3) the consistent global states read by aborted transactions are not required to be mutually consistent. Interestingly enough, this consistency condition can benefit many STM applications as, from its local point of view, a transaction cannot distinguish it from opacity. Finally, the paper presents and proves correct an STM algorithm that implements the virtual world consistency condition. Interestingly, this algorithm distinguishes the serialization date of a transaction from its commit date (thereby allowing more transactions to commit).
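    The requirement that every live transaction, committed or aborted, observes a consistent global state is often enforced by incremental read validation. The single-threaded sketch below illustrates that generic technique under stated assumptions (a global version clock and per-object version numbers); it is not the algorithm the paper presents and proves correct.

```python
class AbortException(Exception):
    """Raised when a transaction's view can no longer belong to one global state."""
    pass

class TObject:
    """A concurrent object: a value plus the version (date) of its last write."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0

class Transaction:
    """Single-threaded sketch of incremental read validation (assumed technique,
    not the paper's algorithm): every read re-checks the read set so that a
    transaction never mixes values from two different global states."""

    global_clock = 0  # toy global version clock shared by all transactions

    def __init__(self):
        self.read_set = {}   # TObject -> version observed by this transaction
        self.write_set = {}  # TObject -> value written by this transaction

    def read(self, obj):
        if obj in self.write_set:          # read-your-own-writes
            return self.write_set[obj]
        value, version = obj.value, obj.version
        # If any previously read object has changed, the snapshot is broken: abort.
        for o, seen in self.read_set.items():
            if o.version != seen:
                raise AbortException("inconsistent snapshot")
        self.read_set[obj] = version
        return value

    def write(self, obj, value):
        self.write_set[obj] = value        # buffered until commit

    def commit(self):
        # Final validation, then install the writes with a new serialization date.
        for o, seen in self.read_set.items():
            if o.version != seen:
                raise AbortException("validation failed at commit")
        Transaction.global_clock += 1
        for o, value in self.write_set.items():
            o.value = value
            o.version = Transaction.global_clock
```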

    Planetary Scale Data Storage

    The success of virtualization and container-based application deployment has fundamentally changed computing infrastructure from dedicated hardware provisioning to on-demand, shared clouds of computational resources. One of the most interesting effects of this shift is the opportunity to localize applications in multiple geographies and support mobile users around the globe. With relatively few steps, an application and its data systems can be deployed and scaled across continents and oceans, leveraging the existing data centers of much larger cloud providers. The novelty and ease of a global computing context mean that we are closer to the advent of an Oceanstore, an Internet-like revolution in personalized, persistent data that securely travels with its users. At a global scale, however, data systems suffer from physical limitations that significantly impact their consistency and performance. Even with modern telecommunications technology, the latency in communication from Brazil to Japan results in noticeable synchronization delays that violate user expectations. Moreover, the required scale of such systems means that failure is routine. To address these issues, we explore consistency in the implementation of distributed logs, key/value databases, and file systems that are replicated across wide areas. At the core of our system is hierarchical consensus, a geographically distributed consensus algorithm that provides strong consistency, fault tolerance, durability, and adaptability to varying user access patterns. Using hierarchical consensus as a backbone, we further extend our system from data centers to edge regions using federated consistency, an adaptive consistency model that gives satellite replicas high availability with stronger global consistency than existing weak consistency models. In a deployment of 105 replicas in 15 geographic regions across 5 continents, we show that our implementation provides high throughput, strong consistency, and resiliency in the face of failure. From our experimental validation, we conclude that planetary-scale data storage systems can be implemented algorithmically without sacrificing consistency or performance.
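    As a rough illustration of pairing a strongly consistent core with highly available edge replicas, the sketch below lets an edge replica accept writes locally and later adopt the core's authoritative order. All names and the reconciliation scheme are assumptions for illustration; they are not the dissertation's hierarchical consensus or federated consistency implementation.

```python
class Core:
    """Toy stand-in for a strongly consistent core (e.g., a consensus group):
    it fixes a single authoritative order for all writes it receives."""
    def __init__(self):
        self.log = []

    def append_and_order(self, entries):
        self.log.extend(entries)
        return list(self.log)          # the authoritative, totally ordered log

class EdgeReplica:
    """Illustrative edge (satellite) replica: it stays available by accepting
    writes locally, then reconciles with the core, whose order wins."""
    def __init__(self, core):
        self.core = core               # handle to the (hypothetical) core group
        self.local_log = []            # writes accepted locally, not yet ordered
        self.state = {}                # key -> value, possibly tentative

    def write(self, key, value):
        self.local_log.append((key, value))
        self.state[key] = value        # applied tentatively for availability

    def read(self, key):
        return self.state.get(key)     # may lag or lead the core's view

    def reconcile(self):
        # Ship pending writes to the core and rebuild state from its order.
        ordered = self.core.append_and_order(self.local_log)
        self.local_log = []
        self.state = {}
        for key, value in ordered:
            self.state[key] = value
```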