36 research outputs found

    An optimized conflict-free replicated set

    Eventual consistency of replicated data supports concurrent updates, reduces latency and improves fault tolerance, but forgoes strong consistency. Accordingly, several cloud computing platforms implement eventually-consistent data types. The set is a widespread and useful abstraction, and many replicated set designs have been proposed. We present a reasoning abstraction, permutation equivalence, that systematizes the characterization of the expected concurrency semantics of concurrent types. Under this framework we present one of the existing conflict-free replicated data types, the Observed-Remove Set. Furthermore, in order to decrease the size of metadata, we propose a new optimization to avoid tombstones. This approach can be transposed to other data types, such as maps, graphs or sequences. (Comment: No. RR-8083, 2012)
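    As a rough illustration of Observed-Remove Set semantics, here is a minimal state-based sketch in Python. It uses the classic tombstone approach that the paper's optimization is designed to avoid; the class and method names are illustrative, not taken from the paper.

    ```python
    import uuid

    class ORSet:
        """Minimal state-based Observed-Remove Set sketch using tombstones.
        The paper proposes an optimized variant that avoids keeping these
        tombstones; this is only the unoptimized baseline."""

        def __init__(self):
            self.added = {}    # element -> set of unique add-tags
            self.removed = {}  # element -> set of tags seen when removed (tombstones)

        def add(self, element):
            # Each add gets a globally unique tag, so concurrent adds are distinguishable.
            self.added.setdefault(element, set()).add(uuid.uuid4().hex)

        def remove(self, element):
            # Remove only the tags observed locally; concurrent (unseen) adds win.
            observed = self.added.get(element, set())
            self.removed.setdefault(element, set()).update(observed)

        def lookup(self, element):
            return bool(self.added.get(element, set()) - self.removed.get(element, set()))

        def merge(self, other):
            # Join is an element-wise union of both tag maps.
            for elem, tags in other.added.items():
                self.added.setdefault(elem, set()).update(tags)
            for elem, tags in other.removed.items():
                self.removed.setdefault(elem, set()).update(tags)
    ```

    In this baseline the tombstones in `removed` grow without bound; shrinking that metadata is exactly the problem the paper's optimization addresses.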

    Why logical clocks are easy

    Tracking causality should not be ignored: it is important in the design of many distributed algorithms, and not respecting it can lead to strange behaviors for users. The most commonly used mechanisms for tracking causality, vector clocks and version vectors, are simply optimized representations of causal histories, which are easy to understand. By building on the notion of causal histories, users can begin to see the logic behind these mechanisms, to identify how they differ, and even to consider possible optimizations. When confronted with an unfamiliar causality tracking mechanism, or when trying to design a new system that requires it, readers should ask two simple questions: which events need tracking, and how does the mechanism translate back to a simple causal history? We would like to thank Rodrigo Rodrigues, Marc Shapiro, Russell Brown, Sean Cribbs, and Justin Sheehy for their feedback. This work was partially supported by EU FP7 SyncFree project (609551) and FCT/MCT projects UID/CEC/04516/2013 and UID/EEA/50014/2013.
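    As a small sketch of the mechanisms mentioned above, the following Python class shows a textbook vector clock, i.e. a compressed representation of a causal history that keeps only the latest counter per node. The names and structure are illustrative, not taken from the paper.

    ```python
    class VectorClock:
        """Minimal vector clock sketch: one counter per node, merged on receive."""

        def __init__(self, node_id):
            self.node_id = node_id
            self.clock = {node_id: 0}

        def local_event(self):
            # A local event advances only this node's counter.
            self.clock[self.node_id] += 1

        def send(self):
            # Count the send as an event and attach a copy of the clock to the message.
            self.local_event()
            return dict(self.clock)

        def receive(self, message_clock):
            # Merge: pointwise maximum, then count the receive event itself.
            for node, counter in message_clock.items():
                self.clock[node] = max(self.clock.get(node, 0), counter)
            self.local_event()

        def happened_before(self, other):
            # self -> other iff every entry of self is <= other's and at least one is strictly less.
            keys = set(self.clock) | set(other.clock)
            le = all(self.clock.get(k, 0) <= other.clock.get(k, 0) for k in keys)
            strict = any(self.clock.get(k, 0) < other.clock.get(k, 0) for k in keys)
            return le and strict
    ```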

    Adaptive Consistency Guarantees for Large-Scale Replicated Services

    To maintain consistency, designers of replicated services have traditionally been forced to choose between strong consistency guarantees and none at all. Realizing that a continuum between strong and optimistic consistency is semantically meaningful for a broad range of network services, previous research has proposed a continuous consistency model for replicated services to support the tradeoff between the guaranteed consistency level, performance, and availability. However, to meet changing application needs and to make the model useful for interactive users of large-scale replicated services, the adaptability and swiftness of inconsistency resolution are important and challenging. This paper presents IDEA (an Infrastructure for DEtection-based Adaptive consistency guarantees) for adaptive consistency guarantees in large-scale, Internet-based replicated services. The main functions enabled by IDEA are quick inconsistency detection and resolution, consistency adaptation, and quantified consistency-level guarantees. Through experimentation on PlanetLab, IDEA is evaluated from two aspects: its adaptive consistency guarantees and its performance for inconsistency resolution. Results show that IDEA provides consistency guarantees that adapt to users' changing needs, achieves low delay for inconsistency resolution, and incurs small communication overhead.
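    The abstract does not expose IDEA's interfaces, so the following is only a hypothetical Python sketch of the general detection-based adaptation idea: monitor observed inconsistency and move a numeric consistency bound toward the user's target. The function name and parameters are invented for illustration.

    ```python
    def adapt_bound(observed_inconsistency: float, target: float,
                    bound: int, step: int = 1) -> int:
        """Hypothetical detection-based adaptation step (not IDEA's actual API):
        tighten the allowed staleness bound when observed inconsistency exceeds
        the target, otherwise relax it to regain performance and availability."""
        if observed_inconsistency > target:
            return max(0, bound - step)  # stronger consistency: allow less divergence
        return bound + step              # weaker consistency: lower latency, higher availability
    ```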

    Timestamp-Based Approach for the Detection and Resolution of Mutual Conflicts in Distributed Systems

    We present a timestamp-based algorithm for the detection of both write-write and read-write conflicts on a single file in distributed systems during network partitions. Our algorithm allows operations to occur in different network partitions simultaneously. When the sites from different partitions merge, the algorithm detects and resolves both read-write and write-write conflicts without taking into account the semantics of the transactions. Once the conflicts have been detected, reconciliation steps for their resolution are also proposed. Our algorithm will be useful in real-time systems where timeliness of operations is more important than response time (delayed commit).
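    As a hedged illustration of the kind of bookkeeping such an approach needs (not the paper's exact data structures or rules), the Python sketch below records per-partition reads and writes for one file and flags conflicts when the partitions merge.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class PartitionLog:
        """Hypothetical per-partition record for one file during a network partition."""
        base_version: int                            # file version when the partition formed
        writes: list = field(default_factory=list)   # timestamps of local writes
        reads: list = field(default_factory=list)    # timestamps of local reads

    def detect_conflicts(a: PartitionLog, b: PartitionLog):
        """On merge, flag write-write and read-write conflicts between two
        partitions that diverged from the same base version."""
        conflicts = []
        if a.writes and b.writes:
            conflicts.append("write-write")   # both sides updated the file
        if (a.writes and b.reads) or (b.writes and a.reads):
            conflicts.append("read-write")    # one side read a value the other overwrote
        return conflicts
    ```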

    A Privacy-Aware Distributed Storage and Replication Middleware for Heterogeneous Computing Platform

    Cloud computing is an emerging research area that has drawn considerable interest in recent years. However, the current infrastructure raises significant concerns about how to protect users' privacy, in part because users store their data on cloud vendors' servers. In this paper, we address this challenge by proposing and implementing a novel middleware, called Uno, which separates the storage of physical data and their associated metadata. In our design, users' physical data are stored locally on devices under a user's full control, while their metadata can be uploaded to the commercial cloud. To ensure the reliability of users' data, we develop a novel fine-grained file replication algorithm that exploits both data access patterns and device state patterns. Based on a quantitative analysis of the data set from Rice University, this algorithm replicates data intelligently across different time slots, so that it not only significantly improves data availability, but also achieves satisfactory load balancing and storage diversification. We implement the Uno system on a heterogeneous testbed composed of both host servers and mobile devices, and demonstrate the programmability of Uno through the implementation and evaluation of two sample applications, Uno@Home and Uno@Sense.
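    The replication algorithm itself is not specified in the abstract; the following is a purely hypothetical Python sketch of slot-based replica placement driven by access patterns and device availability, with all names invented for illustration.

    ```python
    def choose_replicas(file_id, access_pattern, device_uptime, k=2):
        """Hypothetical slot-based replica placement (not Uno's actual algorithm):
        for the time slots in which a file is usually accessed, pick the k devices
        most likely to be online during those slots."""
        # access_pattern: file_id -> {time_slot: access_frequency}
        # device_uptime:  device  -> {time_slot: probability_of_being_online}
        hot_slots = [slot for slot, freq in access_pattern[file_id].items() if freq > 0]
        scores = {
            device: sum(uptime.get(slot, 0.0) for slot in hot_slots)
            for device, uptime in device_uptime.items()
        }
        # Highest expected availability first; take the top k devices.
        return sorted(scores, key=scores.get, reverse=True)[:k]
    ```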

    About logical clocks for distributed systems
