Replication-Aware Linearizability
Geo-distributed systems often replicate data at multiple locations to achieve
availability and performance despite network partitions. These systems must
accept updates at any replica and propagate these updates asynchronously to
every other replica. Conflict-Free Replicated Data Types (CRDTs) provide a
principled approach to the problem of ensuring that replicas are eventually
consistent despite the asynchronous delivery of updates.
We address the problem of specifying and verifying CRDTs, introducing a new
correctness criterion called Replication-Aware Linearizability. This criterion
is inspired by linearizability, the de-facto correctness criterion for
(shared-memory) concurrent data structures. We argue that this criterion is
both simple to understand and a good fit for most known CRDT implementations. We
provide a proof methodology to show that a CRDT satisfies replication-aware
linearizability, which we apply to a wide range of implementations. Finally,
we show that our criterion can be leveraged to reason modularly about the
composition of CRDTs.
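To make the CRDT setting concrete, here is a minimal sketch (not from the paper) of a classic state-based CRDT, a grow-only counter: each replica accepts increments locally and replicas converge by merging with pointwise maxima, which is idempotent, commutative, and associative.

```python
# Sketch of a state-based CRDT: a grow-only counter (G-Counter).
# Each replica owns one slot in a vector of counts; merge takes
# pointwise maxima, so concurrently applied updates commute and
# all replicas eventually converge to the same value.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        # Updates are accepted locally, without synchronization.
        self.counts[self.replica_id] += 1

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Pointwise max: idempotent, commutative, associative.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()   # two updates at replica 0
b.increment()                  # one concurrent update at replica 1
a.merge(b); b.merge(a)         # asynchronous propagation
assert a.value() == b.value() == 3
```

Replication-aware linearizability would then ask that this replicated behavior be explainable as a linearization of the updates against a sequential counter specification.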
Framework for Real-time collaboration on extensive Data Types using Strong Eventual Consistency
Real-time collaboration is a special case of collaboration where users work on the same artefact simultaneously and are aware of each other's changes in real time. Shared data should remain available and consistent while being physically distributed across several systems. Strong Consistency is one approach: it enforces a total order of operations using mechanisms such as locking, which however introduces a bottleneck. In the last decade, concurrency-control algorithms have been studied with the goal of keeping all replicas convergent without locking or synchronization. Operational Transformation and Conflict-free Replicated Data Types (CRDTs) are widely used for this purpose. However, the complexity of these strategies makes them hard to integrate into large software such as modeling editors, especially for complex data types like graphs. Current implementations only handle linear data such as text. In this thesis, we present CollabServer, a framework for building collaborative environments. It features a CRDT implementation for complex data types such as graphs and makes it possible to build other data structures.
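The step from linear data to graphs can be illustrated with a deliberately simplified graph CRDT (not CollabServer's actual design): vertices and edges kept as grow-only sets that merge by union, so concurrent additions at different replicas commute.

```python
# Sketch of a grow-only graph CRDT: vertices and edges are G-Sets
# merged by set union. Since nothing is ever removed, every edge's
# endpoints remain present after any merge, and union makes merges
# idempotent, commutative, and associative.
# (CollabServer's graph CRDT is richer; this only illustrates the idea.)

class GGraph:
    def __init__(self):
        self.vertices = set()
        self.edges = set()

    def add_vertex(self, v):
        self.vertices.add(v)

    def add_edge(self, u, v):
        # Local precondition: both endpoints must already exist here.
        if u in self.vertices and v in self.vertices:
            self.edges.add((u, v))

    def merge(self, other):
        # Union is the join of the (vertices, edges) lattice.
        self.vertices |= other.vertices
        self.edges |= other.edges

g1, g2 = GGraph(), GGraph()
g1.add_vertex("a"); g1.add_vertex("b"); g1.add_edge("a", "b")
g2.add_vertex("c")                      # concurrent edit elsewhere
g1.merge(g2); g2.merge(g1)
assert g1.vertices == g2.vertices == {"a", "b", "c"}
```

Supporting removals (as a modeling editor needs) is where the real complexity lies, e.g. pairing each grow-only set with a tombstone set.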
Cache Equalizer: A Cache Pressure Aware Block Placement Scheme for Large-Scale Chip Multiprocessors
This paper describes Cache Equalizer (CE), a novel distributed cache management scheme for large-scale chip multiprocessors (CMPs). Our work is motivated by the large asymmetry in cache-set usage. CE decouples the physical locations of cache blocks from their addresses in order to reduce misses caused by destructive interference. Temporal pressure at the on-chip last-level cache is continuously collected at a group granularity (a group comprises several cache sets) and periodically recorded at the memory controller to guide placement. An incoming block is consequently placed in the cache group that exhibits the minimum pressure. CE provides Quality of Service (QoS) by robustly offering better performance than the baseline shared NUCA cache. Simulation results using a full-system simulator demonstrate that CE outperforms shared NUCA caches by an average of 15.5%, and by as much as 28.5%, for the benchmark programs we examined. Furthermore, our evaluations show that CE also outperforms related CMP cache designs.
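The core placement idea can be sketched abstractly: keep a pressure counter per group, place each incoming block in the least-pressured group, and record the mapping so lookups can still find the relocated block. The names and granularity below are illustrative, not the paper's exact hardware design.

```python
# Sketch of pressure-aware block placement in the spirit of Cache
# Equalizer: location is decoupled from address, and placement is
# steered toward the group currently under the least pressure.

class PressureAwareCache:
    def __init__(self, n_groups):
        self.pressure = [0] * n_groups   # per-group pressure (collected periodically in the paper)
        self.location = {}               # block address -> group (the decoupling table)

    def place(self, addr):
        # Pick the minimum-pressure group instead of hashing the address.
        group = min(range(len(self.pressure)), key=lambda g: self.pressure[g])
        self.location[addr] = group
        self.pressure[group] += 1
        return group

    def lookup(self, addr):
        # Lookups consult the mapping, since address no longer implies location.
        return self.location.get(addr)

cache = PressureAwareCache(n_groups=4)
g0 = cache.place(0x1000)
g1 = cache.place(0x2000)
assert g0 != g1  # successive placements spread across groups
```

In hardware, the mapping table and pressure collection are the main costs this scheme trades for fewer conflict misses.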
Update Consistency for Wait-free Concurrent Objects
In large scale systems such as the Internet, replicating data is an essential
feature in order to provide availability and fault-tolerance. Attiya and Welch
proved that using strong consistency criteria such as atomicity is costly as
each operation may need an execution time linear in the latency of the
communication network. Weaker consistency criteria like causal consistency and
PRAM consistency do not ensure convergence: the replicas are not guaranteed to
converge towards a unique state. Eventual consistency guarantees
that all replicas eventually converge when the participants stop updating.
However, it fails to fully specify the semantics of the operations on shared
objects and requires additional non-intuitive and error-prone distributed
specification techniques. This paper introduces and formalizes a new
consistency criterion, called update consistency, that requires the state of a
replicated object to be consistent with a linearization of all the updates. In
other words, whereas atomicity imposes a linearization of all of the
operations, this criterion imposes this only on updates. Consequently some read
operations may return outdated values. Update consistency is stronger than
eventual consistency, so we can replace eventually consistent objects with
update consistent ones in any program. Finally, we prove that update
consistency is universal, in the sense that any object can be implemented under
this criterion in a distributed system where any number of nodes may crash.
Comment: appears in the International Parallel and Distributed Processing
Symposium, May 2015, Hyderabad, India.
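The criterion's key idea, a linearization of updates only, can be sketched as follows (an illustration, not the paper's construction): each replica keeps the set of updates it has seen and computes its state by replaying them in one deterministic total order. Reads may be outdated while updates are in flight, but once all updates are delivered, every replica's state is consistent with the same linearization of the updates.

```python
# Sketch of an update-consistent register: updates are tagged with
# (timestamp, replica_id) and totally ordered by that pair, so all
# replicas that have seen the same updates compute the same state.

class UCRegister:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.clock = 0
        self.updates = set()  # set of (timestamp, replica_id, value)

    def write(self, value):
        self.clock += 1
        self.updates.add((self.clock, self.replica_id, value))

    def read(self):
        # May return an outdated value if some updates are still in flight;
        # only updates are linearized, not reads.
        if not self.updates:
            return None
        return max(self.updates)[2]  # last write in the linearization wins

    def deliver(self, other):
        # Asynchronous propagation: merge the sets of known updates.
        self.updates |= other.updates
        self.clock = max(self.clock, max(t for t, _, _ in self.updates))

r1, r2 = UCRegister(1), UCRegister(2)
r1.write("x"); r2.write("y")     # concurrent writes, both at timestamp 1
r1.deliver(r2); r2.deliver(r1)
assert r1.read() == r2.read()    # both converge to one linearization
```

The tie on equal timestamps is broken by replica id, which is what makes the replay order a total order.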
Reconfigurable Lattice Agreement and Applications
Reconfiguration is one of the central mechanisms in distributed systems. Due to failures and connectivity disruptions, the very set of service replicas (or servers) and their roles in the computation may have to be reconfigured over time. To provide the desired level of consistency and availability to applications running on top of these servers, the clients of the service should be able to reach some form of agreement on the system configuration. We observe that this agreement is naturally captured via a lattice partial order on the system states. We propose an asynchronous implementation of reconfigurable lattice agreement that implies elegant reconfigurable versions of a large class of lattice abstract data types, such as max-registers and conflict detectors, as well as popular distributed programming abstractions, such as atomic snapshot and commit-adopt.
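The lattice view can be illustrated with the smallest example mentioned in the abstract, a max-register: states form a join-semilattice ordered by numeric comparison, joins are taken with max, and every output is a join of proposals seen so far, so any two outputs are comparable, as lattice agreement requires. This is only a single-process sketch of the lattice structure, not the asynchronous agreement protocol itself.

```python
# Sketch of a join-semilattice ADT: a max-register. The state only
# moves up the lattice (via join), so outputs are monotonically
# non-decreasing and therefore pairwise comparable.

def join(a, b):
    # Join of the max-register lattice is simply max.
    return max(a, b)

class MaxRegister:
    def __init__(self):
        self.state = 0  # bottom element of the lattice

    def propose(self, value):
        # Learn the value by joining it into the current state;
        # the output is a join of all proposals seen so far.
        self.state = join(self.state, value)
        return self.state

r = MaxRegister()
outs = [r.propose(v) for v in (3, 1, 7, 5)]
assert outs == [3, 3, 7, 7]  # non-decreasing, hence pairwise comparable
```

Reconfiguration then amounts to agreeing, in the same join-based fashion, on ever-growing descriptions of the system configuration.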