Linearizability and Causality
Most work on the verification of concurrent objects for shared memory assumes sequential consistency, but most multicore processors provide only weak memory models that do not guarantee it. Furthermore, most verification efforts focus on the linearizability of concurrent objects, yet some existing implementations, optimized to run on weak memory models, are not linearizable.
In this paper, we address these problems by introducing causal linearizability, a correctness condition for concurrent objects running on weak memory models. Like linearizability itself, causal linearizability enables concurrent objects to be composed, under weak constraints on the client's behaviour. We specify these constraints via a notion of operation-race freedom: programs satisfying this property are guaranteed to behave as if their shared objects were in fact linearizable.
We apply these ideas to objects from the Linux kernel, optimized to run on TSO, the memory model of the x86 processor family.
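For intuition, the classic store-buffering litmus test illustrates the kind of TSO behaviour at issue: without fences, each thread's store can linger in its local store buffer while the other thread loads, producing an outcome that no sequentially consistent execution allows. The C11 sketch below is illustrative only; the variable and thread names are ours, not the paper's.

/* Store-buffering litmus test: on TSO, each thread's store may sit in
 * its local store buffer while the other thread loads, so both loads
 * can return 0 -- an outcome impossible under sequential consistency. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int x, y;
static int r0, r1;

static void *thread0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed); /* may be buffered */
    r0 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

static void *thread1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, thread0, NULL);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("r0=%d r1=%d\n", r0, r1); /* r0 == 0 && r1 == 0 is allowed on TSO */
    return 0;
}

The weak outcome may take many runs (or a litmus-testing tool) to observe on real x86 hardware, but it is permitted by TSO and forbidden under sequential consistency.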
Maintaining consistency in distributed systems
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
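As a concrete instance of the mutual-exclusion style mentioned above, here is a minimal C sketch using a pthread mutex to protect a shared counter; the names shared_counter, counter_inc, and counter_read are illustrative, not drawn from the paper.

/* Minimal sketch: a shared counter protected by a pthread mutex.
 * All names here are illustrative only. */
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter;

void counter_inc(void) {
    pthread_mutex_lock(&lock);   /* enter the critical section */
    shared_counter++;            /* no concurrent access possible here */
    pthread_mutex_unlock(&lock); /* leave the critical section */
}

long counter_read(void) {
    pthread_mutex_lock(&lock);
    long v = shared_counter;
    pthread_mutex_unlock(&lock);
    return v;
}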
Fisheye Consistency: Keeping Data in Synch in a Georeplicated World
Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions in the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but also provides a generic, provably correct solution of direct relevance to modern georeplicated systems.
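The following minimal C sketch illustrates the proximity-graph idea under assumed representations (nodes as small integers, adjacency as a symmetric boolean matrix); the paper defines the graph abstractly, so all names here are hypothetical.

/* Hypothetical encoding of a proximity graph: nodes are small integers
 * and adjacency is a symmetric boolean matrix. required_level() returns
 * the consistency guarantee the two nodes' operations must satisfy. */
#include <stdbool.h>

#define MAX_NODES 64

typedef enum { CAUSAL, SEQUENTIAL } level_t;

static bool proximity[MAX_NODES][MAX_NODES]; /* assumed symmetric */

level_t required_level(int a, int b) {
    /* Fisheye consistency: neighbours in the proximity graph see each
     * other's operations sequentially consistently; all other pairs
     * need only causal consistency. */
    return proximity[a][b] ? SEQUENTIAL : CAUSAL;
}

The design point is that the strong (and expensive) synchronization is paid only between neighbours, while distant nodes settle for the cheaper causal guarantee.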
On Verifying Causal Consistency
Causal consistency is one of the most widely adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of automatically verifying whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether a single execution is causally consistent, which is relevant for developing testing and bug-finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent.
We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption.
Comment: extended version of POPL 201
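To make problem (1) concrete, the sketch below models a history over the read-write memory abstraction and checks one necessary (but far from sufficient) condition: every read must return a value that some write produced. Types and names are assumptions for illustration, not from the paper.

/* A history over the read-write memory abstraction, plus one necessary
 * (but far from sufficient) condition for causal consistency: every
 * read must return a value that some write to the same key produced. */
#include <stdbool.h>
#include <stddef.h>

typedef enum { OP_READ, OP_WRITE } op_kind_t;

typedef struct {
    op_kind_t kind;
    int key;
    int value; /* value written, or value returned by the read */
} op_t;

bool reads_have_matching_writes(const op_t *history, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (history[i].kind != OP_READ)
            continue;
        bool found = false;
        for (size_t j = 0; j < n && !found; j++)
            found = history[j].kind == OP_WRITE &&
                    history[j].key == history[i].key &&
                    history[j].value == history[i].value;
        if (!found)
            return false; /* read of a value never written */
    }
    return true;
}

A complete checker must additionally exhibit a causal order explaining every read; finding such an order is what makes problem (1) NP-complete, and the data-independence assumption is what lets the paper circumvent these negative results.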