
    Causal Consistent Databases

    Many consistency criteria have been considered for databases, and causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides an ordering of related operations. Causal consistency requires that all writes that are potentially causally related be seen in the same order by all processes. Causal consistency is a weaker criterion than sequential consistency: there exist executions that are causally consistent but not sequentially consistent, whereas every execution satisfying sequential consistency is also causally consistent. Furthermore, causal consistency supports non-blocking operations; i.e., processes may complete read or write operations without waiting for a global computation. Causal consistency therefore overcomes the primary limitation of stronger criteria: communication latency. Additionally, several application semantics, e.g. collaborative tools, are precisely captured by causal consistency. In this paper, we review the state of the art of causally consistent databases, discuss the features, functionalities and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causally consistent databases and identify limitations of the causal consistency model.
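
    A minimal sketch (ours, not from the paper; helper names are illustrative) of the ordering constraint described above, using vector clocks, a standard device for deciding whether two writes are potentially causally related: only related writes must be observed in the same order by every process, while concurrent writes may be seen in different orders.

        # Sketch only: vector clocks capture potential causality between writes.
        def happens_before(vc_a, vc_b):
            """True if vc_a causally precedes vc_b (component-wise <=, and not equal)."""
            return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

        def concurrent(vc_a, vc_b):
            """Writes with incomparable clocks are concurrent; causal consistency
            lets different processes observe them in different orders."""
            return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

        w1 = [1, 0, 0]   # write of x=a issued at process 0
        w2 = [0, 1, 0]   # write of x=b issued at process 1, unaware of w1
        print(concurrent(w1, w2))                    # True: readers may see a,b or b,a
        print(happens_before([1, 0, 0], [1, 1, 0]))  # True: this order must hold everywhere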

    Brief Announcement: Update Consistency in Partitionable Systems

    Data replication is essential to ensure the reliability, availability and fault-tolerance of massive distributed applications over large-scale systems such as the Internet. However, these systems are prone to partitioning, which by Brewer's CAP theorem [1] makes it impossible to use a strong consistency criterion like atomicity. Eventual consistency [2] guarantees that all replicas eventually converge to a common state when the participants stop updating. However, it fails to fully specify shared objects and requires additional non-intuitive and error-prone distributed specification techniques that must take into account all possible concurrent histories of updates to specify this common state [3]. This approach, which can lead to specifications as complicated as the implementations themselves, is limited by a more serious issue. The concurrent specification of objects uses the notion of concurrent events. In message-passing systems, two events are concurrent if they are enforced by different processes and each process enforced its event before it received the notification message from the other process. In other words, the notion of concurrency depends on the implementation of the object, not on its specification. Consequently, the final user may not know whether two events are concurrent without explicitly tracking the messages exchanged by the processes. A specification should be independent of the system on which it is implemented. We believe that an object should be completely specified by two facets: its abstract data type, which characterizes its sequential executions, and a consistency criterion, which defines how it is supposed to behave in a distributed environment. Not only does a sequential specification help address the problem of intention, it also allows the use of the well-studied and well-understood notions of languages and automata. This makes it possible to apply all the tools developed for sequential systems, from their simple definition using structures and classes to the most advanced techniques like model checking and formal verification. Eventual consistency (EC) imposes no constraint on the convergent state, which thus depends very little on the sequential specification. For example, an implementation that ignores all the updates is eventually consistent, as all replicas converge to the initial state. We propose a new consistency criterion, update consistency (UC), in which the convergent state must be obtained by a total ordering of the updates that contains the sequential order of each process.
    Comment: in DISC14 - 28th International Symposium on Distributed Computing, Oct 2014, Austin, United States
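
    An illustrative sketch of the criterion (ours, not the authors' construction): one way to satisfy update consistency is to replay all updates in a single total order, e.g. by (Lamport clock, process id), which extends each process's sequential order; every replica that has received the same updates then converges to the same, meaningfully specified state.

        # Sketch only: replaying updates in one total order that extends every
        # process's local order yields an update-consistent convergent state.
        def converge(updates, initial=None):
            """updates: (lamport_clock, pid, op) triples; op maps state -> state."""
            state = initial
            for _, _, op in sorted(updates, key=lambda u: (u[0], u[1])):
                state = op(state)
            return state

        ups = [(1, 0, lambda s: "a"),   # process 0 writes "a"
               (1, 1, lambda s: "b")]   # process 1 writes "b" concurrently
        print(converge(ups))  # "b" at every replica; an EC store that ignored all
                              # updates would also converge, but only to the initial state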

    Large $N_c$ Expansion and the Parity Violating $\pi, N, \Delta$ Couplings

    In the limit of large $N_c$ we first consider the $N_c$ ordering of the various parity-violating $\pi, N, \Delta$ couplings. We then derive the relations among these couplings and consistency relations from the stability of these couplings under chiral loop corrections, with and without the mass splitting between $N$ and $\Delta$. In particular, we find that $h_\Delta = -\frac{3}{\sqrt{5}} h_\pi$ in the large $N_c$ limit, which correctly reproduces the relative sign and magnitude of the "DDH" values for these PV couplings.

    Improving the Performance and Endurance of Persistent Memory with Loose-Ordering Consistency

    Persistent memory provides high-performance data persistence at main memory. Memory writes need to be performed in strict order to satisfy storage consistency requirements and enable correct recovery from system crashes. Unfortunately, adhering to such a strict order significantly degrades system performance and persistent memory endurance. This paper introduces a new mechanism, Loose-Ordering Consistency (LOC), that satisfies the ordering requirements with significantly lower performance and endurance loss. LOC consists of two key techniques. First, Eager Commit eliminates the need to perform a persistent commit record write within a transaction. We do so by ensuring that we can determine the status of all committed transactions during recovery by storing the necessary metadata statically with the blocks of data written to memory. Second, Speculative Persistence relaxes the write ordering between transactions by allowing writes to be speculatively written to persistent memory. A speculative write is made visible to software only after its associated transaction commits. To enable this, our mechanism supports the tracking of committed transaction IDs and multi-versioning in the CPU cache. Our evaluations show that LOC reduces the average performance overhead of memory persistence from 66.9% to 34.9% and the memory write traffic overhead from 17.1% to 3.4% on a variety of workloads.
    Comment: This paper has been accepted by IEEE Transactions on Parallel and Distributed Systems
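
    A conceptual sketch of the Eager Commit idea (ours, heavily simplified; not the paper's hardware design): instead of persisting a separate commit record, each block written to persistent memory carries its transaction ID and the transaction's block count, so recovery can decide which transactions committed from the data blocks alone.

        # Sketch only: per-block metadata replaces the separate commit record.
        persistent_log = []   # models blocks that actually reached persistent memory

        def eager_commit(tx_id, blocks):
            n = len(blocks)
            for b in blocks:
                persistent_log.append((tx_id, n, b))   # metadata travels with the data

        def recover():
            """A transaction is treated as committed iff all of its blocks are present."""
            counts = {}
            for tx_id, n, _ in persistent_log:
                counts[(tx_id, n)] = counts.get((tx_id, n), 0) + 1
            return {tx_id for (tx_id, n), seen in counts.items() if seen == n}

        eager_commit(1, ["A", "B"])          # transaction 1 fully persisted
        persistent_log.append((2, 2, "C"))   # crash mid-transaction 2: only 1 of 2 blocks
        print(recover())                     # {1}: transaction 2 is correctly discarded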

    Takeuti's Well-ordering Proof: An Accessible Reconstruction

    G. Gentzen's 1938 proof of the consistency of pure arithmetic was hailed as a success for finitism and constructivism, but his proof requires induction along ordinal notations in Cantor normal form up to the first epsilon number, ε₀. This left the task of giving a finitistically acceptable proof of the well-ordering of those ordinal notations, without which Gentzen's proof could hardly be seen as a success for finitism. In his seminal book Proof Theory, G. Takeuti provides such a proof. After a brief philosophical introduction, we provide a reconstruction of Takeuti's proof, including corrections, comments, re-organization and notational adjustments for the sake of clarity. The result is a much longer, but much more tractable, proof of the well-ordering of ordinal notations in Cantor normal form less than ε₀ that nevertheless follows Takeuti's strategy closely. We end with some more general comments about that proof strategy and about the notion of accessibility.
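
    For reference (standard background, not part of the paper's argument): the ordinal notations in question denote the ordinals below the first epsilon number, each written in its unique Cantor normal form, and ε₀ is the least fixed point of base-ω exponentiation:

        \[
          \alpha = \omega^{\beta_1}\cdot n_1 + \cdots + \omega^{\beta_k}\cdot n_k,
          \qquad \beta_1 > \cdots > \beta_k,\ \ n_i \in \mathbb{N}_{>0},\ \ \beta_i < \alpha,
        \]
        \[
          \varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}
          = \text{the least } \alpha \text{ such that } \omega^{\alpha} = \alpha.
        \]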