Archiving the Relaxed Consistency Web
The historical, cultural, and intellectual importance of archiving the web
has been widely recognized. Today, all countries with high Internet penetration
rates have established high-profile archiving initiatives to crawl and archive
the fast-disappearing web content for long-term use. As web technologies
evolve, established web archiving techniques face challenges. This paper
focuses on the potential impact of the relaxed consistency web design on
crawler-driven web archiving. Relaxed-consistency websites may disseminate,
albeit ephemerally, inaccurate and even contradictory information. If captured
and preserved in the web archives as historical records, such information will
degrade the overall archival quality. To assess the extent of such quality
degradation, we build a simplified feed-following application and simulate its
operation with synthetic workloads. The results indicate that a non-trivial
portion of a relaxed consistency web archive may contain observable
inconsistency, and the inconsistency window may extend significantly longer
than that observed at the data store. We discuss the nature of such quality
degradation and propose a few possible remedies.
Comment: 10 pages, 6 figures, CIKM 201
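The quality-degradation mechanism is easy to reproduce in miniature. The toy simulation below (illustrative only; the paper's feed-following application and synthetic workloads are more elaborate, and all names here are assumptions) shows how a crawler reading replicas that converge asynchronously can capture inconsistent snapshots:

    # Illustrative sketch (not the paper's code): a feed whose two replicas
    # converge only after a propagation delay, and a crawler that may capture
    # an inconsistent snapshot during that window.
    import random

    class Replica:
        def __init__(self):
            self.entries = []          # feed entries visible at this replica

    PROPAGATION_DELAY = 5              # ticks before an update reaches replica B

    a, b = Replica(), Replica()
    pending = []                       # (deliver_at_tick, entry) queued for b
    inconsistent_ticks = 0

    for tick in range(1000):
        if random.random() < 0.2:      # a writer posts to replica A
            entry = f"post@{tick}"
            a.entries.append(entry)
            pending.append((tick + PROPAGATION_DELAY, entry))
        # asynchronously propagate queued updates to replica B
        while pending and pending[0][0] <= tick:
            b.entries.append(pending.pop(0)[1])
        # a crawler reading both replicas in one pass sees a mixed snapshot
        if a.entries != b.entries:
            inconsistent_ticks += 1

    print(f"observable inconsistency in {inconsistent_ticks / 1000:.0%} of crawl opportunities")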
PaRiS: Causally Consistent Transactions with Non-blocking Reads and Partial Replication
Geo-replicated data platforms are at the backbone of several large-scale
online services. Transactional Causal Consistency (TCC) is an attractive
consistency level for building such platforms. TCC avoids many anomalies of
eventual consistency, eschews the synchronization costs of strong consistency,
and supports interactive read-write transactions. Partial replication is
another attractive design choice for building geo-replicated platforms, as it
increases the storage capacity and reduces update propagation costs. This paper
presents PaRiS, the first TCC system that supports partial replication and
implements non-blocking parallel read operations, whose latency is paramount
for the performance of read-intensive applications. PaRiS relies on a novel
protocol to track dependencies, called Universal Stable Time (UST). By means of
a lightweight background gossip process, UST identifies a snapshot of the data
that has been installed by every DC in the system. Hence, transactions can
consistently read from such a snapshot on any server in any replication site
without having to block. Moreover, PaRiS requires only one timestamp to track
dependencies and define transactional snapshots, thereby achieving resource
efficiency and scalability. We evaluate PaRiS on a large-scale AWS deployment
composed of up to 10 replication sites. We show that PaRiS scales well with the
number of DCs and partitions, while being able to handle larger data-sets than
existing solutions that assume full replication. We also demonstrate a
performance gain of non-blocking reads vs. a blocking alternative (up to 1.47x
higher throughput with 5.91x lower latency for read-dominated workloads and up
to 1.46x higher throughput with 20.56x lower latency for write-heavy
workloads).
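As a rough illustration of the Universal Stable Time idea described above (names and structure are assumptions, not the PaRiS implementation), each partition advertises the highest timestamp it has locally installed; a gossip layer takes the minimum across all sites, and any read at or below that timestamp can be answered without blocking:

    # Hypothetical sketch of UST-style stabilization; illustrative only.

    class Partition:
        def __init__(self):
            self.versions = {}         # key -> list of (timestamp, value), ascending
            self.installed_ts = 0      # highest timestamp locally installed

        def read(self, key, ust):
            """Non-blocking read: return the newest version with ts <= UST."""
            for ts, value in reversed(self.versions.get(key, [])):
                if ts <= ust:
                    return value
            return None

    def universal_stable_time(dcs):
        # A timestamp installed by every partition in every DC: the snapshot
        # at this time is complete everywhere, so reads below it never block.
        return min(p.installed_ts for dc in dcs for p in dc)

    # usage: dcs is a list of DCs, each a list of Partition objects
    dcs = [[Partition(), Partition()], [Partition(), Partition()]]
    ust = universal_stable_time(dcs)
    value = dcs[0][0].read("user:42", ust)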
CATS: linearizability and partition tolerance in scalable and self-organizing key-value stores
Distributed key-value stores provide scalable, fault-tolerant, and self-organizing
storage services, but fall short of guaranteeing linearizable consistency
in partially synchronous, lossy, partitionable, and dynamic networks, when data
is distributed and replicated automatically by the principle of consistent hashing.
This paper introduces consistent quorums as a solution for achieving atomic
consistency. We present the design and implementation of CATS, a distributed
key-value store which uses consistent quorums to guarantee linearizability and partition tolerance in such adverse and dynamic network conditions. CATS is
scalable, elastic, and self-organizing; key properties for modern cloud storage
middleware. Our system shows that consistency can be achieved with practical
performance and modest throughput overhead (5%) for read-intensive workloads.
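The core of the consistent-quorum idea can be sketched as follows (a toy illustration under assumed majority quorums and view identifiers, not the CATS protocol itself): replies count toward a quorum only if they were produced under the same replication-group view, so reconfiguration cannot silently split a quorum:

    # Toy sketch of a consistent quorum; illustrative, not the CATS code.

    def consistent_quorum(replies, group_size):
        """replies: list of (view_id, payload) from replicas of one key range."""
        majority = group_size // 2 + 1
        by_view = {}
        for view_id, payload in replies:
            by_view.setdefault(view_id, []).append(payload)
        for view_id, payloads in by_view.items():
            if len(payloads) >= majority:
                return view_id, payloads   # a quorum within a single view
        return None                        # no consistent quorum: retry

    # Three replicas, two of which agree on view 7: the quorum is consistent.
    print(consistent_quorum([(7, "v1"), (7, "v1"), (6, "v0")], group_size=3))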
Remove-Win: a Design Framework for Conflict-free Replicated Data Collections
Internet-scale distributed systems often replicate data within and across
data centers to provide low latency and high availability despite node and
network failures. Replicas are required to accept updates without coordination
with each other, and the updates are then propagated asynchronously. This
brings the issue of conflict resolution among concurrent updates, which is
often challenging and error-prone. The Conflict-free Replicated Data Type
(CRDT) framework provides a principled approach to address this challenge.
This work focuses on a special type of CRDT, namely the Conflict-free
Replicated Data Collection (CRDC), e.g. list and queue. The CRDC can have
complex and compound data items, which are organized in structures of rich
semantics. Complex CRDCs can greatly ease the development of upper-layer
applications, but they also make conflict resolution notoriously difficult.
This explains why existing CRDC designs are tricky and hard to generalize to
other data types. A design framework is sorely needed to guide the systematic
design of new CRDCs.
To address the challenges above, we propose the Remove-Win Design Framework.
The remove-win strategy for conflict resolution is simple but powerful. The
remove operation just wipes out the data item, no matter how complex the value
is. The user of the CRDC only needs to specify conflict resolution for
non-remove operations. This resolution is deconstructed into three basic cases,
which are left as open terms in the CRDC design skeleton. Stubs containing
user-specified conflict resolution logic are plugged into the skeleton to
obtain concrete CRDC designs. We demonstrate the effectiveness of our design
framework via a case study of designing a conflict-free replicated priority
queue. Performance measurements also show the efficiency of the design derived
from our design framework.
Comment: revised after submission
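To make the remove-win strategy concrete, here is a minimal sketch of a remove-win replicated map (illustrative only, with Lamport-style timestamps standing in for real causal metadata; the paper's designs and its priority-queue case study are more involved):

    # Illustrative remove-win map: a remove wipes the item, no matter how
    # complex the value, and it wins against any concurrent non-remove update.

    class RemoveWinMap:
        def __init__(self):
            self.items = {}        # key -> (timestamp, value)
            self.tombstones = {}   # key -> timestamp of the latest remove

        def update(self, key, value, ts, merge):
            # remove wins: drop the update if a remove is concurrent or newer
            if self.tombstones.get(key, -1) >= ts:
                return
            if key in self.items:
                old_ts, old_val = self.items[key]
                # user-specified conflict resolution plugs in here (the "stub")
                self.items[key] = (max(old_ts, ts), merge(old_val, value))
            else:
                self.items[key] = (ts, value)

        def remove(self, key, ts):
            self.tombstones[key] = max(self.tombstones.get(key, -1), ts)
            if key in self.items and self.items[key][0] <= ts:
                del self.items[key]

    m = RemoveWinMap()
    m.update("x", 1, ts=1, merge=max)
    m.remove("x", ts=2)              # concurrent with the next update...
    m.update("x", 5, ts=2, merge=max)
    print(m.items)                   # {} -- the remove wins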
Evaluating dotted version vectors in Riak
The NoSQL movement is rapidly increasing in importance, acceptance and usage in major (web) applications, which need the partition tolerance and availability of the CAP theorem for scalability purposes, thus sacrificing the consistency side. With this approach, paradigms such as Eventual Consistency became more widespread. An eventually consistent system must handle data divergence and conflicts, which have to be carefully accounted for. Some systems have tried to use classic Version Vectors (VV) to track causality, but these reveal either scalability problems or loss of accuracy (when pruning is used to prevent vector growth).
Dotted Version Vectors (DVV) are a novel mechanism for dealing with data versioning in eventually consistent systems. They allow accurate causality tracking and scalability in both the number of clients and servers, while limiting vector size to the replication degree.
In this paper we briefly describe the challenges faced when incorporating DVV into Riak (a distributed key-value store), evaluate its behavior and performance, and discuss the advantages and disadvantages of this specific implementation.
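In outline, a DVV pairs a plain version vector (the causal context) with a single "dot" naming the exact write event. A minimal sketch of the resulting sibling-merge logic (illustrative, not the Riak implementation; the context here excludes the version's own dot):

    # Compact sketch of the Dotted Version Vector idea.

    def dominates(context, dot):
        """True if the context has already seen the event identified by `dot`."""
        server_id, counter = dot
        return context.get(server_id, 0) >= counter

    def sync(siblings_a, siblings_b):
        """Merge two sets of (dot, context, value): keep a version only if
        no context on the other side subsumes its dot."""
        keep, seen = [], set()
        for side, other in ((siblings_a, siblings_b), (siblings_b, siblings_a)):
            for dot, ctx, val in side:
                if dot not in seen and not any(
                        dominates(octx, dot) for _, octx, _ in other):
                    seen.add(dot)
                    keep.append((dot, ctx, val))
        return keep

    # Replica "a" wrote v2 after its own first event; replica "b" wrote v1
    # concurrently, so both siblings survive the merge.
    a_versions = [(("a", 2), {"a": 1}, "v2")]
    b_versions = [(("b", 1), {}, "v1")]
    print(sync(a_versions, b_versions))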
The Weakest Failure Detector for Eventual Consistency
In its classical form, a consistent replicated service requires all replicas
to witness the same evolution of the service state. Assuming a message-passing
environment with a majority of correct processes, the necessary and sufficient
information about failures for implementing a general state machine replication
scheme ensuring consistency is captured by the Ω failure detector. This
paper shows that in such a message-passing environment, Ω is also the
weakest failure detector to implement an eventually consistent replicated
service, where replicas are expected to agree on the evolution of the service
state only after some (a priori unknown) time. In fact, we show that Ω
is the weakest to implement eventual consistency in any message-passing
environment, i.e., under any assumption on when and where failures might occur.
Ensuring (strong) consistency in any environment requires, in addition to
Ω, the quorum failure detector Σ. Our paper thus captures, for
the first time, an exact computational difference between building a
replicated state machine that ensures consistency and one that only ensures
eventual consistency.
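For intuition, Ω can be approximated in an eventually timely system with heartbeats; the sketch below (an assumption-laden toy, not a construction from the paper) elects the smallest-id process recently heard from, which eventually stabilizes on the same correct leader at every correct process:

    # Sketch of the Omega abstraction: a local leader estimate that may be
    # wrong for a while but eventually agrees everywhere on a correct process.
    import time

    class OmegaDetector:
        """Heartbeat-based approximation of Omega, assuming eventual timeliness."""
        def __init__(self, timeout=2.0):
            self.timeout = timeout
            self.last_heartbeat = {}   # process_id -> last time it was heard from

        def heartbeat(self, process_id):
            self.last_heartbeat[process_id] = time.monotonic()

        def leader(self):
            now = time.monotonic()
            alive = [p for p, t in self.last_heartbeat.items()
                     if now - t < self.timeout]
            return min(alive) if alive else None

    d = OmegaDetector(timeout=2.0)
    d.heartbeat(3); d.heartbeat(1)
    print(d.leader())   # 1, while both heartbeats are recent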
Event-Driven Network Programming
Software-defined networking (SDN) programs must simultaneously describe
static forwarding behavior and dynamic updates in response to events.
Event-driven updates are critical to get right, but difficult to implement
correctly due to the high degree of concurrency in networks. Existing SDN
platforms offer weak guarantees that can break application invariants, leading
to problems such as dropped packets, degraded performance, security violations,
etc. This paper introduces EVENT-DRIVEN CONSISTENT UPDATES that are guaranteed
to preserve well-defined behaviors when transitioning between configurations in
response to events. We propose NETWORK EVENT STRUCTURES (NESs) to model
constraints on updates, such as which events can be enabled simultaneously and
causal dependencies between events. We define an extension of the NetKAT
language with mutable state, give semantics to stateful programs using NESs,
and discuss provably-correct strategies for implementing NESs in SDNs. Finally,
we evaluate our approach empirically, demonstrating that it gives well-defined
consistency guarantees while avoiding expensive synchronization and packet
buffering.
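The flavor of a network event structure can be captured in a few lines (hypothetical names, not the paper's formal definition): events carry causal dependencies and conflict groups, and an event is enabled only when all of its causes have occurred and no conflicting event has:

    # Illustrative sketch of a network event structure (NES).

    class EventStructure:
        def __init__(self, causes, conflicts):
            self.causes = causes        # event -> set of events that must precede it
            self.conflicts = conflicts  # list of frozensets of mutually exclusive events

        def enabled(self, event, history):
            if not self.causes.get(event, set()) <= history:
                return False            # a causal dependency has not fired yet
            for group in self.conflicts:
                if event in group and (history & group):
                    return False        # a conflicting event already occurred
            return True

    # Example: the controller may install a new route only after detecting the
    # host move, and the two alternative routes conflict with each other.
    nes = EventStructure(
        causes={"install_route_a": {"host_moved"}, "install_route_b": {"host_moved"}},
        conflicts=[frozenset({"install_route_a", "install_route_b"})],
    )
    history = {"host_moved"}
    print(nes.enabled("install_route_a", history))   # True
    history |= {"install_route_a"}
    print(nes.enabled("install_route_b", history))   # False: conflicts with route_a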