PaRiS: Causally Consistent Transactions with Non-blocking Reads and Partial Replication
Geo-replicated data platforms form the backbone of several large-scale
online services. Transactional Causal Consistency (TCC) is an attractive
consistency level for building such platforms. TCC avoids many anomalies of
eventual consistency, eschews the synchronization costs of strong consistency,
and supports interactive read-write transactions. Partial replication is
another attractive design choice for building geo-replicated platforms, as it
increases the storage capacity and reduces update propagation costs. This paper
presents PaRiS, the first TCC system that supports partial replication and
implements non-blocking parallel read operations, whose latency is paramount
for the performance of read-intensive applications. PaRiS relies on a novel
protocol to track dependencies, called Universal Stable Time (UST). By means of
a lightweight background gossip process, UST identifies a snapshot of the data
that has been installed by every DC in the system. Hence, transactions can
consistently read from such a snapshot on any server in any replication site
without having to block. Moreover, PaRiS requires only one timestamp to track
dependencies and define transactional snapshots, thereby achieving resource
efficiency and scalability. We evaluate PaRiS on a large-scale AWS deployment
composed of up to 10 replication sites. We show that PaRiS scales well with the
number of DCs and partitions, while handling larger datasets than
existing solutions that assume full replication. We also demonstrate the
performance gain of non-blocking reads over a blocking alternative: up to 1.47x
higher throughput with 5.91x lower latency for read-dominated workloads, and up
to 1.46x higher throughput with 20.56x lower latency for write-heavy
workloads.
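The abstract gives enough to sketch the core of UST informally. Below is a minimal, hypothetical Python sketch (not the authors' implementation) of the idea: a background gossip step computes the minimum of the local stable times reported by every partition in every replication site, and reads at or below that timestamp never block, because the corresponding snapshot is already installed everywhere. The single scalar timestamp mirrors the paper's claim that one timestamp suffices; all names here are assumptions.

```python
# Hypothetical sketch of Universal Stable Time (UST), not PaRiS code.
# Each partition tracks the highest timestamp it has locally installed;
# UST is the minimum over all partitions in all replication sites.

class Partition:
    def __init__(self, name):
        self.name = name
        self.local_stable_time = 0  # highest locally installed timestamp
        self.ust = 0                # latest universal stable time learned

def gossip_round(partitions):
    """One background gossip round: every DC has installed all data up to
    the minimum local stable time, so that prefix is safe to read anywhere."""
    ust = min(p.local_stable_time for p in partitions)
    for p in partitions:
        p.ust = max(p.ust, ust)  # UST is monotonically non-decreasing
    return ust

def non_blocking_read(partition, key, versions):
    """Return the freshest version of `key` with timestamp <= UST.
    Such a version is installed in every site, so the read never waits
    and never needs a remote dependency check."""
    visible = [v for v in versions.get(key, []) if v[0] <= partition.ust]
    return max(visible, key=lambda v: v[0], default=None)  # (ts, value)
```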
A novel causally consistent replication protocol with partial geo-replication
Distributed storage systems are a fundamental component of large-scale Internet services.
To keep up with users' increasing expectations regarding availability and latency,
the design of data storage systems has evolved to achieve these properties by exploiting
techniques such as partial replication, geo-replication, and weaker consistency models.
While systems with these characteristics exist, they usually do not provide all of these
properties together, or do so inefficiently, without taking full advantage of them. Additionally,
weak consistency models such as eventual consistency place an excessively high
burden on application programmers to write correct applications; hence, multiple
systems have moved towards providing additional guarantees, such as
the causal (and causal+) consistency models.
In this thesis we address the challenges of designing a causally consistent
replication protocol, with a focus on combining geo-replication and partial replication. To this
end, we present a novel replication protocol capable of enriching an existing geo-replicated,
partially replicated datastore with the causal+ consistency model.
This thesis also presents a concrete implementation of the proposed protocol
over the popular Cassandra datastore system. This implementation is complemented
with experimental results obtained in a realistic scenario, in which we compare our proposal
with multiple configurations of the Cassandra datastore (without causal consistency
guarantees) and with other existing alternatives. The results show that our proposed solution
achieves balanced performance, with low data visibility delays and without
significant performance penalties.
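The abstract leaves the protocol's mechanics open, so the following is only a generic sketch of the visibility rule that causally consistent replication layers typically add on top of a datastore such as Cassandra: a remote write is buffered until every write it causally depends on is already applied locally. The vector-clock dependency encoding and all names here are illustrative assumptions, not the thesis's actual design.

```python
# Generic sketch of causal visibility, not the thesis's actual protocol.
# A remote write carries its origin replica, a per-origin sequence number,
# and a dependency vector: replica id -> writes that must be visible first.

def deps_satisfied(local_clock, deps):
    """True when every causal dependency is already applied locally."""
    return all(local_clock.get(r, 0) >= n for r, n in deps.items())

def apply_ready_writes(store, local_clock, buffered):
    """Apply buffered remote writes whose dependencies are met, repeating
    until a fixed point, since applying one write may unblock others."""
    progress = True
    while progress:
        progress = False
        for w in list(buffered):
            if deps_satisfied(local_clock, w["deps"]):
                store[w["key"]] = w["value"]
                seq = max(local_clock.get(w["origin"], 0), w["seq"])
                local_clock[w["origin"]] = seq
                buffered.remove(w)
                progress = True
```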
Cloud-edge hybrid applications
Many modern applications are designed to provide interactions among users, including
multi-user games, social networks, and collaborative tools. Users expect application response
times in the order of milliseconds, to foster interaction and interactivity.
The design of these applications typically adopts a client-server model, where all
interactions are mediated by a centralized component. This approach introduces availability
and fault-tolerance issues, which can be mitigated by replicating the server component, and even
relying on geo-replicated solutions in cloud computing infrastructures. Even in this case, the
client-server communication model leads to unnecessary latency penalties for geographically
close clients and high operational costs for the application provider.
This dissertation proposes a cloud-edge hybrid model with secure and efficient propagation
and consistency mechanisms. This model combines client-side replication and client-to-client
propagation to provide low latency and minimize the dependency on the server
infrastructure, fostering availability and fault tolerance. To realize this model, this work makes
the following key contributions.
First, the cloud-edge hybrid model is materialized by a system design where clients maintain
replicas of the data and synchronize in a peer-to-peer fashion, and servers are used to assist
clients’ operation. We study how to bring most of the application logic to the client side,
using the centralized service primarily for durability, access control, discovery, and
overcoming internetwork limitations.
Second, we define protocols for weakly consistent data replication, including a novel CRDT
model (∆-CRDTs). We provide a study on partial replication, exploring the challenges and
fundamental limitations in providing causal consistency, and the difficulty in supporting
client-side replicas due to their ephemeral nature.
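To make the ∆-CRDT idea concrete, here is a minimal Python sketch of a delta-state grow-only counter: a mutation returns a small delta rather than the full state, and merging is an idempotent, commutative join, so deltas survive duplication and reordering in transit. This illustrates the general delta-CRDT technique under assumed names, not the specific ∆-CRDT model contributed by the thesis.

```python
# Sketch of the delta-state CRDT technique (illustrative, assumed names).

class DeltaGCounter:
    """Grow-only counter: state maps replica id -> that replica's count."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.state = {}

    def increment(self):
        """Mutate locally and return only the delta: this replica's entry."""
        self.state[self.replica_id] = self.state.get(self.replica_id, 0) + 1
        return {self.replica_id: self.state[self.replica_id]}

    def join(self, delta):
        """Pointwise max; idempotent, so duplicated or reordered deltas
        (or even a full state) merge safely."""
        for replica, count in delta.items():
            self.state[replica] = max(self.state.get(replica, 0), count)

    def value(self):
        return sum(self.state.values())

# Replica "a" increments and ships only the tiny delta to replica "b".
a, b = DeltaGCounter("a"), DeltaGCounter("b")
delta = a.increment()
b.join(delta)
assert a.value() == b.value() == 1
```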
Third, we study how client misbehaviour can impact the guarantees of causal consistency.
We propose new secure weak consistency models for insecure settings, and algorithms to enforce
such consistency models.
The experimental evaluation of our contributions has shown their specific benefits and
limitations compared with the state of the art. In general, the cloud-edge hybrid model leads to
faster application response times, lower client-to-client latency, higher system scalability (as fewer clients need to connect to servers at the same time), the possibility to work offline or disconnected
from the server, and reduced server bandwidth usage.
In summary, we propose a hybrid of cloud and edge which provides lower user-to-user
latency, availability under server disconnections, and improved server scalability, while being
efficient, reliable, and secure.
Trade-offs in Replicated Systems
Replicated systems provide the foundation for most of today’s large-scale services. Engineering such replicated systems is an onerous task. The first, and often foremost, step in this task is to establish an appropriate set of design goals, such as availability or performance, which should synthesize all the underlying system properties. Mixing design goals, however, is fraught with danger, given that many properties are antagonistic and fundamental trade-offs exist among them. Navigating this harsh landscape of trade-offs is difficult because their formulations use different notations and system models, so it is hard to get an all-encompassing understanding of the state of the art in this area. In this paper, we address this difficulty by providing a systematic overview of the most relevant trade-offs involved in building replicated systems. Starting from the well-known FLP result, we follow a long line of research and investigate different trade-offs, assembling a coherent perspective of these results. Among others, we consider trade-offs that examine the complex interactions between properties such as consistency, availability, low latency, partition tolerance, churn, scalability, and visibility latency.
Achieving Causal Consistency under Partial Replication for Geo-distributed Cloud Storage
Causal consistency has emerged as an attractive middle ground for architecting cloud storage systems, as it allows for high availability and low latency, while supporting stronger-than-eventual-consistency semantics. However, causally consistent cloud storage systems have seen limited deployment in practice. A key factor is that these systems employ full replication of all the data in all the data centers (DCs), incurring high cost. A simple extension of current causal systems to support partial replication by clustering DCs into rings incurs availability and latency problems. We propose Karma, the first system to enable causal consistency for partitioned data stores while achieving the cost advantages of partial replication without the availability and latency problems of the simple extension. Our evaluation with 64 servers emulating 8 geo-distributed DCs shows that Karma (i) incurs much lower cost than a fully replicated causal store (due to the lower replication factor); and (ii) offers higher availability and better performance than the above partial-replication extension at similar costs.
DottedDB: anti-entropy without Merkle trees, deletes without tombstones
To achieve high availability in the face of network partitions, many distributed databases adopt eventual consistency, allow temporary conflicts due to concurrent writes, and use some form of per-key logical clock to detect and resolve such conflicts. Furthermore, nodes synchronize periodically to ensure replica convergence in a process called anti-entropy, normally using Merkle trees. We present the design of DottedDB, a Dynamo-like key-value store, which uses a novel node-wide logical clock framework, overcoming three fundamental limitations of the state of the art: (1) it minimizes the metadata per key necessary to track causality, avoiding its growth even in the face of node churn; (2) it correctly and durably deletes keys, with no need for tombstones; (3) it offers a lightweight anti-entropy mechanism to converge replicated data, avoiding the need for Merkle trees. We evaluate DottedDB against MerkleDB, an otherwise identical database that uses per-key logical clocks and Merkle trees for anti-entropy, to precisely measure the impact of the novel approach. Results show that causality metadata per object always converges rapidly to only one id-counter pair; distributed deletes are correctly achieved without global coordination and with constant metadata; and divergent nodes are synchronized faster, with a smaller memory footprint and less communication overhead than with Merkle trees.
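As a rough illustration of the node-wide logical clock idea (a simplification; DottedDB's actual framework uses a bitmapped version vector to tolerate gaps), the sketch below tags every write with a unique "dot" and summarizes per-node progress, so two nodes can compute exactly which writes to exchange from metadata proportional to the number of nodes, with no Merkle tree. All names are assumptions.

```python
# Simplified sketch of a node-wide logical clock (not DottedDB's exact
# bitmapped version vector): each write gets one dot (node id, counter),
# and the clock summarizes the highest contiguous counter seen per node.

class NodeClock:
    def __init__(self, node_id):
        self.node_id = node_id
        self.seen = {}  # node id -> highest contiguous counter seen

    def new_dot(self):
        """Tag a local write with a globally unique (node, counter) dot."""
        counter = self.seen.get(self.node_id, 0) + 1
        self.seen[self.node_id] = counter
        return (self.node_id, counter)

    def missing_from(self, other):
        """Dots `other` has seen that we have not: the exact set of writes
        anti-entropy must transfer, computed without a Merkle tree."""
        return {
            (node, c)
            for node, top in other.seen.items()
            for c in range(self.seen.get(node, 0) + 1, top + 1)
        }
```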