15 research outputs found
A client-centric approach to transactional datastores
Modern applications must collect and store massive amounts of data. Cloud storage offers these applications simplicity: the abstraction of a failure-free, perfectly scalable black-box. While appealing, offloading data to the cloud is not without its challenges. These cloud storage systems often favour weaker levels of isolation and consistency. These weaker guarantees introduce behaviours that, without care, can break application logic. Offloading data to an untrusted third party like the cloud also raises questions of security and privacy. This thesis seeks to improve the performance, the semantics and the security of transactional cloud storage systems. It centers around a simple idea: defining consistency guarantees from the perspective of the applications that observe these guarantees, rather than from the perspective of the systems that implement them. This new perspective brings forth several benefits. First, it offers simpler and cleaner definitions of weak isolation and consistency guarantees. Second, it enables scalable implementations of existing guarantees like causal consistency. Finally, it has applications to security: it allows us to efficiently augment transactional cloud storage systems with obliviousness guarantees.
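The client-centric perspective above can be illustrated with a toy check (my own sketch, not the thesis's formalism): a session guarantee such as read-your-writes can be stated, and checked, purely over the operations one client issues and the values it observes, with no reference to the storage system's internals. The sketch below simplifies by requiring that a read return this client's own latest write to the key, ignoring the possibility of legitimate newer writes by other clients.

```python
# Illustrative sketch of a client-centric consistency check (an assumption
# for exposition, not the thesis's actual definitions): verify the
# "read your writes" session guarantee from one client's observed history.
# Simplification: a read of a key this client has written must return the
# client's most recent write to that key.

def read_your_writes_ok(history):
    """history: list of ('w', key, value) or ('r', key, value) tuples,
    in the order the client issued them. Reads of keys the client never
    wrote are unconstrained."""
    last_write = {}
    for op, key, value in history:
        if op == 'w':
            last_write[key] = value
        elif op == 'r' and key in last_write and value != last_write[key]:
            return False
    return True

ok = read_your_writes_ok([('w', 'x', 1), ('r', 'x', 1), ('w', 'x', 2), ('r', 'x', 2)])
stale = read_your_writes_ok([('w', 'x', 1), ('w', 'x', 2), ('r', 'x', 1)])
```

The point of the exercise: the check consumes only what the client itself observed, which is exactly the shift in perspective the thesis advocates.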
New Directions in Cloud Programming
Nearly twenty years after the launch of AWS, it remains difficult for most
developers to harness the enormous potential of the cloud. In this paper we lay
out an agenda for a new generation of cloud programming research aimed at
bringing research ideas to programmers in an evolutionary fashion. Key to our
approach is a separation of distributed programs into a PACT of four facets:
Program semantics, Availability, Consistency and Targets of optimization. We
propose to migrate developers gradually to PACT programming by lifting familiar
code into our more declarative level of abstraction. We then propose a
multi-stage compiler that emits human-readable code at each stage that can be
hand-tuned by developers seeking more control. Our agenda raises numerous
research challenges across multiple areas including language design, query
optimization, transactions, distributed consistency, compilers and program
synthesis.
Dissecting BFT Consensus: In Trusted Components we Trust!
The growing interest in reliable multi-party applications has fostered
widespread adoption of Byzantine Fault-Tolerant (BFT) consensus protocols.
Existing BFT protocols need f more replicas than Paxos-style protocols to
prevent equivocation attacks. Trust-BFT protocols instead seek to minimize this
cost by making use of trusted components at replicas. This paper makes two
contributions. First, we analyze the design of existing Trust-BFT protocols and
uncover three limitations that preclude most practical deployments.
Some of these limitations are fundamental, while others are linked to the state
of trusted components today. Second, we introduce a novel suite of consensus
protocols, FlexiTrust, that attempts to sidestep these issues. We show that our
FlexiTrust protocols achieve up to 185% more throughput than their Trust-BFT
counterparts.
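The trusted-component idea that Trust-BFT protocols rely on can be sketched concretely (this is a generic illustration in the spirit of designs like TrInc, not FlexiTrust's actual API): a trusted monotonic counter binds each outgoing message to a fresh, never-reused counter value, so a Byzantine replica cannot produce two different messages attested under the same sequence number, and equivocation becomes detectable.

```python
# Hypothetical sketch of a trusted monotonic counter for equivocation
# prevention. Assumptions: a shared MAC secret stands in for the
# attestation keys a real trusted component would use.

import hashlib
import hmac

class TrustedCounter:
    def __init__(self, secret: bytes):
        self._secret = secret   # known only to the trusted component
        self._counter = 0       # strictly monotonic, never reused

    def attest(self, message: bytes):
        """Bind `message` to the next counter value; return (counter, mac).
        No counter value can ever be bound to two different messages."""
        self._counter += 1
        mac = hmac.new(self._secret,
                       self._counter.to_bytes(8, 'big') + message,
                       hashlib.sha256).digest()
        return self._counter, mac

    def verify(self, counter: int, message: bytes, mac: bytes) -> bool:
        expected = hmac.new(self._secret,
                            counter.to_bytes(8, 'big') + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, mac)

tc = TrustedCounter(b'device-secret')
c1, m1 = tc.attest(b'PREPARE block A')
c2, m2 = tc.attest(b'PREPARE block B')
# c1 != c2 by construction: receivers detect equivocation by spotting
# two valid attestations that share a counter value, which cannot occur.
```

Because every attestation consumes a counter value, a replica that tries to send conflicting messages for the same slot must present two different counter values, which honest receivers can cross-check.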
Optimizing the cloud? Don't train models. Build oracles!
We propose cloud oracles, an alternative to machine learning for online
optimization of cloud configurations. Our cloud oracle approach guarantees
complete accuracy and explainability of decisions for problems that can be
formulated as parametric convex optimizations. We give experimental evidence of
this technique's efficacy and share a vision of research directions for
expanding its applicability.
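A minimal sketch of the cloud-oracle idea (the workload, capacity, and cost numbers below are invented for illustration, not from the paper): when the configuration problem is convex and monotone in its parameter, the exact optimum can be precomputed as a closed-form function of the workload, yielding a decision procedure that is fully accurate and explainable without any learned model.

```python
# Hedged sketch: an exact "oracle" for a one-dimensional provisioning
# problem. INSTANCE_RPS and INSTANCE_COST are hypothetical values.

import math

INSTANCE_RPS = 1200.0    # assumed per-VM capacity, requests/sec
INSTANCE_COST = 0.08     # assumed cost per VM-hour, dollars

def vm_oracle(demand_rps: float) -> int:
    """Minimize n * INSTANCE_COST subject to n * INSTANCE_RPS >= demand.
    Cost is increasing in n and the constraint is monotone, so the
    optimum is exactly ceil(demand / capacity) -- no training, and the
    answer carries its own proof of optimality."""
    if demand_rps <= 0:
        return 0
    return math.ceil(demand_rps / INSTANCE_RPS)

n = vm_oracle(5000.0)
# Explainability for free: n is optimal because n - 1 machines would
# violate the throughput constraint, and cost grows with n.
```

Real cloud-oracle problems have many dimensions, but the same structure applies whenever the optimization is parametric and convex: solve once, exactly, as a function of the parameter.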
No-Commit Proofs: Defeating Livelock in BFT
This paper presents the design and evaluation of Wendy, the first Byzantine consensus protocol that achieves optimal latency (two phases), linear authenticator complexity, and optimistic responsiveness. Wendy's core technical contribution is a novel aggregate signature scheme that allows leaders to prove, with constant pairing cost, that an operation did not commit. This no-commit proof addresses prior liveness concerns in protocols with linear authenticator complexity (including view change), allowing Wendy to commit operations in only two phases.
Snoopy: Surpassing the Scalability Bottleneck of Oblivious Storage
Existing oblivious storage systems provide strong security by hiding access patterns, but do not scale to sustain high throughput as they rely on a central point of coordination. To overcome this scalability bottleneck, we present Snoopy, an object store that is both oblivious and scalable such that adding more machines increases system throughput. Snoopy contributes techniques tailored to the high-throughput regime to securely distribute and efficiently parallelize every system component without prohibitive coordination costs. These techniques enable Snoopy to scale similarly to a plaintext storage system. Snoopy achieves 13.7x higher throughput than Obladi, a state-of-the-art oblivious storage system. Specifically, Obladi reaches a throughput of 6.7K requests/s for two million 160-byte objects and cannot scale beyond a proxy and server machine. For the same data size, Snoopy uses 18 machines to scale to 92K requests/s with average latency under 500ms.
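One ingredient of distributing an oblivious store without a central coordinator can be sketched as follows (a simplified illustration of the batching idea, with details that are my assumptions rather than Snoopy's exact design): each epoch's requests are split by shard, and every shard's sub-batch is padded with dummy requests up to a fixed public bound, so the per-shard load an observer sees is identical regardless of which keys were actually requested.

```python
# Illustrative sketch: pad per-shard request batches to a public bound so
# shard-level traffic leaks nothing about the access pattern. NUM_SHARDS
# and BATCH_BOUND are hypothetical; real systems size the bound so the
# probability of overflow is negligible.

import hashlib

NUM_SHARDS = 4
BATCH_BOUND = 8   # public per-shard bound

def shard_of(key: str) -> int:
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], 'big') % NUM_SHARDS

def pad_batches(keys):
    """Split a batch of requested keys by shard, then pad each shard's
    sub-batch to exactly BATCH_BOUND entries with dummy requests."""
    batches = {s: [] for s in range(NUM_SHARDS)}
    for k in keys:
        batches[shard_of(k)].append(k)
    for b in batches.values():
        assert len(b) <= BATCH_BOUND, "bound exceeded; abort or resize"
        b.extend('__dummy__' for _ in range(BATCH_BOUND - len(b)))
    return batches

batches = pad_batches(['a', 'b', 'c', 'd', 'e'])
# Every shard now receives exactly BATCH_BOUND requests, so watching
# per-shard traffic reveals nothing about which keys were hot.
```

The cost of the padding is the price paid for removing the central coordination point: load looks uniform by construction, so shards can be added independently.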
Optimizing Distributed Protocols with Query Rewrites [Technical Report]
Distributed protocols such as 2PC and Paxos lie at the core of many systems
in the cloud, but standard implementations do not scale. New scalable
distributed protocols are developed through careful analysis and rewrites, but
this process is ad hoc and error-prone. This paper presents an approach for
scaling any distributed protocol by applying rule-driven rewrites, borrowing
from query optimization. Distributed protocol rewrites entail a new burden:
reasoning about spatiotemporal correctness. We leverage order-insensitivity and
data dependency analysis to systematically identify correct coordination-free
scaling opportunities. We apply this analysis to create preconditions and
mechanisms for coordination-free decoupling and partitioning, two fundamental
vertical and horizontal scaling techniques. Manual rule-driven applications of
decoupling and partitioning improve the throughput of both 2PC and Paxos,
and match state-of-the-art throughput in recent work. These
results point the way toward automated optimizers for distributed protocols
based on correct-by-construction rewrite rules.
Comment: Technical report of paper accepted at SIGMOD 202
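The partitioning rewrite described above can be illustrated with a toy example (my own sketch, not the paper's implementation): a Paxos acceptor's per-slot state has no data dependencies across slots, so requests for different slots commute, and the state can be hash-partitioned across machines with no coordination. Each partition behaves exactly as the monolithic acceptor would on its share of the slots.

```python
# Hedged sketch of coordination-free partitioning: the precondition is
# that requests touching different slots are order-insensitive with no
# cross-slot data dependencies, so routing by slot preserves behaviour.

class AcceptorPartition:
    def __init__(self):
        self.promised = {}   # slot -> highest ballot promised

    def prepare(self, slot: int, ballot: int) -> bool:
        """Promise `ballot` for `slot` iff it exceeds any prior promise."""
        if ballot > self.promised.get(slot, -1):
            self.promised[slot] = ballot
            return True
        return False

NUM_PARTITIONS = 3
partitions = [AcceptorPartition() for _ in range(NUM_PARTITIONS)]

def prepare(slot: int, ballot: int) -> bool:
    # Route by slot: different slots never share state, so each
    # partition can answer independently and in parallel.
    return partitions[slot % NUM_PARTITIONS].prepare(slot, ballot)

first = prepare(7, 1)      # first prepare for slot 7 succeeds
stale = prepare(7, 0)      # lower ballot for the same slot is rejected
other = prepare(8, 0)      # independent slot, unaffected by slot 7
```

Decoupling is the vertical analogue: splitting one component's phases onto separate machines, which is safe under similar order-insensitivity preconditions.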
Scalable and private media consumption with Popcorn
We describe the design, implementation, and evaluation of Popcorn, a media delivery system that hides clients' consumption (even from the content distributor). Popcorn relies on a powerful cryptographic primitive: private information retrieval (PIR). With novel refinements that leverage the properties of PIR protocols and media streaming, Popcorn scales to the size of Netflix's library (8000 movies) and respects current controls on media dissemination. The dollar cost to serve a media object in Popcorn is 3.87 times that of a non-private system.
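The PIR primitive Popcorn builds on can be sketched in a few lines (this is the classic two-server information-theoretic scheme, shown for intuition; it is not Popcorn's actual protocol): the client sends each of two non-colluding servers a random-looking subset of block indices, each server XORs the requested blocks together, and the XOR of the two answers is exactly the desired block. Neither server alone learns which block was fetched, because each sees a uniformly random subset.

```python
# Minimal two-server XOR-based PIR sketch. Assumes equal-length blocks
# and two servers that do not collude; the database here is invented.

import secrets
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def server_answer(db, subset):
    """Each server XORs together the blocks at the requested indices."""
    return xor_blocks([db[i] for i in subset]) if subset else bytes(len(db[0]))

def pir_fetch(db, want: int):
    n = len(db)
    s1 = {i for i in range(n) if secrets.randbits(1)}  # uniform random subset
    s2 = s1 ^ {want}                                   # differs only at `want`
    a1 = server_answer(db, s1)   # query to server 1
    a2 = server_answer(db, s2)   # query to server 2
    # Indices in both subsets cancel under XOR; only db[want] survives.
    return bytes(x ^ y for x, y in zip(a1, a2))

db = [b'movie-A!', b'movie-B!', b'movie-C!', b'movie-D!']
block = pir_fetch(db, 2)
```

At media scale the naive version is far too expensive (each server touches the whole library per request), which is precisely the cost Popcorn's streaming-aware refinements attack.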
Front Matter, Table of Contents, Preface, Conference Organization
OASIcs, Volume 101, FAB 2022, Complete Volume