Privacy-preserving decentralised collaborative applications
Cloud-based applications are problematic from a privacy perspective because they typically have access to large amounts of user data and metadata. This centralisation of user data creates an attractive target for actors such as criminals, repressive governments, and companies that sell the data. At the same time, the popularity of mobile and web applications has led to a growing amount of sensitive data being stored in the cloud.
This dissertation focuses on collaborative applications, such as Google Docs and Microsoft Office Online, where users currently rely on cloud-based solutions. It explores decentralised alternatives that allow the use of end-to-end encryption and anonymous communication systems to improve both information privacy and communication privacy.
One approach for a collaborative application to synchronise data in a privacy-preserving way is to use Tor hidden services, which provide end-to-end encrypted communication while also hiding collaborators' identities. However, running Tor comes at a cost. We explore the costs of running a hidden service on a smartphone. Smartphones are now the most frequently used computing devices, but they are also relatively resource-constrained. We build an empirical model of monthly cellular data traffic and estimate a median of 198 MiB for a typical user. We further estimate that this network activity would cost at least 9.6% of daily battery capacity on a Nexus One using 3G Internet. We explore four optimisations that, in combination, reduce the estimated median data cost to 61 MiB.
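To make the shape of such an estimate concrete, here is a minimal sketch of how a monthly traffic figure could be composed from per-day cost components. The breakdown and the individual values are hypothetical placeholders chosen only so the total matches the reported 198 MiB median; they are not the dissertation's measured model.

```python
# Illustrative only: hypothetical per-day traffic components for a phone-hosted
# Tor hidden service. The component names and values are invented; only the
# 198 MiB monthly total mirrors the figure reported in the abstract.
MIB = 1024 * 1024

daily_costs_bytes = {
    "directory_consensus_and_descriptors": 2.0 * MIB,   # hypothetical
    "circuit_and_intro_point_maintenance": 1.5 * MIB,   # hypothetical
    "hidden_service_descriptor_publishing": 0.5 * MIB,  # hypothetical
    "application_sync_payloads": 2.6 * MIB,             # hypothetical
}

def monthly_traffic_mib(daily_costs, days=30):
    """Sum the per-day traffic components and scale to a month."""
    return sum(daily_costs.values()) * days / MIB

print(f"estimated monthly traffic: {monthly_traffic_mib(daily_costs_bytes):.0f} MiB")
```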
We also consider the security and privacy properties of decentralised collaborative applications, and explore a challenge that is introduced by a decentralised design – the lack of a trusted server guaranteeing consistency between collaborators. We present a novel snapshot protocol that ensures consistency, whilst allowing the past edit history to be hidden from new collaborators, and without relying on a consensus mechanism.
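As a rough illustration of the kind of check such a snapshot enables, the sketch below shows a new collaborator verifying that all existing collaborators vouch for the same compacted state, without ever seeing the edit history. It uses plain hashes where a real protocol would use digital signatures, and it is not the protocol presented in the thesis.

```python
# Simplified, hypothetical snapshot check: a new collaborator accepts a
# compacted state only if every existing collaborator vouches for the same
# digest. Real protocols would sign these digests; hashes keep the sketch short.
import hashlib

def state_digest(snapshot_bytes):
    return hashlib.sha256(snapshot_bytes).hexdigest()

def verify_snapshot(snapshot_bytes, vouchers):
    """vouchers maps collaborator id -> digest that collaborator claims."""
    digest = state_digest(snapshot_bytes)
    return all(claimed == digest for claimed in vouchers.values())

snapshot = b"compacted document state"
vouchers = {"alice": state_digest(snapshot), "bob": state_digest(snapshot)}
assert verify_snapshot(snapshot, vouchers)   # consistent without seeing history
```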
Lastly, we evaluate the overhead of the snapshot protocol by replaying editing histories from 270 Wikipedia articles, and demonstrate how its correctness and security properties are achieved. Assuming the number of collaborators remains small, the protocol is scalable in terms of CPU, memory, and network usage. It substantially reduces the amount of data transferred to a new collaborator compared to a basic protocol that transmits the full history. The computational cost is on the order of milliseconds per operation, indicating the protocol is suitable for applications where the rate of edits is relatively low.
Funding was provided by Microsoft Research, The Boeing Company, and the Computer Laboratory.
Cloud-edge hybrid applications
Many modern applications are designed to provide interactions among users, including multi-user games, social networks, and collaborative tools. Users expect application response times on the order of milliseconds, to foster interactivity.
The design of these applications typically adopts a client-server model, where all interactions are mediated by a centralized component. This approach introduces availability and fault-tolerance issues, which can be mitigated by replicating the server component, and even relying on geo-replicated solutions in cloud computing infrastructures. Even in this case, the client-server communication model leads to unnecessary latency penalties for geographically close clients and high operational costs for the application provider.
This dissertation proposes a cloud-edge hybrid model with secure and efficient propagation and consistency mechanisms. This model combines client-side replication and client-to-client propagation to provide low latency and minimize the dependency on the server infrastructure, fostering availability and fault tolerance. To realize this model, this work makes the following key contributions.
First, the cloud-edge hybrid model is materialized by a system design where clients maintain replicas of the data and synchronize in a peer-to-peer fashion, and servers are used to assist clients' operation. We study how to bring most of the application logic to the client side, using the centralized service primarily for durability, access control, discovery, and overcoming internetwork limitations.
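A rough sketch of how such a split of responsibilities could look in client code is given below; the class and method names are invented for illustration and do not come from the thesis.

```python
# Hypothetical cloud-edge hybrid client: the local replica serves writes
# immediately, peers exchange updates directly, and the server is contacted
# only for durability (and, in a fuller system, discovery and access control).
class InMemoryServer:
    """Stand-in for the cloud component: durability only, in this sketch."""
    def __init__(self):
        self.backups = {}
    def store(self, client_id, snapshot):
        self.backups[client_id] = snapshot

class HybridClient:
    def __init__(self, client_id, server):
        self.client_id = client_id
        self.server = server
        self.replica = {}     # local key-value replica
        self.pending = []     # updates not yet propagated to peers

    def write(self, key, value):
        """Apply locally first; latency does not depend on the server."""
        self.replica[key] = value
        self.pending.append((key, value))

    def sync_with_peer(self, peer):
        """Client-to-client propagation of buffered updates."""
        for key, value in self.pending:
            peer.replica[key] = value
        self.pending.clear()

    def backup(self):
        """The server is used for durability, not for every interaction."""
        self.server.store(self.client_id, dict(self.replica))

server = InMemoryServer()
alice, bob = HybridClient("alice", server), HybridClient("bob", server)
alice.write("doc/title", "Notes")
alice.sync_with_peer(bob)   # bob sees the update without a server round trip
alice.backup()              # durable copy kept in the cloud
```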
Second, we define protocols for weakly consistent data replication, including a novel CRDT model (∆-CRDTs). We provide a study on partial replication, exploring the challenges and fundamental limitations in providing causal consistency, and the difficulty in supporting client-side replicas due to their ephemeral nature.
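The small example below illustrates the general delta-based replication idea that ∆-CRDTs build on, using a grow-only counter: a mutation returns a small delta that other replicas merge with an idempotent, commutative join. It is a generic sketch, not the thesis's specific ∆-CRDT construction.

```python
# Delta-based grow-only counter: instead of shipping the whole state after
# every change, an increment also returns a small delta that peers can merge.
class DeltaGCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}   # replica id -> increments known for that replica

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n
        # The delta contains only the entry that changed.
        return {self.replica_id: self.counts[self.replica_id]}

    def merge(self, delta):
        """Join (pointwise max) is idempotent, commutative, and associative."""
        for rid, count in delta.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

    def value(self):
        return sum(self.counts.values())

a, b = DeltaGCounter("a"), DeltaGCounter("b")
delta = a.increment()
b.merge(delta)        # b learns of a's increment from the small delta alone
assert b.value() == 1
```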
Third, we study how client misbehaviour can impact the guarantees of causal consistency. We propose new secure weak consistency models for insecure settings, and algorithms to enforce such consistency models.
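One simplified way to make causal dependencies tamper-evident is sketched below: each update names its predecessors by hash, and a replica buffers an update until everything it claims to depend on has actually been received. This is only meant to convey the flavour of the problem; it is not the algorithm proposed in the thesis.

```python
# Hypothetical sketch: operations declare causal dependencies by hash, and a
# replica applies an operation only once all declared predecessors are present.
import hashlib, json

def op_hash(op):
    return hashlib.sha256(json.dumps(op, sort_keys=True).encode()).hexdigest()

class CausalReplica:
    def __init__(self):
        self.applied = {}    # hash -> operation already applied
        self.buffered = []   # operations waiting for their dependencies

    def receive(self, op):
        """op = {'data': ..., 'deps': [hashes of causally preceding ops]}"""
        self.buffered.append(op)
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for op in list(self.buffered):
                if all(dep in self.applied for dep in op["deps"]):
                    self.applied[op_hash(op)] = op
                    self.buffered.remove(op)
                    progress = True

r = CausalReplica()
first = {"data": "set x=1", "deps": []}
second = {"data": "set y=2", "deps": [op_hash(first)]}
r.receive(second)   # buffered: its declared dependency has not arrived yet
r.receive(first)    # now both apply, in causal order
assert len(r.applied) == 2
```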
The experimental evaluation of our contributions has shown their specific benefits and limitations compared with the state of the art. In general, the cloud-edge hybrid model leads to faster application response times, lower client-to-client latency, higher system scalability (as fewer clients need to connect to servers at the same time), the possibility to work offline or disconnected from the server, and reduced server bandwidth usage.
In summary, we propose a hybrid of cloud and edge which provides lower user-to-user latency, availability under server disconnections, and improved server scalability, while being efficient, reliable, and secure.
Building fast and consistent (geo-)replicated systems: from principles to practice
Distributing data across replicas within a data center or across multiple data centers plays an important role in building Internet-scale services that provide a good user experience, namely low-latency access and high throughput. This approach often compromises on strong consistency semantics, which help maintain application-specific desired properties, namely state convergence and invariant preservation. To relieve this inherent tension, many proposals have been designed in the past few years to allow programmers to selectively weaken the consistency levels of certain operations, avoiding costly immediate coordination of concurrent user requests. However, these proposals fail to provide principles that guide programmers in correctly assigning consistency levels to the various operations, so that good performance is extracted while the system behavior still complies with its specification.
The primary goal of this thesis is to provide programmers with principles and tools for building fast and consistent (geo-)replicated systems, by allowing them to reason about the various consistency levels within a single framework. The first step we took was to propose RedBlue consistency, which presents sufficient conditions that allow programmers to safely separate weakly consistent operations from strongly consistent ones in a coarse-grained manner. Second, to improve the practicality of RedBlue consistency, we built SIEVE, a tool that combines commutative replicated data types and program analysis techniques to assign proper consistency levels to different operations and to maximize the weakly consistent operation space. Finally, we generalized the tradeoff between consistency and performance and proposed Partial Order-Restrictions consistency (PoR consistency for short), a generic consistency definition that captures various consistency levels in terms of visibility restrictions among pairs of operations and allows programmers to tune these restrictions to obtain fine-grained control of their targeted consistency semantics.
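To make the red/blue split concrete, the sketch below uses the bank-account intuition commonly associated with RedBlue consistency: operations that commute and preserve invariants (deposits) can be labelled blue and run without coordination, while operations that may violate an invariant (withdrawals, which could drive a balance negative) must be labelled red. The classification function is purely illustrative and is not SIEVE's actual analysis.

```python
# Illustrative labelling rule in the spirit of RedBlue consistency:
# an operation may be blue only if it commutes with all other operations
# and can never break an application invariant; otherwise it must be red.
def classify(commutes, may_violate_invariant):
    return "blue" if commutes and not may_violate_invariant else "red"

labels = {
    "deposit":  classify(commutes=True, may_violate_invariant=False),
    "withdraw": classify(commutes=True, may_violate_invariant=True),
}
print(labels)   # {'deposit': 'blue', 'withdraw': 'red'}
```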
Computer Aided Verification
The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented, together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.