Doing-it-All with Bounded Work and Communication
We consider the Do-All problem, where cooperating processors need to
complete similar and independent tasks in an adversarial setting. Here we
deal with a synchronous message passing system with processors that are subject
to crash failures. Efficiency of algorithms in this setting is measured in
terms of work complexity (also known as total available processor steps) and
communication complexity (total number of point-to-point messages). When work
and communication are considered comparable resources, the overall
efficiency is meaningfully expressed in terms of effort, defined as work +
communication. We develop and analyze a constructive algorithm that has work
and a nonconstructive
algorithm that has work . The latter result is close to the
lower bound on work. The effort of each of
these algorithms is proportional to its work when the number of crashes is
bounded above by , for some positive constant . We also present a
nonconstructive algorithm that has effort
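The complexity formulas above did not survive extraction, but the cost measures themselves are simple to state. As a minimal illustrative sketch (a naive crash-free baseline where every processor performs every task, not one of the abstract's algorithms), the accounting works as follows:

```python
# Toy model of the Do-All cost measures: work = total processor steps spent
# on tasks, communication = total point-to-point messages, and
# effort = work + communication.
# This naive baseline has every processor do all tasks independently and
# then announce completion to all others; it is illustrative only.

def naive_do_all(p: int, t: int) -> tuple[int, int, int]:
    """Return (work, communication, effort) for p processors and t tasks."""
    work = p * t                 # each of the p processors executes all t tasks
    communication = p * (p - 1)  # one final all-to-all round of notifications
    return work, communication, work + communication

work, comm, effort = naive_do_all(p=4, t=10)
```

Nontrivial Do-All algorithms aim to beat this `p * t` work baseline by coordinating which processor does which task, at the price of extra communication; the effort metric makes that trade-off explicit.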
Relativistic quantum cryptography
In this thesis we explore the benefits of relativistic constraints for
cryptography. We first revisit non-communicating models and their applications
in the context of interactive proofs and cryptography. We propose bit
commitment protocols whose security hinges on communication constraints and
investigate their limitations. We explain how some non-communicating models can be justified
by special relativity and study the limitations of such models. In particular,
we present a framework for analysing security of multiround relativistic
protocols. The second part of the thesis is dedicated to analysing specific
protocols. We start by considering a recently proposed two-round quantum bit
commitment protocol. We propose a fault-tolerant variant of the protocol,
present a complete security analysis and report on an experimental
implementation performed in collaboration with an experimental group at the
University of Geneva. We also propose a new, multiround classical bit
commitment protocol and prove its security against classical adversaries. This
demonstrates that in the classical world an arbitrarily long commitment can be
achieved even if the agents are restricted to occupy a finite region of space.
Moreover, the protocol is easy to implement and we report on an experiment
performed in collaboration with the Geneva group.
Comment: 123 pages, 9 figures, many protocols, a couple of theorems, certainly
not enough commas. PhD thesis supervised by Stephanie Wehner at the Centre for
Quantum Technologies, Singapore
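For readers unfamiliar with the primitive the thesis builds on: a bit commitment lets a sender fix a bit now (hiding it) and reveal it later (unable to change it). As a standard, non-relativistic sketch, assuming a hash in place of the thesis's communication-constrained constructions:

```python
# Generic hash-based bit commitment (NOT any of the thesis's relativistic
# protocols): hiding comes from the random nonce r, binding from collision
# resistance of the hash.
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit phase: publish c = H(r || bit); keep (r, bit) as the opening."""
    r = secrets.token_bytes(32)
    c = hashlib.sha256(r + bytes([bit])).digest()
    return c, r

def verify(c: bytes, r: bytes, bit: int) -> bool:
    """Open phase: the receiver recomputes the hash and compares."""
    return hashlib.sha256(r + bytes([bit])).digest() == c

c, r = commit(1)
assert verify(c, r, 1) and not verify(c, r, 0)
```

Relativistic protocols replace the hash assumption with spatial separation: agents at different locations cannot coordinate faster than light, which is what enforces binding during the (multiround) commitment phase.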
SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning
Performing machine learning (ML) computation on private data while
maintaining data privacy, aka Privacy-Preserving Machine Learning (PPML), is an
emerging field of research. Recently, PPML has seen a visible shift towards the
adoption of the Secure Outsourced Computation (SOC) paradigm due to the heavy
computation that it entails. In the SOC paradigm, computation is outsourced to
a set of powerful and specially equipped servers that provide service on a
pay-per-use basis. In this work, we propose SWIFT, a robust PPML framework for
a range of ML algorithms in SOC setting, that guarantees output delivery to the
users irrespective of any adversarial behaviour. Robustness, a highly desirable
feature, evokes user participation without the fear of denial of service.
At the heart of our framework lies a highly-efficient, maliciously-secure,
three-party computation (3PC) over rings that provides guaranteed output
delivery (GOD) in the honest-majority setting. To the best of our knowledge,
SWIFT is the first robust and efficient PPML framework in the 3PC setting.
SWIFT is as fast as (and is strictly better in some cases than) the best-known
3PC framework BLAZE (Patra et al. NDSS'20), which only achieves fairness. We
extend our 3PC framework for four parties (4PC). In this regime, SWIFT is as
fast as the best-known fair 4PC framework Trident (Chaudhari et al. NDSS'20)
and twice as fast as the best-known robust 4PC framework FLASH (Byali et al.
PETS'20).
We demonstrate our framework's practical relevance by benchmarking popular ML
algorithms such as Logistic Regression and deep Neural Networks such as VGG16
and LeNet, both over a 64-bit ring in a WAN setting. For deep NNs, our results
support our claim that we provide an improved security guarantee while
incurring no additional overhead for 3PC and obtaining a 2x improvement for 4PC.
Comment: This article is the full and extended version of an article to appear
in USENIX Security 2021
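The building block underlying 3PC "over rings" can be miniaturized. A minimal sketch of additive secret sharing over the 64-bit ring Z_{2^64}, showing why linear operations need no interaction (each party just operates on its own shares locally); this illustrates the general setting, not SWIFT's specific sharing scheme or its guaranteed-output-delivery machinery:

```python
# Additive secret sharing over Z_{2^64}: a value is split into shares that
# sum to it modulo 2^64, so any strict subset of shares reveals nothing.
import random

MASK = (1 << 64) - 1  # reduce modulo 2^64

def share(x: int, n: int = 3) -> list[int]:
    """Split x into n additive shares, one per party."""
    shares = [random.getrandbits(64) for _ in range(n - 1)]
    shares.append((x - sum(shares)) & MASK)  # last share makes the sum work out
    return shares

def reconstruct(shares: list[int]) -> int:
    """Pool all shares to recover the secret."""
    return sum(shares) & MASK

def add_shares(a: list[int], b: list[int]) -> list[int]:
    """Addition of shared values is local: party i adds its own two shares."""
    return [(x + y) & MASK for x, y in zip(a, b)]

assert reconstruct(add_shares(share(20), share(22))) == 42
```

Multiplication, by contrast, requires interaction between the parties; the efficiency of that interaction, and keeping it robust against a malicious party, is where frameworks like SWIFT and BLAZE differ.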
Universally Composable Security With Local Adversaries
The traditional approach to formalizing ideal-model based definitions of security for multi-party protocols models adversaries (both real and ideal) as centralized entities that control all parties that deviate from the protocol. While this centralized-adversary modeling suffices for capturing basic security properties such as secrecy of local inputs and correctness of outputs against coordinated attacks, it turns out to be inadequate for capturing security properties that involve restricting the sharing of information between separate adversarial entities. Indeed, to capture collusion-freeness and game-theoretic solution concepts, Alwen et al. [Crypto, 2012] propose a new ideal-model based definitional framework that involves a de-centralized adversary.
We propose an alternative framework to that of Alwen et al. We then observe that our framework allows capturing not only collusion-freeness and game-theoretic solution concepts, but also several other properties that involve the restriction of information flow among adversarial entities. These include some natural flavors of anonymity, deniability, timing separation, and information confinement. We also demonstrate the inability of existing formalisms to capture these properties.
We then prove strong composition properties for the proposed framework, and use these properties to demonstrate the security, within the new framework, of two very different protocols for securely evaluating any function of the parties' inputs.
A Survey on Routing in Anonymous Communication Protocols
The Internet has undergone dramatic changes in the past 15 years, and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, ranging from profiling of users for monetizing personal information to nearly omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. Several such systems have been proposed in the literature, each of which offers anonymity guarantees in different scenarios and under different assumptions, reflecting the plurality of approaches for how messages can be anonymously routed to their destination. Understanding this space of competing approaches with their different guarantees and assumptions is vital for users to understand the consequences of different design options.
In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. To this end, we provide a taxonomy for clustering all prevalently considered approaches (including Mixnets, DC-nets, onion routing, and DHT-based protocols) with respect to their unique routing characteristics, deployability, and performance. This, in particular, encompasses the topological structure of the underlying network; the routing information that has to be made available to the initiator of the conversation; the underlying communication model; and performance-related indicators such as latency and communication layer. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols, and it also helps to clarify the relationship between the routing characteristics of these protocols, and their performance and scalability.
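Of the surveyed classes, onion routing is the easiest to miniaturize. A toy sketch of the layering idea, assuming XOR with per-hop keys as a stand-in for real per-hop public-key encryption (so this shows only the structure, not a secure construction):

```python
# Toy onion routing: the initiator wraps the message in one "encryption"
# layer per relay; each relay peels exactly one layer, so no single relay
# sees both the sender and the plaintext. XOR stands in for real encryption.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR, used here as a placeholder cipher."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

def wrap(message: bytes, hop_keys: list[bytes]) -> bytes:
    """Initiator adds layers in reverse path order: the exit's layer is innermost."""
    onion = message
    for key in reversed(hop_keys):
        onion = xor_bytes(onion, key)
    return onion

def route(onion: bytes, hop_keys: list[bytes]) -> bytes:
    """Each relay in path order removes its own layer; the exit recovers the message."""
    for key in hop_keys:
        onion = xor_bytes(onion, key)
    return onion

keys = [b'guard-key', b'middle-key', b'exit-key']  # hypothetical 3-hop path
assert route(wrap(b'hello', keys), keys) == b'hello'
```

The taxonomy dimensions in the abstract map directly onto this sketch: the initiator must know all hop keys (routing information held by the initiator), and the number of layers fixes the path length and hence the latency.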
GhostMinion: A Strictness-Ordered Cache System for Spectre Mitigation
Out-of-order speculation, a technique ubiquitous since the early 1990s,
remains a fundamental security flaw. Via attacks such as Spectre and Meltdown,
an attacker can trick a victim, in an otherwise entirely correct program, into
leaking its secrets through the effects of misspeculated execution, in a way
that is entirely invisible to the programmer's model. This has serious
implications for application sandboxing and inter-process communication.
Designing efficient mitigations that preserve the performance of
out-of-order execution has been a challenge. The speculation-hiding techniques
in the literature have been shown to not close such channels comprehensively,
allowing adversaries to redesign attacks. Strong, precise guarantees are
necessary, but at the same time mitigations must achieve high performance to be
adopted. We present Strictness Ordering, a new constraint system that shows how
we can comprehensively eliminate transient side channel attacks, while still
allowing complex speculation and data forwarding between speculative
instructions. We then present GhostMinion, a cache modification built using a
variety of new techniques designed to provide Strictness Order at only 2.5%
overhead.
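The core invariant can be illustrated with a toy model: speculative loads must not update architecturally visible cache state until they are known to be on the correct path. This hypothetical, heavily simplified simulator shows the idea of a speculation-private buffer, not GhostMinion's actual microarchitecture:

```python
# Toy model of speculation-hiding in a cache: speculative fills go to a
# private buffer and reach the real cache only on commit; on a squash
# (misspeculation) they vanish, leaving no timing-observable footprint.

class SpeculativeCache:
    def __init__(self) -> None:
        self.cache = set()        # architecturally visible cache contents
        self.spec_buffer = set()  # speculation-private fills (GhostMinion-like)

    def load(self, addr: int, speculative: bool) -> None:
        if speculative:
            self.spec_buffer.add(addr)  # fill is hidden while speculative
        else:
            self.cache.add(addr)

    def commit(self) -> None:
        """Speculation resolved correctly: fills become architecturally visible."""
        self.cache |= self.spec_buffer
        self.spec_buffer.clear()

    def squash(self) -> None:
        """Misspeculation: discard fills so the cache state carries no secret."""
        self.spec_buffer.clear()

c = SpeculativeCache()
c.load(0xDEAD, speculative=True)  # Spectre-style misspeculated secret access
c.squash()
assert 0xDEAD not in c.cache      # an attacker probing the cache learns nothing
```

The engineering difficulty, and what the strictness-ordering constraint formalizes, is allowing speculative instructions to keep forwarding data to each other (for performance) while still guaranteeing that squashed state can never influence anything an attacker can time.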