Information-Theoretic Privacy in Verifiable Outsourced Computation
Today, it is common practice to outsource time-consuming computations to the cloud.
Using the cloud allows anyone to process large quantities of data without having to invest in the necessary hardware, significantly lowering costs.
In this thesis we will consider the following workflow for outsourced computations:
A data owner uploads data to a server.
The server then computes some function on the data and sends the result to a third entity, which we call the verifier.
In this scenario, two fundamental security challenges arise.
A malicious server may not perform the computation correctly, leading to an incorrect result.
Verifiability allows for the detection of such results.
In order for this to be practical, the verification procedure needs to be efficient.
The other major challenge is privacy.
If sensitive data, for example medical data, is processed, it is important to prevent unauthorized access to such sensitive information.
Particularly sensitive data has to be kept confidential even in the long term.
The field of verifiable computing provides solutions for the first challenge.
In this scenario, the verifier can check that the returned result was computed correctly.
However, simultaneously addressing privacy leads to new challenges.
In the scenario of outsourced computation, privacy comes in different flavors.
One is privacy with respect to the server, where the goal is to prevent the server from learning about the data processed.
The other is privacy with respect to the verifier.
Without verifiable computation, the verifier obviously has less information about the original data than the data owner: it only knows the output of the computation, but not its input.
If, however, this third-party verifier is given additional cryptographic data to verify the result of the computation, it might use this additional information to learn something about the inputs.
Preventing this requires a different privacy property, which we call privacy with respect to the verifier.
Finally, particularly sensitive data has to be kept confidential even in the long term, when computational privacy guarantees no longer suffice.
Thus, information-theoretic measures are required.
These measures offer protection even against computationally unbounded adversaries.
Two well-known approaches to these challenges are homomorphic commitments and homomorphic authenticators.
Homomorphic commitments can provide even information-theoretic privacy, thus addressing long-term security, but verification is computationally expensive.
Homomorphic authenticators on the other hand can provide efficient verification, but do not provide information-theoretic privacy.
This thesis provides solutions to these research challenges: efficient verifiability, input-output privacy, and in particular information-theoretic privacy.
We introduce a new classification for privacy properties in verifiable computing.
We propose function-dependent commitments, a novel framework that combines the advantages of homomorphic commitments and authenticators with respect to verifiability and privacy.
We present several novel homomorphic signature schemes that can be used to achieve verifiability and that already address privacy with respect to the verifier.
In particular, we construct one such scheme tailored towards multivariate polynomials of degree two, as well as another tailored towards linear functions over multi-sourced data.
The latter solution provides efficient verifiability even for computations over data authenticated by different cryptographic keys.
Furthermore, we provide transformations for homomorphic signatures that add privacy.
We first show how to add computational privacy and later on even information-theoretic privacy.
In this way, we turn homomorphic signatures into function-dependent commitments.
By applying this transformation to our homomorphic signature schemes, we construct verifiable computing schemes with information-theoretic privacy.
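The homomorphic commitments discussed above can be illustrated with a Pedersen-style commitment, which is information-theoretically hiding and additively homomorphic. The Python sketch below uses deliberately tiny toy parameters; the group, generators, and function names are illustrative assumptions, not the thesis's actual scheme.

```python
import secrets

# Toy parameters: p = 2q + 1 with q prime; commitments live in the order-q
# subgroup of quadratic residues mod p. Sizes here are illustrative only --
# real schemes use large elliptic-curve or Schnorr groups.
p, q = 1019, 509
g, h = 4, 9  # two subgroup generators with (assumed) unknown discrete-log relation

def commit(m: int, r: int) -> int:
    """Pedersen commitment: g^m * h^r mod p. The fresh randomness r makes
    the commitment information-theoretically hiding."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def add_commitments(c1: int, c2: int) -> int:
    """Homomorphic property: multiplying commitments adds the messages."""
    return (c1 * c2) % p

# A server can combine committed values without ever opening them.
m1, r1 = 7, secrets.randbelow(q)
m2, r2 = 35, secrets.randbelow(q)
c_sum = add_commitments(commit(m1, r1), commit(m2, r2))
assert c_sum == commit(m1 + m2, r1 + r2)  # verifier checks the opening (42, r1 + r2)
```

Verification here requires the opening values, which hints at why commitment-based verification is more expensive than signature-based verification.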
Combiners for Functional Encryption, Unconditionally
Functional encryption (FE) combiners allow one to combine many candidates for a functional encryption scheme, possibly based on different computational assumptions, into another functional encryption candidate with the guarantee that the resulting candidate is secure as long as at least one of the original candidates is secure. The fundamental question in this area is whether FE combiners exist.
There has been a series of works (Ananth et al. (CRYPTO '16), Ananth-Jain-Sahai (EUROCRYPT '17), Ananth et al. (TCC '19)) on constructing FE combiners from various assumptions.
We give the first unconditional construction of combiners for functional encryption, resolving this question completely. Our construction immediately implies an unconditional universal functional encryption scheme, an FE scheme that is secure if such an FE scheme exists. Previously, such results either relied on algebraic assumptions or required subexponential security assumptions.
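The "secure as long as at least one candidate is secure" guarantee can be illustrated in the much simpler setting of plain encryption via the classic secret-sharing combiner. The sketch below is a hedged toy, not the paper's FE construction; the candidate schemes and helper names are hypothetical.

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def combine_encrypt(candidates, message: bytes):
    """Split the message into XOR shares and encrypt one share per candidate.
    Recovering the message requires breaking *every* candidate, so the
    combined scheme is secure as long as at least one candidate is."""
    shares = [secrets.token_bytes(len(message)) for _ in candidates[:-1]]
    last = message
    for s in shares:
        last = xor_bytes(last, s)
    shares.append(last)
    return [enc(share) for (enc, _), share in zip(candidates, shares)]

def combine_decrypt(candidates, ciphertexts):
    shares = [dec(ct) for (_, dec), ct in zip(candidates, ciphertexts)]
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

# Two toy "candidates": a one-time-pad-style cipher, and a deliberately
# broken identity cipher standing in for an insecure candidate.
key = secrets.token_bytes(16)
cand_ok = (lambda m: xor_bytes(m, key), lambda c: xor_bytes(c, key))
cand_bad = (lambda m: m, lambda c: c)  # leaks its share entirely

cts = combine_encrypt([cand_ok, cand_bad], b"medical record 7")
assert combine_decrypt([cand_ok, cand_bad], cts) == b"medical record 7"
```

For FE, sharing the plaintext like this does not suffice, since each candidate must still support functional decryption on its share; that is precisely the difficulty the combiner constructions address.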
On Foundations of Protecting Computations
Information technology systems have become indispensable to uphold our
way of living, our economy and our safety. Failure of these systems can have
devastating effects. Consequently, securing these systems against malicious
intentions deserves our utmost attention.
Cryptography provides the necessary foundations for that purpose. In
particular, it provides a set of building blocks which allow one to secure
larger information systems. Furthermore, cryptography develops concepts and
techniques towards realizing these building blocks. The protection of
computations is one invaluable concept for cryptography which paves the way
towards realizing a multitude of cryptographic tools. In this thesis, we
contribute to this concept of protecting computations in several ways.
Protecting computations of probabilistic programs. An indistinguishability
obfuscator (IO) compiles (deterministic) code such that it becomes provably
unintelligible. This can be viewed as the ultimate way to protect
(deterministic) computations. Due to very recent research, such obfuscators
enjoy plausible candidate constructions.
In certain settings, however, it is necessary to protect probabilistic
computations. The only known construction of an obfuscator for probabilistic
programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and
requires an indistinguishability obfuscator which satisfies extreme security
guarantees. We improve this construction and thereby reduce the requirements
on the security of the underlying indistinguishability obfuscator.
(Agrikola, Couteau, and Hofheinz, PKC, 2020)
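The standard route to obfuscating a probabilistic program, which underlies the construction discussed above, is to derive the program's random coins from a PRF applied to the input and then obfuscate the resulting deterministic circuit. A minimal sketch of this derandomization step, using HMAC as a stand-in PRF (all names here are illustrative):

```python
import hashlib
import hmac
import secrets

def prob_program(x: bytes, coins: bytes) -> bytes:
    """A stand-in probabilistic computation that consumes random coins."""
    return hashlib.sha256(coins + x).digest()

def derandomize(prog, prf_key: bytes):
    """Hard-wire a PRF key so the program becomes deterministic: the coins
    for input x are derived as PRF(k, x). Obfuscating this deterministic
    circuit (with IO) is the core step in known constructions of
    obfuscation for probabilistic programs."""
    def det_prog(x: bytes) -> bytes:
        coins = hmac.new(prf_key, x, hashlib.sha256).digest()  # PRF stand-in
        return prog(x, coins)
    return det_prog

k = secrets.token_bytes(32)
P = derandomize(prob_program, k)
assert P(b"input") == P(b"input")  # same input, same coins: now deterministic
```

The difficulty the literature addresses is proving that obfuscating such a derandomized circuit remains secure, which is where the strong IO assumptions enter.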
Protecting computations in cryptographic groups. To facilitate the analysis
of building blocks which are based on cryptographic groups, these groups are
often overidealized such that computations in the group are protected from
the outside. Using such overidealizations makes it possible to prove secure
building blocks which are sometimes beyond the reach of standard model
techniques. However, these overidealizations are subject to certain
impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018)
introduced the algebraic group model (AGM) as a relaxation which is closer
to the standard model but in several aspects preserves the power of said
overidealizations. However, their model still suffers from implausibilities.
We develop a framework which allows transporting several security proofs
from the AGM into the standard model, thereby evading the above
implausibility results, and instantiate this framework using an
indistinguishability obfuscator.
(Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)
Protecting computations using compression. Perfect compression algorithms
have the property that the compressed distribution is truly random, leaving
no room for any further compression. This property is invaluable for several
cryptographic applications such as “honey encryption” or
password-authenticated key exchange. However, perfect compression algorithms
only exist for a very small number of distributions. We relax the notion of
compression and rigorously study the resulting notion, which we call
“pseudorandom encodings”. As a result, we identify various surprising
connections between seemingly unrelated areas of cryptography. In particular,
we derive novel results for adaptively secure multi-party computation, which
allows for protecting computations in distributed settings. Furthermore, we
instantiate the weakest version of pseudorandom encodings which suffices
for adaptively secure multi-party computation using an indistinguishability
obfuscator.
(Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)
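The idea behind such encodings can be illustrated in the one case where perfect compression does exist: the uniform distribution over a set of size 2^k, where the k-bit index of a sample is itself a uniformly random encoding. A toy sketch (the set contents and names are illustrative):

```python
import secrets

# For the uniform distribution over a set of size 2^k, the k-bit index is a
# *perfect* encoding: encoded samples are truly uniform bits, and decoding
# recovers the sample exactly. For general distributions no such perfect
# compression exists, which motivates relaxing to "pseudorandom encodings",
# where the encoded sample need only be computationally indistinguishable
# from uniform.
SUPPORT = [f"item-{i}" for i in range(16)]  # |support| = 2^4, so k = 4

def sample() -> str:
    return SUPPORT[secrets.randbelow(len(SUPPORT))]

def encode(x: str) -> int:
    return SUPPORT.index(x)  # 4 uniform bits when x is a uniform sample

def decode(bits: int) -> str:
    return SUPPORT[bits]

x = sample()
assert decode(encode(x)) == x  # correctness: encoding is invertible
```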
How to Avoid Obfuscation Using Witness PRFs
We propose a new cryptographic primitive called \emph{witness pseudorandom functions} (witness PRFs). Witness PRFs are related to witness encryption, but appear strictly stronger: we show that witness PRFs can be used for applications such as multi-party key exchange without trusted setup, polynomially-many hardcore bits for any one-way function, and several others that were previously only possible using obfuscation. Current candidate obfuscators are far from practical and typically rely on unnatural hardness assumptions about multilinear maps. We give a construction of witness PRFs from multilinear maps that is simpler and much more efficient than current obfuscation candidates, thus bringing several applications of obfuscation closer to practice. Our construction relies on new but very natural hardness assumptions about the underlying maps that appear to be resistant to a recent line of attacks.
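The witness PRF interface can be sketched as follows. The "instantiation" below is a deliberately insecure placeholder that only demonstrates the functionality (here the evaluation key trivially reveals the function key); the Gen/F/Eval names and the example relation are illustrative assumptions, not the paper's construction.

```python
import hashlib
import secrets

# A witness PRF for an NP relation R provides:
#   Gen()  -> (fk, ek): a secret function key and a public evaluation key
#   F(fk, x): a pseudorandom value, unpredictable without a witness
#   Eval(ek, x, w): equals F(fk, x) whenever R(x, w) holds

def R(x: bytes, w: bytes) -> bool:
    """Example relation: w is a SHA-256 preimage of the statement x."""
    return hashlib.sha256(w).digest() == x

def gen():
    fk = secrets.token_bytes(32)
    ek = fk  # INSECURE placeholder: a real ek hides fk behind the relation
    return fk, ek

def F(fk: bytes, x: bytes) -> bytes:
    return hashlib.sha256(fk + x).digest()

def Eval(ek: bytes, x: bytes, w: bytes):
    return F(ek, x) if R(x, w) else None

fk, ek = gen()
w = b"secret witness"
x = hashlib.sha256(w).digest()
assert Eval(ek, x, w) == F(fk, x)     # a valid witness yields the PRF value
assert Eval(ek, x, b"wrong") is None  # without one, this toy returns nothing
```

In the real primitive, F(fk, x) must remain pseudorandom even given ek for every x with no valid witness; achieving that is exactly what requires the multilinear-map construction.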
From FE Combiners to Secure MPC and Back
Functional encryption (FE) has incredible applications towards computing on encrypted data. However, constructing the most general form of this primitive has remained elusive. Although some candidate constructions exist, they rely on nonstandard assumptions, and thus, their security has been questioned. An FE combiner attempts to make use of these candidates while minimizing the trust placed on any individual FE candidate. Informally, an FE combiner takes in a set of FE candidates and outputs a secure FE scheme if at least one of the candidates is secure.
Another fundamental area in cryptography is secure multi-party computation (MPC), which has been extensively studied for several decades. In this work, we initiate a formal study of the relationship between functional encryption (FE) combiners and secure multi-party computation (MPC). In particular, we show implications in both directions between these primitives. As a consequence of these implications, we obtain the following main results.
1) A two-round semi-honest MPC protocol in the plain model, secure against up to (n-1) corruptions, with communication complexity proportional only to the depth of the circuit being computed, assuming LWE. Prior two-round protocols that achieved this communication complexity required a common reference string.
2) A functional encryption combiner based on pseudorandom generators (PRGs) in NC^1. Such PRGs can be instantiated from assumptions such as DDH and LWE. Previous constructions of FE combiners were known only from the learning with errors assumption. Using this result, we build a universal construction of functional encryption: an explicit construction of functional encryption based only on the assumptions that functional encryption exists and PRGs in NC^1.
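The MPC side of this connection can be illustrated with its simplest semi-honest building block, additive secret sharing, where parties jointly compute a sum while any n-1 colluding parties learn nothing about an honest party's input. A minimal sketch (the modulus and names are illustrative; real two-round protocols are far more involved):

```python
import secrets

Q = 2**61 - 1  # modulus for additive secret sharing (size illustrative)

def share(x: int, n: int):
    """Additively share x among n parties; any n-1 shares are uniformly
    random and reveal nothing about x."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

inputs = [12, 30, 0]  # one private input per party
n = len(inputs)
sharings = [share(x, n) for x in inputs]  # party i distributes shares of input i
held = [[sharings[i][j] for i in range(n)] for j in range(n)]  # party j's view
partial = [sum(row) % Q for row in held]  # each party adds its shares locally
assert sum(partial) % Q == sum(inputs) % Q  # only the sum is reconstructed
```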
Advances in Functional Encryption
Functional encryption is a novel paradigm for public-key encryption that enables both fine-grained access control and selective computation on encrypted data, as is necessary to protect big, complex data in the cloud. In this thesis, I provide a brief introduction to functional encryption, and an overview of my contributions to the area.