274 research outputs found
Structure vs Hardness through the Obfuscation Lens
Much of modern cryptography, starting from public-key encryption and going well beyond it, is based on the hardness of structured (mostly algebraic) problems such as factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often places them in low (so-called structured) complexity classes such as NP ∩ coNP or statistical zero-knowledge (SZK).
Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes — they imply hard problems in NP ∩ coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer.
We show that the above primitives, and many others, do not imply hard problems in NP ∩ coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
Foundations and applications of program obfuscation
Code is said to be obfuscated if it is intentionally difficult for humans to understand. Obfuscating a program conceals its sensitive implementation details and protects it from reverse engineering and hacking. Beyond software protection, obfuscation is also a powerful cryptographic tool, enabling a variety of advanced applications.
Ideally, an obfuscated program would hide any information about the original program that cannot be obtained by simply executing it. However, Barak et al. [CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible. Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability obfuscation.
In this thesis, we study the feasibility and applicability of secure obfuscation:
- What notions of secure obfuscation are possible and under what assumptions?
- How useful are weak notions like indistinguishability obfuscation?
Our first result shows that the applications of indistinguishability obfuscation go well beyond cryptography. We study the tractability of computing a Nash equilibrium of a game, a central problem in algorithmic game theory and complexity theory. Based on indistinguishability obfuscation, we construct explicit games in which a Nash equilibrium cannot be found efficiently.
We also prove the following results on the feasibility of obfuscation. Our starting point is the Garg et al. obfuscator, which is based on a new algebraic encoding scheme known as multilinear maps [Garg et al. EUROCRYPT 13].
1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first rigorous security analysis for obfuscation. We give a variant of the Garg et al. obfuscator and reduce its security to that of the multilinear maps. Specifically, modeling the multilinear encodings as ideal boxes with perfect security, we prove ideal security for our obfuscator. Our reduction shows that the obfuscator resists all generic attacks that only use the encodings' permitted interface and do not exploit their algebraic representation.
2. Going beyond generic attacks, we study the notion of virtual-gray-box obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger than indistinguishability obfuscation and has several important applications, such as obfuscating password-protected programs. We formulate a security requirement for multilinear maps which is sufficient, as well as necessary, for virtual-gray-box obfuscation.
3. Motivated by the question of basing obfuscation on ideal objects that are simpler than multilinear maps, we give a negative result showing that ideal obfuscation is impossible even in the random oracle model, where the obfuscator is given access to an ideal random function. This is the first negative result for obfuscation in a non-trivial idealized model.
On Distributional Collision Resistant Hashing
Collision resistant hashing is a fundamental concept that is the basis for many important cryptographic primitives and protocols. A collision resistant hash family is a family of compressing functions such that no efficient adversary, given a random function in the family, can find any collision.
In this work we study a relaxation of collision resistance called distributional collision resistance, introduced by Dubrov and Ishai (STOC '06). This relaxation only guarantees that no efficient adversary, given a random function h in the family, can sample a pair (x, y) where x is uniformly random and y is uniformly random conditioned on colliding with x under h.
Our first result shows that distributional collision resistance can be based on the existence of multi-collision resistant hashing (with no additional assumptions). Multi-collision resistance is another relaxation of collision resistance which guarantees that an efficient adversary cannot find any tuple of inputs that all collide relative to a random function in the family. The construction is non-explicit, non-black-box, and yields an infinitely-often secure family. This partially resolves a question of Berman et al. (EUROCRYPT '18). We further observe that in a black-box model such an implication (from multi-collision resistance to distributional collision resistance) does not exist.
Our second result is a construction of a distributional collision resistant hash from the average-case hardness of SZK. Previously, this assumption was not known to imply any form of collision resistance (other than the ones implied by one-way functions).
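The distributional collision game above can be made concrete with a toy sketch. The keyed function, domain sizes, and brute-force sampler below are all illustrative stand-ins (a real family would be compressing over an exponentially large domain, where sampling this distribution is meant to be hard); the point is only the shape of the target distribution: x uniform, y uniform among x's colliders.

```python
import hashlib
import random

def h(key: bytes, x: int, n_out: int = 8) -> int:
    """Toy keyed compressing function: 16-bit inputs -> 8-bit outputs."""
    d = hashlib.sha256(key + x.to_bytes(2, "big")).digest()
    return d[0] % (2 ** n_out)

def ideal_collision_sample(key: bytes, n_in: int = 16):
    """Sample from the distribution a distributional-collision adversary
    must produce: x uniform, y uniform conditioned on colliding with x."""
    x = random.randrange(2 ** n_in)
    hx = h(key, x)
    # Brute force is feasible only because the toy domain is tiny.
    colliders = [y for y in range(2 ** n_in) if h(key, y) == hx]
    y = random.choice(colliders)
    return x, y

key = b"toy-key"
x, y = ideal_collision_sample(key)
assert h(key, x) == h(key, y)
```

Note that y may equal x here; the definitional subtlety is that y is uniform over the whole colliding set, not an arbitrary collision of the adversary's choosing.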
An Alternative View of the Graph-Induced Multilinear Maps
In this paper, we view multilinear maps through the lens of "homomorphic obfuscation". Specifically, we show how to homomorphically obfuscate the kernel-test and affine subspace-test functionalities of high-dimensional matrices. Namely, the evaluator is able to perform additions and multiplications over the obfuscated matrices, and test subspace membership on the resulting code. The homomorphic operations are constrained by the prescribed data structure, e.g. a tree or a graph, in which the matrices are stored. The security properties of all the constructions are based on the hardness of the Learning With Errors (LWE) problem. The technical heart is to "control" the "chain reactions" over a sequence of LWE instances.
Viewing the homomorphic obfuscation scheme from a different angle, it coincides with the graph-induced multilinear maps proposed by Gentry, Gorbunov and Halevi (GGH15). Our proof technique identifies several "safe modes" of GGH15 that were not previously known, including a simple special case: if the graph is acyclic and the matrices are sampled independently from binary or error distributions, then the encodings of the matrices are pseudorandom.
A Note on Black-Box Separations for Indistinguishability Obfuscation
Mahmoody et al. (TCC 2016-A) showed that basing indistinguishability obfuscation (IO) on a wide range of primitives in a black-box way is as hard as basing public-key cryptography on one-way functions. The list included any primitive that could be realized relative to random trapdoor permutation or degree-O(1) graded encoding oracle models in a way that is secure against computationally unbounded polynomial-query attackers.
In this work, relying on the recent result of Brakerski, Brzuska, and Fleischhacker (ePrint 2016/226), in which they ruled out statistically secure approximately correct IO, we show that there are no fully black-box constructions of IO from any of the primitives listed above, assuming the existence of one-way functions and that NP ⊄ coAM.
At a technical level, we provide an alternative to the Borel-Cantelli lemma that is useful for deriving black-box separations. In particular, using this lemma we show that attacks in idealized models that succeed with only a constant advantage over the trivial bound are indeed sufficient for deriving fully black-box separations from primitives that exist unconditionally in such idealized models.
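For reference, the classical Borel-Cantelli lemma for which the paper provides an alternative states:

```latex
\textbf{Borel--Cantelli lemma.} Let $A_1, A_2, \ldots$ be events in a probability space.
If $\sum_{n=1}^{\infty} \Pr[A_n] < \infty$, then
\[
  \Pr\!\Big[\limsup_{n \to \infty} A_n\Big]
  \;=\; \Pr\big[A_n \text{ occurs for infinitely many } n\big] \;=\; 0 .
\]
```

In separation arguments one typically wants the contrapositive direction: an attack that succeeds infinitely often forces the corresponding success probabilities not to vanish too quickly, which is where a quantitatively sharper replacement lemma can help.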
On Foundations of Protecting Computations
Information technology systems have become indispensable to uphold our way of living, our economy, and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention.
Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow one to secure larger information systems. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.
Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions.
In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator.
(Agrikola, Couteau, and Hofheinz, PKC, 2020)
Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Using such overidealizations makes it possible to prove secure building blocks which are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows us to transport several security proofs from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator.
(Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)
Protecting computations using compression. Perfect compression algorithms have the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as “honey encryption” or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call “pseudorandom encodings”. As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator.
(Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)
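The encode/decode interface behind perfect compression can be illustrated on one of the rare distributions where it exists exactly. The distribution below is chosen purely for illustration; pseudorandom encodings relax this by requiring the encoding of a sample only to be computationally indistinguishable from uniform, not truly uniform.

```python
import random

# Toy distribution D: uniform over the 256 even integers in [0, 512).
def sample_D() -> int:
    return 2 * random.randrange(256)

def encode(n: int) -> int:
    """Perfect compression for D: a D-sample encodes to a uniform 8-bit value."""
    assert n % 2 == 0 and 0 <= n < 512
    return n // 2            # uniform over [0, 256) when n ~ D

def decode(b: int) -> int:
    """Decoding recovers the sample exactly (correctness)."""
    return 2 * b

n = sample_D()
assert decode(encode(n)) == n
assert 0 <= encode(n) < 256
```

Because encode is a bijection between the support of D and all 8-bit values, the compressed output carries no redundancy at all, which is exactly the property the abstract calls "leaving no room for any further compression".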
Cryptographic approaches to security and optimization in machine learning
Modern machine learning techniques have achieved surprisingly good standard test accuracy, yet classical machine learning theory has been unable to explain the underlying reason behind this success. The phenomenon of adversarial examples further complicates our understanding of what it means to have good generalization ability. Classifiers that generalize well to the test set are easily fooled by imperceptible image modifications, which can often be computed without knowledge of the classifier itself. The adversarial error of a classifier measures its error when each test data point may be modified by an adversarial algorithm before it is given as input to the classifier. Follow-up work has shown that a tradeoff exists between optimizing for standard generalization error and optimizing for adversarial error. This calls into question whether standard generalization error is the correct metric to measure.
We try to understand the generalization capability of modern machine learning techniques through the lens of adversarial examples. To reconcile the apparent tradeoff between the two competing notions of error, we create new security definitions and classifier constructions which allow us to prove an upper bound on the adversarial error that decreases as standard test error decreases. We introduce a cryptographic proof technique by defining a security assumption in a simpler attack setting and proving a security reduction from a restricted black-box attack problem to this security assumption. We then investigate the double descent curve in the interpolation regime, where test error can continue to decrease even after training error has reached zero, to give a natural explanation for the observed tradeoff between adversarial error and standard generalization error.
The second part of our work investigates further this notion of a black-box model by looking at the separation between being able to evaluate a function and being able to actually understand it. This is formalized through the notion of function obfuscation in cryptography. Given some concrete implementation of a function, the implementation is considered obfuscated if a user cannot produce the function's output on a test input without querying the implementation itself. This means that a user cannot actually learn or understand the function even though all of the implementation details are presented in the clear. As expected, this is a very strong requirement that does not hold for all functions one might be interested in. In our work we make progress on providing obfuscation schemes for simple, explicit function classes.
The last part of our work investigates non-statistical biases and algorithms for nonconvex optimization problems. We show that the continuous-time limit of stochastic gradient descent does not converge directly to the local optimum, but rather has a bias term which grows with the step size. We also construct novel, non-statistical algorithms for two parametric learning problems by employing lattice basis reduction techniques from cryptography.
Indistinguishability Obfuscation from Simple-to-State Hard Problems: New Assumptions, New Techniques, and Simplification
In this work, we study the question of what set of simple-to-state assumptions suffice for constructing functional encryption and indistinguishability obfuscation (iO), supporting all functions describable by polynomial-size circuits. Our work improves over the state-of-the-art work of Jain, Lin, Matt, and Sahai (Eurocrypt 2019) in multiple dimensions.
New Assumption: Prior to our work, all constructions of iO from simple assumptions required novel pseudorandomness generators involving LWE samples and constant-degree polynomials over the integers, evaluated on the error of the LWE samples. In contrast, Boolean pseudorandom generators (PRGs) computable by constant-degree polynomials have been extensively studied since the work of Goldreich (2000). We show how to replace the novel pseudorandom objects over the integers used in previous works with appropriate Boolean pseudorandom generators with sufficient stretch, when combined with LWE with binary error over suitable parameters. Both binary-error LWE and constant-degree Goldreich PRGs have been a subject of extensive cryptanalysis since long before our work, and thus we back the plausibility of our assumption with security against the algorithms studied in the context of cryptanalysis of these objects.
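A Goldreich-style local PRG of the kind the assumption refers to can be sketched as follows. The 5-ary XOR-AND predicate and the randomly chosen hypergraph below are illustrative choices (the predicate is a commonly studied candidate, but the paper's exact predicate and parameters are not specified here); the defining feature is that each output bit depends on only a constant number of seed bits, so each output bit is a constant-degree polynomial in the seed.

```python
import random

def goldreich_prg(seed_bits, out_len, hypergraph_rng):
    """Goldreich-style local PRG sketch: each output bit applies a fixed
    constant-arity predicate to a few seed bits chosen by a public hypergraph."""
    n = len(seed_bits)
    out = []
    for _ in range(out_len):
        idx = hypergraph_rng.sample(range(n), 5)   # one public hypergraph edge
        x = [seed_bits[j] for j in idx]
        # XOR-AND predicate: linear part XORed with a degree-2 term.
        out.append(x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4]))
    return out

hypergraph_rng = random.Random(0)                  # fixes the public hypergraph
seed = [random.randrange(2) for _ in range(128)]   # secret seed
stream = goldreich_prg(seed, 256, hypergraph_rng)  # stretch: 128 -> 256 bits
assert len(stream) == 256 and set(stream) <= {0, 1}
```

The conjecture for such candidates is that, for a suitable predicate and expanding hypergraph, the output stream is pseudorandom even though every output bit is locally computable.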
New Techniques: We introduce a number of new techniques:
- We show how to build partially-hiding public-key functional encryption, supporting degree-2 functions in the secret part of the message and arithmetic functions over the public part of the message, assuming only standard assumptions over asymmetric pairing groups.
- We construct single-ciphertext and single-secret-key functional encryption for all circuits with long outputs, which has the features of linear key generation and compact ciphertext, assuming only the LWE assumption.
Simplification: Unlike prior works, our new techniques furthermore let us construct public-key functional encryption for polynomial-sized circuits directly (without invoking any bootstrapping theorem, nor a transformation from secret-key to public-key FE), and based only on the polynomial hardness of the underlying assumptions. The functional encryption scheme satisfies a strong notion of efficiency: the size of the ciphertext is independent of the size of the circuit to be computed, and grows only sublinearly in the output size of the circuit and polynomially in the input size and depth of the circuit. Finally, assuming that the underlying assumptions are subexponentially hard, we can bootstrap this construction to achieve iO.
On the Complexity of Collision Resistant Hash Functions: New and Old Black-Box Separations
The complexity of collision-resistant hash functions has been long studied in the theory of cryptography. While we often think about them as a Minicrypt primitive, black-box separations demonstrate that constructions from one-way functions are unlikely. Indeed, theoretical constructions of collision-resistant hash functions are based on rather structured assumptions.
We make two contributions to this study:
1. A New Separation: We show that collision-resistant hashing does not imply hard problems in the class Statistical Zero Knowledge in a black-box way.
2. New Proofs: We give new proofs for the results of Simon, ruling out black-box reductions of collision-resistant hashing to one-way permutations, and of Asharov and Segev, ruling out black-box reductions to indistinguishability obfuscation. The new proofs are quite different from the previous ones and are based on simple coupling arguments.
Cryptographic Hashing From Strong One-Way Functions
Constructing collision-resistant hash families (CRHFs) from one-way functions is a long-standing open problem and source of frustration in theoretical cryptography. In fact, there are strong negative results: black-box separations from one-way functions, even exponentially secure ones, against polynomial-time adversaries (Simon, EUROCRYPT '98), and even from indistinguishability obfuscation (Asharov and Segev, FOCS '15).
In this work, we formulate a mild strengthening of exponentially secure one-way functions, and we construct CRHFs from such functions. Specifically, our security notion requires that every polynomial-time algorithm has at most an exponentially small probability of simultaneously inverting two independent challenges.
More generally, we consider the problem of simultaneously inverting several functions, which we say constitute a "one-way product function" (OWPF). We show that sufficiently hard OWPFs yield hash families that are multi-input correlation intractable (Canetti, Goldreich, and Halevi, STOC '98) with respect to all sparse (bounded-arity) output relations. Additionally assuming indistinguishability obfuscation, we construct hash families that achieve a broader notion of correlation intractability, extending the recent work of Kalai, Rothblum, and Rothblum (CRYPTO '17). In particular, these families are sufficient to instantiate the Fiat-Shamir heuristic in the plain model for a natural class of interactive proofs.
An interesting consequence of our results is a potential new avenue for bypassing black-box separations. In particular, proving (with necessarily non-black-box techniques) that parallel repetition amplifies the hardness of specific one-way functions -- for example, all one-way permutations -- suffices to directly bypass Simon's impossibility result.
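The two-challenge security game underlying OWPFs can be sketched concretely. Everything below is a toy: the "one-way" function is a truncated hash over a tiny domain (and therefore not actually one-way), and the brute-force inverter exists only because the domain is small enough to enumerate; the sketch is meant to show the shape of the game, where the adversary must invert both independent challenges at once.

```python
import hashlib
import random

N = 2 ** 12  # tiny toy domain; real OWPFs live on exponentially large domains

def f(x: int) -> int:
    """Toy stand-in for a one-way function (NOT one-way at this size)."""
    h = hashlib.sha256(x.to_bytes(2, "big")).digest()
    return int.from_bytes(h[:2], "big") % N

def two_challenge_game(invert) -> bool:
    """OWPF-style game: the adversary must invert TWO independent
    challenges f(x1), f(x2) simultaneously."""
    x1, x2 = random.randrange(N), random.randrange(N)
    z1, z2 = invert(f(x1), f(x2))
    return f(z1) == f(x1) and f(z2) == f(x2)

def brute_force(y1: int, y2: int):
    """Exhaustive inverter; feasible only over the toy domain."""
    inv = {}
    for x in range(N):
        inv.setdefault(f(x), x)
    return inv[y1], inv[y2]

assert two_challenge_game(brute_force)
```

The point of the security notion is quantitative: a genuinely hard OWPF requires the simultaneous inversion probability of every polynomial-time adversary to be far smaller than what independent single-challenge attacks would give, which is exactly the parallel-repetition hardness amplification the last paragraph refers to.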
- …