Partial-indistinguishability obfuscation using braids
An obfuscator is an algorithm that translates circuits into
functionally-equivalent similarly-sized circuits that are hard to understand.
Efficient obfuscators would have many applications in cryptography. Until
recently, theoretical progress has mainly been limited to no-go results. Recent
works have proposed the first efficient obfuscation algorithms for classical
logic circuits, based on a notion of indistinguishability against
polynomial-time adversaries. In this work, we propose a new notion of
obfuscation, which we call partial-indistinguishability. This notion is based
on computationally universal groups with efficiently computable normal forms,
and appears to be incomparable with existing definitions. We describe universal
gate sets for both classical and quantum computation, in which our definition
of obfuscation can be met by polynomial-time algorithms. We also discuss some
potential applications to testing quantum computers. We stress that the
cryptographic security of these obfuscators, especially when composed with
translation from other gate sets, remains an open question.
Comment: 21 pages, Proceedings of TQC 201
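The normal-form idea admits a simple classical illustration: compile programs to words in a group; because equal group elements have a unique, efficiently computable normal form, any two compilations of the same element become syntactically identical after normalization. A toy Python sketch using free-group reduction as the normal form (the paper's braid groups are far richer; everything here is illustrative, not the authors' construction):

```python
def normal_form(word):
    """Freely reduce a word: cancel adjacent generator/inverse pairs.

    A word is a list of nonzero ints: g denotes generator g, -g its
    inverse. Stack-based free reduction yields the unique normal form
    of the group element the word represents.
    """
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()          # x followed by x^{-1} cancels
        else:
            out.append(g)
    return out

# Two syntactically different "programs" for the same group element
# become identical after normalization:
w1 = [1, 2, -2, 3]             # x1 x2 x2^{-1} x3
w2 = [1, -2, 2, 3]             # x1 x2^{-1} x2 x3
assert normal_form(w1) == normal_form(w2) == [1, 3]
```

Indistinguishability here is only up to equality in the group; the security of any such scheme hinges on how hard it is to relate normal forms back to the original circuits.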
Semantic Security and Indistinguishability in the Quantum World
At CRYPTO 2013, Boneh and Zhandry initiated the study of quantum-secure
encryption. They proposed the first indistinguishability definitions for the
quantum world, where the actual indistinguishability only holds for classical
messages, and provided arguments why it might be hard to achieve a stronger
notion. In this work, we show that stronger notions are achievable, where the
indistinguishability holds for quantum superpositions of messages. We
investigate exhaustively the possibilities and subtle differences in defining
such a quantum indistinguishability notion for symmetric-key encryption
schemes. We justify our stronger definition by showing its equivalence to novel
quantum semantic-security notions that we introduce. Furthermore, we show that
our new security definitions cannot be achieved by a large class of ciphers --
those which quasi-preserve the message length. On the other hand, we
provide a secure construction based on quantum-resistant pseudorandom
permutations; this construction can be used as a generic transformation for
turning a large class of encryption schemes into quantum indistinguishable and
hence quantum semantically secure ones. Moreover, our construction is the first
completely classical encryption scheme shown to be secure against an even
stronger notion of indistinguishability, which was previously known to be
achievable only by using quantum messages and arbitrary quantum encryption
circuits.
Comment: 37 pages, 2 figures
Quantum Fully Homomorphic Encryption With Verification
Fully-homomorphic encryption (FHE) enables computation on encrypted data
while maintaining secrecy. Recent research has shown that such schemes exist
even for quantum computation. Given the numerous applications of classical FHE
(zero-knowledge proofs, secure two-party computation, obfuscation, etc.) it is
reasonable to hope that quantum FHE (or QFHE) will lead to many new results in
the quantum setting. However, a crucial ingredient in almost all applications
of FHE is circuit verification. Classically, verification is performed by
checking a transcript of the homomorphic computation. Quantumly, this strategy
is impossible due to no-cloning. This leads to an important open question: can
quantum computations be delegated and verified in a non-interactive manner? In
this work, we answer this question in the affirmative, by constructing a scheme
for QFHE with verification (vQFHE). Our scheme provides authenticated
encryption, and enables arbitrary polynomial-time quantum computations without
the need of interaction between client and server. Verification is almost
entirely classical; for computations that start and end with classical states,
it is completely classical. As a first application, we show how to construct
quantum one-time programs from classical one-time programs and vQFHE.
Comment: 30 pages
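The classical transcript-checking strategy mentioned above can be made concrete with a toy additively homomorphic scheme: the server records each ciphertext-level operation, and the client verifies by replaying the transcript on copies of the ciphertexts. All names and the scheme itself are illustrative; the point is that verification rests on copying classical strings, which no-cloning rules out for quantum ciphertexts.

```python
import random

P = 2**61 - 1   # toy prime modulus

def enc(key, m):
    return (m + key) % P    # additively homomorphic one-time pad

def dec(key, c):
    return (c - key) % P

def server_eval(cts):
    """Homomorphically sum all input ciphertexts, recording a transcript."""
    transcript = []
    acc = cts[0]
    for i in range(1, len(cts)):
        acc = (acc + cts[i]) % P
        transcript.append(("add", i, acc))   # step index and claimed output
    return acc, transcript

def verify(cts, result, transcript):
    """Replay the transcript on copies of the ciphertexts and compare.

    This replay is exactly the string-copying step that no-cloning
    forbids when ciphertexts are quantum states.
    """
    acc = cts[0]
    for op, i, claimed in transcript:
        assert op == "add"
        acc = (acc + cts[i]) % P
        if acc != claimed:
            return False
    return acc == result

random.seed(7)
keys = [random.randrange(P) for _ in range(4)]
msgs = [3, 1, 4, 1]
cts = [enc(k, m) for k, m in zip(keys, msgs)]
result, tr = server_eval(cts)
assert verify(cts, result, tr)
# The aggregate key decrypts the homomorphic sum:
assert dec(sum(keys) % P, result) == sum(msgs)
```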
Unforgeable Quantum Encryption
We study the problem of encrypting and authenticating quantum data in the
presence of adversaries making adaptive chosen plaintext and chosen ciphertext
queries. Classically, security games use string copying and comparison to
detect adversarial cheating in such scenarios. Quantumly, this approach would
violate no-cloning. We develop new techniques to overcome this problem: we use
entanglement to detect cheating, and rely on recent results for characterizing
quantum encryption schemes. We give definitions for (i) ciphertext
unforgeability, (ii) indistinguishability under adaptive chosen-ciphertext
attack, and (iii) authenticated encryption. The restriction of each definition
to the classical setting is at least as strong as the corresponding classical
notion: (i) implies INT-CTXT, (ii) implies IND-CCA2, and (iii) implies AE. All
of our new notions also imply QIND-CPA privacy. Combining one-time
authentication and classical pseudorandomness, we construct schemes for each of
these new quantum security notions, and provide several separation examples.
Along the way, we also give a new definition of one-time quantum authentication
which, unlike all previous approaches, authenticates ciphertexts rather than
plaintexts.
Comment: 22+2 pages, 1 figure. v3: error in the definition of QIND-CCA2 fixed,
some proofs related to QIND-CCA2 clarified
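The "string copying and comparison" the abstract refers to is the standard classical forgery check: the challenger copies every issued ciphertext into a set, and an adversary wins only with a valid ciphertext that was never issued. A minimal sketch in Python, assuming a hypothetical encrypt-then-MAC toy scheme (standard library only, not for real use):

```python
import hmac, hashlib, os, secrets

KEY_ENC = secrets.token_bytes(16)
KEY_MAC = secrets.token_bytes(16)

def _keystream(nonce, n):
    # Toy stream cipher: hash in counter mode (illustrative only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(KEY_ENC + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(msg):
    nonce = os.urandom(16)
    body = nonce + bytes(x ^ k for x, k in zip(msg, _keystream(nonce, len(msg))))
    return body + hmac.new(KEY_MAC, body, hashlib.sha256).digest()

def decrypt(ct):
    body, tag = ct[:-32], ct[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY_MAC, body, hashlib.sha256).digest()):
        return None                      # reject invalid ciphertexts
    nonce, enc = body[:16], body[16:]
    return bytes(x ^ k for x, k in zip(enc, _keystream(nonce, len(enc))))

# The INT-CTXT game: the challenger copies every issued ciphertext into
# a set, and an adversary's ciphertext counts as a forgery only if it is
# valid AND fresh -- string copying and comparison throughout.
issued = set()

def enc_oracle(msg):
    ct = encrypt(msg)
    issued.add(ct)          # copying the ciphertext: impossible for qubits
    return ct

def forgery_wins(ct):
    return ct not in issued and decrypt(ct) is not None

ct = enc_oracle(b"hello")
assert decrypt(ct) == b"hello"
assert not forgery_wins(ct)                   # a replay is not a forgery
assert not forgery_wins(ct[:-1] + b"\x00")    # tampering fails the MAC
```

The paper's contribution is precisely a replacement for the `issued` set and the membership test, using entanglement instead of copying.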
Classical Cryptographic Protocols in a Quantum World
Cryptographic protocols, such as protocols for secure function evaluation
(SFE), have played a crucial role in the development of modern cryptography.
The extensive theory of these protocols, however, deals almost exclusively with
classical attackers. If we accept that quantum information processing is the
most realistic model of physically feasible computation, then we must ask: what
classical protocols remain secure against quantum attackers?
Our main contribution is showing the existence of classical two-party
protocols for the secure evaluation of any polynomial-time function under
reasonable computational assumptions (for example, it suffices that the
learning with errors problem be hard for quantum polynomial time). Our result
shows that the basic two-party feasibility picture from classical cryptography
remains unchanged in a quantum world.
Comment: Full version of an old paper in Crypto'11. Invited to IJQI. This is the
authors' copy with different formatting
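The learning with errors (LWE) assumption invoked above is easy to state concretely: given many pairs (a, ⟨a, s⟩ + e mod q) with small noise e, recovering s (or distinguishing the pairs from uniform) is conjectured hard even for quantum algorithms. A tiny illustrative sampler, with toy parameters far below anything secure:

```python
import random

def lwe_samples(n=8, q=97, m=16, seed=1):
    """Sample m LWE pairs (a, b = <a, s> + e mod q) for a random secret s.

    Distinguishing these pairs from uniform ones is the (decisional) LWE
    problem; its conjectured quantum hardness is the kind of assumption
    under which the classical SFE protocols remain secure. Parameters
    here are tiny and purely illustrative.
    """
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]          # secret vector
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]      # public vector
        e = rng.choice((-1, 0, 1))                    # small noise
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples()
# Knowing s, the noise can be recovered and is always small:
assert all((b - sum(ai * si for ai, si in zip(a, s))) % 97 in (0, 1, 96)
           for a, b in samples)
```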
The Power of Natural Properties as Oracles
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimum Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} ⊄ P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^{NP} ⊄ P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.
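MCSP itself is simple to state: given a truth table and a size bound, decide whether some circuit within the bound computes it. A brute-force sketch for tiny parameters (gate basis and encoding are illustrative choices, not from the paper):

```python
from itertools import product

def mcsp(target, n, s):
    """Brute-force decision version of the Minimum Circuit Size Problem.

    `target` is a truth table encoded as a (2^n)-bit integer: bit j is
    the output on input j. Asks: does some circuit with at most s
    AND/OR/NOT gates compute `target`?  Inputs and the constants 0/1
    are free wires. The search is exponential in s; no efficient
    algorithm for MCSP is known.
    """
    mask = (1 << (1 << n)) - 1
    # Truth table of variable i: bit j of the table equals bit i of j.
    variables = [sum(((j >> i) & 1) << j for j in range(1 << n))
                 for i in range(n)]
    base = tuple(variables) + (0, mask)

    def search(wires, gates_left):
        if target in wires:
            return True
        if gates_left == 0:
            return False
        for a, b in product(wires, repeat=2):
            for g in (a & b, a | b, (~a) & mask):   # AND, OR, NOT(a)
                if g not in wires and search(wires + (g,), gates_left - 1):
                    return True
        return False

    return search(base, s)

NAND = 0b0111   # truth table of NOT(x AND y) over two variables
assert mcsp(NAND, 2, 2)        # AND then NOT: two gates suffice
assert not mcsp(NAND, 2, 1)    # but no single gate computes it
XOR = 0b0110
assert not mcsp(XOR, 2, 2)     # XOR needs more than two such gates
```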
Foundations and applications of program obfuscation
Code is said to be obfuscated if it is intentionally difficult for humans to understand.
Obfuscating a program conceals its sensitive implementation details and
protects it from reverse engineering and hacking. Beyond software protection, obfuscation
is also a powerful cryptographic tool, enabling a variety of advanced applications.
Ideally, an obfuscated program would hide any information about the original
program that cannot be obtained by simply executing it. However, Barak et al.
[CRYPTO 01] proved that for some programs, such ideal obfuscation is impossible.
Nevertheless, Garg et al. [FOCS 13] recently suggested a candidate general-purpose
obfuscator which is conjectured to satisfy a weaker notion of security called indistinguishability
obfuscation.
In this thesis, we study the feasibility and applicability of secure obfuscation:
- What notions of secure obfuscation are possible and under what assumptions?
- How useful are weak notions like indistinguishability obfuscation?
Our first result shows that the applications of indistinguishability obfuscation go
well beyond cryptography. We study the tractability of computing a Nash equilibrium
of a game, a central problem in algorithmic game theory and complexity theory.
Based on indistinguishability obfuscation, we construct explicit games where a Nash
equilibrium cannot be found efficiently.
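For context: Nash's theorem guarantees that every finite game has a mixed equilibrium, so the result above concerns the hardness of finding one. The easy special case, checking for pure-strategy equilibria, can be sketched in a few lines of Python; the game and payoffs below are illustrative, not from the thesis.

```python
def pure_nash(A, B):
    """All pure Nash equilibria of a bimatrix game.

    A[i][j] and B[i][j] are the payoffs to the row and column player
    for the action profile (i, j). A profile is an equilibrium iff
    neither player can gain by deviating unilaterally.
    """
    eq = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
            col_best = all(B[i][j] >= B[i][l] for l in range(len(A[0])))
            if row_best and col_best:
                eq.append((i, j))
    return eq

# Prisoner's dilemma (action 0 = cooperate, 1 = defect):
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
assert pure_nash(A, B) == [(1, 1)]   # defect/defect is the unique one
```

Pure equilibria may not exist at all (e.g., matching pennies), which is why the hard instances concern mixed equilibria, whose existence is guaranteed.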
We also prove the following results on the feasibility of obfuscation. Our starting
point is the Garg et al. obfuscator that is based on a new algebraic encoding scheme
known as multilinear maps [Garg et al. EUROCRYPT 13].
1. Building on the work of Brakerski and Rothblum [TCC 14], we provide the first
rigorous security analysis for obfuscation. We give a variant of the Garg et al.
obfuscator and reduce its security to that of the multilinear maps. Specifically,
modeling the multilinear encodings as ideal boxes with perfect security, we prove
ideal security for our obfuscator. Our reduction shows that the obfuscator resists
all generic attacks that only use the encodings' permitted interface and do not
exploit their algebraic representation.
2. Going beyond generic attacks, we study the notion of virtual-gray-box
obfuscation [Bitansky et al. CRYPTO 10]. This relaxation of ideal security is stronger
than indistinguishability obfuscation and has several important applications
such as obfuscating password protected programs. We formulate a security
requirement for multilinear maps which is sufficient, as well as necessary for
virtual-gray-box obfuscation.
3. Motivated by the question of basing obfuscation on ideal objects that are simpler
than multilinear maps, we give a negative result showing that ideal obfuscation
is impossible, even in the random oracle model, where the obfuscator is given access
to an ideal random function. This is the first negative result for obfuscation
in a non-trivial idealized model.
Hardness vs. (Very Little) Structure in Cryptography: A Multi-Prover Interactive Proofs Perspective
The hardness of highly-structured computational problems gives rise to a variety of public-key primitives. On one hand, the structure exhibited by such problems underlies the basic functionality of public-key primitives, but on the other hand it may endanger public-key cryptography in its entirety via potential algorithmic advances. This subtle interplay initiated a fundamental line of research on whether structure is inherently necessary for cryptography, starting with Rudich's early work (PhD Thesis '88) and recently leading to that of Bitansky, Degwekar and Vaikuntanathan (CRYPTO '17).
Identifying the structure of computational problems with their corresponding complexity classes, Bitansky et al. proved that a variety of public-key primitives (e.g., public-key encryption, oblivious transfer and even functional encryption) cannot be used in a black-box manner to construct either any hard language that has NP-verifiers both for the language itself and for its complement, or any hard language (and even promise problem) that has a statistical zero-knowledge proof system -- corresponding to hardness in the structured classes NP ∩ coNP or SZK, respectively, from a black-box perspective.
In this work we prove that the same variety of public-key primitives do not inherently require even very little structure in a black-box manner: We prove that they do not imply any hard language that has multi-prover interactive proof systems both for the language and for its complement -- corresponding to hardness in the class MIP ∩ coMIP from a black-box perspective. Conceptually, given that MIP = NEXP, our result rules out languages with very little structure. Additionally, we prove a similar result for collision-resistant hash functions, and more generally for any cryptographic primitive that exists relative to a random oracle.
Already the cases of languages that have IP or AM proof systems both for the language itself and for its complement, which we rule out as immediate corollaries, lead to intriguing insights. For the case of IP, where our result can be circumvented using non-black-box techniques, we reveal a gap between black-box and non-black-box techniques. For the case of AM, where circumventing our result via non-black-box techniques would be a major development, we both strengthen and unify the proofs of Bitansky et al. for languages that have NP-verifiers both for the language itself and for its complement and for languages that have a statistical zero-knowledge proof system.
Indistinguishability Obfuscation from Well-Founded Assumptions
In this work, we show how to construct indistinguishability obfuscation from
subexponential hardness of four well-founded assumptions. We prove:
Let τ ∈ (0, ∞), δ ∈ (0, 1), ε ∈ (0, 1) be arbitrary
constants. Assume sub-exponential security of the following assumptions, where
λ is a security parameter, and the parameters ℓ, k, n below are
large enough polynomials in λ:
- The SXDH assumption on asymmetric bilinear groups of a prime order p = O(2^λ),
- The LWE assumption over Z_p with subexponential
modulus-to-noise ratio 2^{k^ε}, where k is the dimension of the LWE
secret,
- The LPN assumption over Z_p with polynomially many LPN samples
and error rate 1/ℓ^δ, where ℓ is the dimension of the LPN
secret,
- The existence of a Boolean PRG in NC^0 with stretch
n^{1+τ}.
Then, (subexponentially secure) indistinguishability obfuscation for all
polynomial-size circuits exists.
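The last assumption, a Boolean PRG of polynomial stretch computable in constant depth, can be made concrete with a Goldreich-style local PRG: every output bit applies a fixed constant-arity predicate to a few seed bits. The sketch below is illustrative only; the predicate, the random wiring, and all parameters are assumptions of this sketch, and real candidates choose the wiring carefully (distinct indices, expander-like structure).

```python
import random

def local_prg(seed_bits, tau=0.5, rng_seed=0):
    """Candidate PRG in NC^0 with stretch n^(1+tau) (Goldreich-style sketch).

    Each output bit reads only 5 seed bits and applies the fixed
    "XOR-AND" predicate P(a,b,c,d,e) = a ^ b ^ c ^ (d & e), so every
    output bit is a constant-depth, constant-fan-in circuit. The wiring
    is public and fixed by rng_seed; for simplicity the 5 indices are
    drawn with replacement, a shortcut real candidates avoid.
    """
    n = len(seed_bits)
    m = int(n ** (1 + tau))                 # polynomial stretch
    rng = random.Random(rng_seed)
    out = []
    for _ in range(m):
        a, b, c, d, e = (seed_bits[rng.randrange(n)] for _ in range(5))
        out.append(a ^ b ^ c ^ (d & e))
    return out

stream = local_prg([0, 1] * 32, tau=0.5)    # 64-bit seed
assert len(stream) == 512                   # 64^{1.5} = 512 output bits
assert set(stream) <= {0, 1}
```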