Why Fiat-Shamir for Proofs Lacks a Proof
The Fiat-Shamir heuristic (CRYPTO '86) is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover's first message to select the verifier's challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai (FOCS '03) shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function.
This leaves us with the following interesting possibility: perhaps there exists a hash function that securely instantiates the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if it can fail for some computationally sound arguments. Indeed, the existence of such hash functions has been conjectured by Barak, Lindell and Vadhan (FOCS '03), who also gave a seemingly reasonable and sufficient condition under which such hash functions exist. However, we do not have any provably secure construction of such hash functions, under any standard assumption such as the hardness of DDH, RSA, QR, LWE, etc.
In this work we give a broad black-box separation result, showing that the security of such hash functions cannot be proved under virtually any standard cryptographic assumption via a black-box reduction.
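To make the transform concrete, here is a minimal sketch of Fiat-Shamir applied to the Schnorr identification protocol, the canonical 3-message public-coin example: the verifier's random challenge is replaced by a hash of the prover's first message. The group parameters are toy-sized for illustration and offer no real security.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2879, 1439, 2

def H(*parts):
    """Fiat-Shamir challenge: a hash stands in for the verifier's coins."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Non-interactive proof of knowledge of x such that y = g^x mod p."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)          # prover's first message (commitment)
    c = H(g, y, a)            # challenge derived by hashing, not sent by verifier
    z = (r + c * x) % q       # response
    return y, (a, z)

def verify(y, proof):
    a, z = proof
    c = H(g, y, a)            # verifier recomputes the same challenge
    return pow(g, z, p) == (a * pow(y, c, p)) % p

x = secrets.randbelow(q)
y, proof = prove(x)
assert verify(y, proof)
```

The soundness question raised by the abstract is exactly whether any concrete choice of `H` can play the role of the random oracle here for all statistically sound proofs.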
On Provable White-Box Security in the Strong Incompressibility Model
Incompressibility is a popular security notion for white-box cryptography and captures that a large encryption program cannot be compressed without losing functionality. Fouque, Karpman, Kirchner and Minaud (FKKM) defined strong incompressibility, where a compressed program should not even help to distinguish encryptions of two messages of equal length. Equivalently, the notion can be phrased as indistinguishability under chosen-plaintext attacks and key-leakage (LK-IND-CPA), where the leakage rate is high.
In this paper, we show that LK-IND-CPA security with superlogarithmic-length leakage, and thus strong incompressibility, cannot be proven under standard (i.e., single-stage) assumptions if the encryption scheme is key-fixing, i.e., if a polynomial number of message-ciphertext pairs uniquely determines the key with high probability.
Our impossibility result refutes a claim by FKKM that their big-key generation mechanism achieves strong incompressibility when combined with any PRG or any conventional encryption scheme, since the claim is not true for encryption schemes which are key-fixing (or for PRGs which are injective). In particular, we prove that the cipher block chaining (CBC) block cipher mode is key-fixing when modelling the cipher as a truly random permutation for each key. Subsequent to and inspired by our work, FKKM prove that their original big-key generation mechanism can be combined with a random oracle into an LK-IND-CPA-secure encryption scheme, circumventing the impossibility result by the use of an idealised model.
Along the way, our work also helps clarify the relations between incompressible white-box cryptography, big-key symmetric encryption, and general leakage-resilient cryptography, as well as their limitations.
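The key-fixing property can be demonstrated with a toy model: an 8-bit block cipher in the ideal-cipher model (an independent random permutation per key, here a hypothetical stand-in built by seeding Python's `random`), a tiny key space so exhaustive search is feasible, and CBC encryption of a short known message. A handful of message-ciphertext pairs then pins down the key uniquely.

```python
import random

BLOCK = 256            # 8-bit "blocks"
KEYS = range(64)       # tiny key space, so we can search it exhaustively

def perm(k):
    """Ideal-cipher stand-in: an independent random permutation per key."""
    rng = random.Random(k)
    table = list(range(BLOCK))
    rng.shuffle(table)
    return table

def cbc_encrypt(k, iv, msg):
    """Textbook CBC mode over the toy cipher."""
    E = perm(k)
    out, prev = [], iv
    for m in msg:
        prev = E[m ^ prev]
        out.append(prev)
    return out

key, iv = 37, 123
msg = [5, 200, 17, 99]
ct = cbc_encrypt(key, iv, msg)

# Key-fixing: one short known message/ciphertext pair rules out (with
# overwhelming probability) every other key in the key space.
candidates = [k for k in KEYS if cbc_encrypt(k, iv, msg) == ct]
assert candidates == [key]
```

This is what makes the big-key claim fail for key-fixing schemes: once the key is information-theoretically determined by a few pairs, leakage arguments that rely on residual key entropy break down.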
Unprovability of Leakage-Resilient Cryptography Beyond the Information-Theoretic Limit
In recent years, leakage-resilient cryptography---the design of cryptographic protocols resilient to bounded leakage of honest players' secrets---has received significant attention. A major limitation of known provably-secure constructions (based on polynomial hardness assumptions) is that they require the secrets to have sufficient actual (i.e., information-theoretic), as opposed to computational, min-entropy even after the leakage.
In this work, we present barriers to provably-secure constructions beyond the "information-theoretic barrier": Assume the existence of collision-resistant hash functions. Then, no NP search problem with -bounded number of witnesses can be proven (even worst-case) hard in the presence of bits of computationally-efficient leakage of the witness, using a black-box reduction to any -round assumption. In particular, this implies that -leakage resilient injective one-way functions, and more generally, one-way functions with at most pre-images, cannot be based on any "standard" complexity assumption using a black-box reduction.
Unprovable Security of 2-Message Zero Knowledge
Goldreich and Oren (JoC '94) show that only languages in BPP have 2-message zero-knowledge arguments. In this paper we consider weaker, super-polynomial simulation (SPS), notions of
zero-knowledge. We present barriers to using black-box reductions for demonstrating soundness of 2-message protocols with efficient prover strategies satisfying SPS zero-knowledge. More precisely, if -hard one-way functions exist for a super-polynomial , the following holds about 2-message efficient prover arguments over statements of length .
1. Black-box reductions cannot prove soundness of 2-message -simulatable arguments based on any polynomial-time intractability assumption, unless the assumption can be broken in polynomial time. This complements known 2-message quasi-polynomial-time simulatable arguments using a quasi-polynomial-time reduction (Pass '03), and 2-message exponential-time simulatable proofs using a polynomial-time reduction (Dwork-Naor '00, Pass '03).
2. Black-box reductions cannot prove soundness of 2-message strong -simulatable arguments, even if the reduction and the challenger both can run in -time, unless the assumption can be broken in time. Strong -simulatability means that the output of the simulator is indistinguishable also for -size circuits, with a indistinguishability gap. This complements known 3-message strong quasi-polynomial-time simulatable proofs (Blum '86, Canetti et al. '00), or 2-message quasi-polynomial-time simulatable arguments (Khurana-Sahai '17, Kalai-Khurana-Sahai '18) satisfying a relaxed notion of strong simulation where the distinguisher's size can be large, but the distinguishing gap is negligible in
On ELFs, Deterministic Encryption, and Correlated-Input Security
We construct deterministic public key encryption secure for any constant number of arbitrarily correlated computationally unpredictable messages. Prior works required either random oracles or non-standard knowledge assumptions. In contrast, our constructions are based on the exponential hardness of DDH, which is plausible in elliptic curve groups. Our central tool is a new trapdoored extremely lossy function, which modifies extremely lossy functions by adding a trapdoor.
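The random-oracle baseline that the abstract contrasts with is the classic Encrypt-with-Hash idea: make a randomized scheme deterministic by deriving its coins from a hash of the message. The sketch below applies this to ElGamal over a toy group; it illustrates the baseline only, not the paper's ELF-based construction, and the parameters are far too small for real use.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2879, 1439, 2

def keygen():
    sk = secrets.randbelow(q)
    return pow(g, sk, p), sk          # (public key, secret key)

def det_encrypt(pk, m):
    """Encrypt-with-Hash: ElGamal coins r are derived by hashing (pk, m),
    so encrypting the same message twice yields the same ciphertext."""
    digest = hashlib.sha256(f"{pk}|{m}".encode()).digest()
    r = int.from_bytes(digest, "big") % q
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, c):
    c1, c2 = c
    # c1 lies in the order-q subgroup, so c1^(q - sk) = c1^(-sk).
    return (c2 * pow(c1, (q - sk) % q, p)) % p

pk, sk = keygen()
c = det_encrypt(pk, 1234)
assert decrypt(sk, c) == 1234
assert det_encrypt(pk, 1234) == c     # determinism
```

Security of this baseline is usually argued with the hash modeled as a random oracle; replacing that modeling with a standard-model tool is exactly what the trapdoored-ELF construction achieves.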
Memory Lower Bounds of Reductions Revisited
In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results by Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid a loss of memory-tightness without sacrificing running-time efficiency. Specifically, we show three lower bound results. First, we show a memory lower bound for natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Then we show a lower bound for restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result shown by Auerbach et al. treating a hash function to one treating any hash function with a large domain.
Fiat-Shamir for Proofs Lacks a Proof Even in the Presence of Shared Entanglement
We explore the cryptographic power of arbitrary shared physical resources.
The most general such resource is access to a fresh entangled quantum state at
the outset of each protocol execution. We call this the Common Reference
Quantum State (CRQS) model, in analogy to the well-known Common Reference
String (CRS). The CRQS model is a natural generalization of the CRS model but
appears to be more powerful: in the two-party setting, a CRQS can sometimes
exhibit properties associated with a Random Oracle queried once by measuring a
maximally entangled state in one of many mutually unbiased bases. We formalize
this notion as a Weak One-Time Random Oracle (WOTRO), where we only ask
that the m-bit output have some randomness when conditioned on the n-bit input.
We show that WOTRO with n − m ∈ ω(log n) is black-box impossible
in the CRQS model, meaning that no protocol can have its security black-box
reduced to a cryptographic game. We define an (inefficient) quantum adversary
against any WOTRO protocol that can be efficiently simulated in polynomial
time, ruling out any reduction to a secure game that only makes black-box
queries to the adversary. On the other hand, we introduce a non-game quantum
assumption for hash functions that implies WOTRO in the CRQS model: we first
build a statistically secure WOTRO protocol where m = n, then hash the output.
The impossibility of WOTRO has the following consequences. First, we show the
black-box impossibility of a quantum Fiat-Shamir transform, extending the
impossibility result of Bitansky et al. (TCC '13) to the CRQS model. Second, we
show a black-box impossibility result for a strengthened version of quantum
lightning (Zhandry, Eurocrypt '19) where quantum bolts have an additional
parameter that cannot be changed without generating new bolts.
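The "Random Oracle queried once" behaviour of a CRQS can be illustrated with a tiny classical simulation of a single Bell pair: both parties measure their half in a mutually unbiased basis selected by hashing the input, and obtain the same uniformly random bit. This is a toy sketch with only the Z and X bases for one qubit; the amplitude bookkeeping below is my own illustration, not the paper's protocol.

```python
import hashlib
import math
import random

def measure_bell_pair(x, rng):
    """Both parties measure their half of (|00> + |11>)/sqrt(2) in a basis
    (Z or X) chosen by hashing the input x; returns (Alice's bit, Bob's bit)."""
    s = 1 / math.sqrt(2)
    psi = [s, 0.0, 0.0, s]                    # Bell state over |00>..|11>
    if hashlib.sha256(x).digest()[0] & 1:     # X basis: apply H to each qubit
        h = [[s, s], [s, -s]]
        new = [0.0] * 4
        for i in range(4):                    # (H tensor H) @ psi, explicitly
            for j in range(4):
                new[i] += h[i >> 1][j >> 1] * h[i & 1][j & 1] * psi[j]
        psi = new
    probs = [a * a for a in psi]              # Born rule
    outcome = rng.choices(range(4), weights=probs)[0]
    return outcome >> 1, outcome & 1

rng = random.Random(0)
for session in range(100):
    a, b = measure_bell_pair(f"session-{session}".encode(), rng)
    assert a == b                             # shared, input-dependent randomness
```

Because the post-measurement state is destroyed, the shared random value can be extracted only once per entangled state, which is what makes the resource resemble a one-time random oracle.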
Instantiating Random Oracles via UCEs
This paper provides a (standard-model) notion of security for (keyed)
hash functions, called UCE, that we show enables instantiation of
random oracles (ROs) in a fairly broad and systematic way. Goals and
schemes we consider include deterministic PKE, message-locked
encryption, hardcore functions, point-function obfuscation, OAEP,
encryption secure for key-dependent messages, encryption secure under
related-key attack, proofs of storage and adaptively-secure garbled
circuits with short tokens. We can take existing, natural and
efficient ROM schemes and show that the instantiated scheme resulting
from replacing the RO with a UCE function is secure in the standard
model. In several cases this results in the first standard-model
schemes for these goals. The definition of UCE-security itself asks
that outputs of the function look random given some "leakage," even
if the adversary knows the key, as long as the leakage is
appropriately restricted.
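The two-stage shape of the UCE game described above can be sketched as follows. This is a simplified rendering of the definition, not the paper's exact formalization: a source queries either the keyed hash or a lazily-sampled random oracle and emits leakage, and only afterwards does the distinguisher see the hash key.

```python
import hashlib
import secrets

def keyed_hash(k, x):
    """Candidate UCE function: an ordinary keyed hash (illustrative choice)."""
    return hashlib.sha256(k + x).digest()

def uce_game(source, distinguisher):
    """One run of the UCE game; returns True iff the distinguisher wins."""
    b = secrets.randbits(1)                   # secret world bit
    k = secrets.token_bytes(16)
    table = {}
    def oracle(x):
        if b == 1:
            return keyed_hash(k, x)           # real world
        if x not in table:                    # ideal world: lazy random oracle
            table[x] = secrets.token_bytes(32)
        return table[x]
    L = source(oracle)                        # stage 1: no key, only an oracle
    return distinguisher(k, L) == b           # stage 2: key + leakage, no oracle

# A legal source whose leakage is just one oracle output on a random point,
# and a distinguisher that can only guess.
def source(oracle):
    return oracle(secrets.token_bytes(16))

def distinguisher(k, L):
    return secrets.randbits(1)
```

The "appropriately restricted" clause is what rules out degenerate sources, e.g. one that leaks a query point together with its oracle answer, which would let the distinguisher recompute the keyed hash and win trivially.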
Augmented Random Oracles
We propose a new paradigm for justifying the security of random oracle-based protocols, which we call the Augmented Random Oracle Model (AROM). We show that the AROM captures a wide range of important random oracle impossibility results. Thus a proof in the AROM implies some resiliency to such impossibilities. We then consider three ROM transforms which are subject to impossibilities: Fiat-Shamir (FS), Fujisaki-Okamoto (FO), and Encrypt-with-Hash (EwH). We show in each case how to obtain security in the AROM by strengthening the building blocks or modifying the transform.
Along the way, we give a couple other results. We improve the assumptions needed for the FO and EwH impossibilities from indistinguishability obfuscation to circularly secure LWE; we argue that our AROM still captures this improved impossibility. We also demonstrate that there is no best possible hash function, by giving a pair of security properties, both of which can be instantiated in the standard model separately, which cannot be simultaneously satisfied by a single hash function.
Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where there is no trusted setup.