28 research outputs found
A New Approximate Min-Max Theorem with Applications in Cryptography
We propose a novel proof technique that can be applied to attack a broad
class of problems in computational complexity, when switching the order of
universal and existential quantifiers is helpful. Our approach combines the
standard min-max theorem and convex approximation techniques, offering
quantitative improvements over the standard way of using min-max theorems as
well as more concise and elegant proofs.
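For orientation, the standard min-max theorem that the abstract refers to can be stated as follows (a sketch of the classical statement; the paper's approximate variant refines it):

```latex
% Von Neumann's min-max theorem: for compact convex strategy sets
% X and Y and a payoff g(x,y) convex in x and concave in y,
\[
  \min_{x \in X} \max_{y \in Y} g(x, y)
  \;=\;
  \max_{y \in Y} \min_{x \in X} g(x, y),
\]
% i.e., the order of the two quantifiers (the order of the two
% players' moves) can be switched without changing the value.
```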
Simulating Auxiliary Inputs, Revisited
For any pair $(X, Z)$ of correlated random variables we can think of
$Z$ as a randomized function of $X$. Provided that $Z$ is short, one
can make this function computationally efficient by allowing it to be
only approximately correct. In folklore this problem is known as
\emph{simulating auxiliary inputs}. This idea of simulating auxiliary
information turns out to be a
powerful tool in computer science, finding applications in complexity theory,
cryptography, pseudorandomness and zero-knowledge. In this paper we revisit
this problem, achieving the following results:
\begin{enumerate}[(a)]
\item We discuss and compare the efficiency of known results, finding
a flaw in the best known bound claimed in the TCC'14 paper "How to
Fake Auxiliary Inputs".
\item We present a novel boosting algorithm for constructing the
simulator; our technique essentially fixes the flaw. This boosting
proof is of independent interest, as it shows how to handle "negative
mass" issues when constructing probability measures in descent
algorithms.
\item Our bounds are much better than those known so far; in
particular, the time/circuit-size complexity required to make the
simulator $(s,\varepsilon)$-indistinguishable improves on previous
bounds. With our technique we (finally) get meaningful provable
security for the EUROCRYPT'09 leakage-resilient stream cipher
instantiated with a standard 256-bit block cipher, like AES-256.
\end{enumerate}
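As a toy illustration of the boosting idea, the following sketch builds a simulator by multiplicative-weight updates. Everything here (the tiny domain, the point-function distinguisher family, the update rule and step size) is our own illustrative setup, not the paper's construction:

```python
# Toy sketch: construct a simulator h(z | x) whose induced
# distribution fools every distinguisher in a small family, via a
# boosting / multiplicative-weights loop.
import math

XS = range(8)                 # support of X
ZS = (0, 1)                   # support of the short auxiliary input Z

# A fixed joint distribution p(x, z): Z is correlated with the parity of X.
joint = {}
for x in XS:
    b = 0.8 if x % 2 == 0 else 0.3
    joint[(x, 1)] = (1.0 / len(XS)) * b
    joint[(x, 0)] = (1.0 / len(XS)) * (1.0 - b)

# Simulator state: a conditional distribution h(z | x), initially uniform.
h = {(x, z): 0.5 for x in XS for z in ZS}

def advantage(x0, z0):
    # Advantage of the point distinguisher d(x, z) = [(x, z) == (x0, z0)]:
    # true probability mass minus the simulated mass (1/|XS|) * h(z0 | x0).
    return joint[(x0, z0)] - (1.0 / len(XS)) * h[(x0, z0)]

eta, eps = 0.1, 0.01
for _ in range(5000):
    x0, z0 = max(joint, key=lambda p: abs(advantage(*p)))
    gap = advantage(x0, z0)
    if abs(gap) < eps:
        break
    # Multiplicative update plus renormalization keeps h(. | x0) a bona
    # fide distribution -- this is how "negative mass" is avoided, in
    # contrast with naive additive updates on the probability masses.
    h[(x0, z0)] *= math.exp(eta if gap > 0 else -eta)
    total = h[(x0, 0)] + h[(x0, 1)]
    h[(x0, 0)] /= total
    h[(x0, 1)] /= total

max_adv = max(abs(advantage(x, z)) for x in XS for z in ZS)
```

After the loop, no distinguisher in the family separates the true pair from the simulated one by more than `eps`.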
Equidistribution of Heegner Points and Ternary Quadratic Forms
We prove new equidistribution results for Galois orbits of Heegner points
with respect to reduction maps at inert primes. The arguments are based on two
different techniques: primitive representations of integers by quadratic forms
and distribution relations for Heegner points. Our results generalize one of
the equidistribution theorems established by Cornut and Vatsal in the sense
that we allow both the fundamental discriminant and the conductor to grow.
Moreover, for fixed fundamental discriminant and variable conductor, we deduce
an effective surjectivity theorem for the reduction map from Heegner points to
supersingular points at a fixed inert prime. Our results are applicable to the
setting considered by Kolyvagin in the construction of the Heegner points Euler
system.
Hardness of Computing Individual Bits for One-way Functions on Elliptic Curves
We prove that if one can predict any of the bits of the input to an elliptic curve based one-way function over a finite field, then we can invert the function. In particular, our result implies that if one can predict any of the bits of the input to a classical pairing-based one-way function with non-negligible advantage over a random guess, then one can efficiently invert this function and thus solve the Fixed Argument Pairing Inversion problem (FAPI-1/FAPI-2). The latter has implications for the security of various pairing-based schemes, such as the identity-based encryption scheme of Boneh--Franklin, Hess' identity-based signature scheme, and Joux's three-party one-round key agreement protocol. Moreover, if one can solve FAPI-1 and FAPI-2 in polynomial time, then one can solve the Computational Diffie--Hellman problem (CDH) in polynomial time. Our result implies that all the bits of the functions defined above are hard to compute, assuming these functions are one-way. The argument is based on a list-decoding technique via discrete Fourier transforms due to Akavia--Goldwasser--Safra, as well as an idea due to Boneh--Shparlinski.
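The Fourier-based idea behind such bit-hardness arguments can be illustrated in miniature: a noisy predictor of a bit concentrates mass on one "heavy" Fourier coefficient, which identifies the hidden value. This is a toy, brute-force illustration over a tiny group (the real Akavia-Goldwasser-Safra algorithm finds heavy coefficients in sublinear time via list decoding; `N`, `s`, and the 5% noise rate are our own toy choices):

```python
# Recover a hidden value s from noisy bit predictions by locating
# the heaviest coefficient of the full DFT over Z_N.
import cmath
import math
import random

random.seed(1)
N, s = 64, 21                  # s plays the role of the hidden input

def noisy_bit(x):
    # Predicts the sign of cos(2*pi*s*x/N), but is wrong 5% of the time.
    b = 1 if math.cos(2 * math.pi * s * x / N) >= 0 else -1
    return -b if random.random() < 0.05 else b

samples = [noisy_bit(x) for x in range(N)]

def dft_coeff(y):
    # Correlation of the samples with the frequency-y character of Z_N.
    return sum(samples[x] * cmath.exp(-2j * cmath.pi * x * y / N)
               for x in range(N))

# The square-wave predictor puts most of its Fourier mass at y = +/- s,
# so the largest coefficient among y = 1 .. N/2 - 1 reveals s.
recovered = max(range(1, N // 2), key=lambda y: abs(dft_coeff(y)))
```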
On the Complexity of Simulating Auxiliary Input
We construct a simulator for the simulating auxiliary input problem with
complexity better than all previous results and prove the optimality up to
logarithmic factors by establishing a black-box lower bound.
Specifically, let $\ell$ be the length of the auxiliary input and
$\varepsilon$ be the indistinguishability parameter. Our simulator is
$\tilde{O}(2^{\ell}\varepsilon^{-2})$ times more complicated than the
distinguisher family. For the lower bound, we show that the complexity
of any simulator relative to the distinguishers is at least
$\tilde{\Omega}(2^{\ell}\varepsilon^{-2})$, assuming the simulator is
restricted to use the distinguishers in a black-box way and satisfies
a mild restriction.
On the Bit Security of Elliptic Curve Diffie--Hellman
This paper gives the first bit security result for the elliptic curve Diffie--Hellman key exchange protocol for elliptic curves defined over prime fields: about 5/6 of the most significant bits of the x-coordinate of the Diffie--Hellman key are as hard to compute as the entire key. A similar result can be derived for the lower bits. The paper also generalizes and improves the known result for elliptic curves over extension fields, showing that computing one component (in the ground field) of the Diffie--Hellman key is as hard as computing the entire key.
Simultaneous Amplification: The Case of Non-Interactive Zero-Knowledge
In this work, we explore the question of simultaneous privacy and soundness amplification for non-interactive zero-knowledge argument
systems (NIZK). We show that any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate satisfying $\delta_s + \delta_z \le 1 - \epsilon$, for any constant $\epsilon > 0$, can be turned into a computationally sound and zero-knowledge candidate, with the only extra assumption being a subexponentially secure public-key encryption scheme.
We develop novel techniques to leverage the leakage simulation lemma (Jetchev-Pietrzak, TCC 2014) to argue amplification. A crucial component of our result is a new notion for secret sharing instances. We believe that this may be of independent interest.
To achieve this result we analyze the following two transformations:
- Parallel Repetition: We show that parallel repetition turns any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate into a (roughly) $\delta_s^k$-sound and $k\cdot\delta_z$-zero-knowledge candidate, where $k$ is the repetition parameter.
- MPC-based Repetition: We propose a new transformation that amplifies zero-knowledge in the same way that parallel repetition amplifies soundness. We show that this transformation turns any $\delta_s$-sound and $\delta_z$-zero-knowledge NIZK candidate into a (roughly) $k\cdot\delta_s$-sound and $\delta_z^k$-zero-knowledge candidate.
We then show that applying these transformations in a zig-zag fashion yields our result.
Finally, we also present a simple transformation which directly turns any NIZK candidate satisfying the above condition into a secure one.
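The zig-zag intuition can be seen numerically under a stylized error model of our own (NOT the paper's exact bounds): assume parallel repetition maps the soundness/zero-knowledge errors $(\delta_s, \delta_z)$ to roughly $(\delta_s^k, k\delta_z)$, and the MPC-based transformation maps them to $(k\delta_s, \delta_z^k)$. Alternating the two then drives both errors down, for small enough starting errors:

```python
# Stylized numeric sketch of zig-zag amplification. The error maps
# and starting values below are illustrative assumptions only.
def parallel_rep(ds, dz, k):
    # soundness error decays exponentially, ZK error grows linearly
    return ds ** k, min(1.0, k * dz)

def mpc_rep(ds, dz, k):
    # mirror image: ZK error decays, soundness error grows linearly
    return min(1.0, k * ds), dz ** k

ds, dz, k = 0.1, 0.1, 4     # small toy starting errors
for _ in range(3):          # alternate ("zig-zag") the two transforms
    ds, dz = parallel_rep(ds, dz, k)
    ds, dz = mpc_rep(ds, dz, k)
# Each full round squares off the linear growth introduced by the
# previous half-step, so both errors shrink rapidly.
```

The actual result handles errors summing to $1 - 1/\mathrm{poly}$ via a more careful analysis; this sketch only shows why alternating the two opposite-acting transformations is a sensible strategy.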
Optimal Collision Security in Double Block Length Hashing with Single Length Key
The idea of double block length hashing is to construct a compression function on 2n bits using a block cipher with an n-bit block size. All optimally secure double length hash functions known in the literature employ a cipher with a key space of double block size (2n bits). On the other hand, no optimally secure compression functions built from a cipher with an n-bit key space are known. Our work deals with this problem. First, we prove that for a wide class of compression functions making two calls to an underlying n-bit keyed block cipher, collisions can be found in about 2^{n/2} queries. This attack applies, among others, to functions whose output is derived from the block cipher outputs in a linear way. This observation demonstrates that all security results for designs using a cipher with a 2n-bit key space crucially rely on the presence of these extra n key bits. The main contribution of this work is a proof that this issue can be resolved by allowing the compression function to make one extra call to the cipher. We propose a family of compression functions making three block cipher calls that asymptotically achieves optimal collision resistance up to 2^{n(1-ε)} queries and preimage resistance up to 2^{3n(1-ε)/2} queries, for any ε > 0. To our knowledge, this is the first optimally collision secure double block length construction using a block cipher with a single length key space. © International Association for Cryptologic Research 2012.
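The 2^{n/2}-query collision attack quoted above matches the usual birthday calculation on an (effectively) n-bit quantity, sketched here for orientation:

```latex
% Birthday bound: q queries give about q^2/2 pairs, and each pair
% collides on an n-bit value with probability roughly 2^{-n}:
\[
  \Pr[\text{collision}] \;\lesssim\; \binom{q}{2} \cdot 2^{-n}
  \;\approx\; \frac{q^2}{2^{n+1}},
\]
% which becomes constant once q is on the order of 2^{n/2}.
```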
The Exact PRF-Security of NMAC and HMAC
NMAC is a mode of operation which turns a fixed input-length
keyed hash function f into a variable input-length function.
A~practical single-key variant of NMAC called HMAC is a very
popular and widely deployed message authentication code
(MAC). Security proofs and attacks for NMAC can typically
be lifted to HMAC.
NMAC was introduced by Bellare, Canetti and Krawczyk
[Crypto'96], who proved it to be a secure pseudorandom
function (PRF), and thus also a MAC, assuming that
(1) f is a PRF and
(2) the function we get when cascading f is weakly
collision-resistant.
Unfortunately, HMAC is typically instantiated with
cryptographic hash functions like MD5 or SHA-1 for which (2)
has been found to be wrong. To restore the provable
guarantees for NMAC, Bellare [Crypto'06] showed its
security based solely on the assumption that f is a PRF,
albeit via a non-uniform reduction.
Our first contribution is a simpler and uniform proof: If f
is an ε-secure PRF (against q queries) and a
δ-non-adaptively secure PRF (against q queries), then
NMAC^f is an (ε + lqδ)-secure PRF against q queries of
length at most l blocks each.
We then show that this ε + lqδ bound is basically tight.
For the most interesting case, where lqδ >= ε, we prove
this by constructing an f for which an attack with
advantage lqδ exists. This also violates the bound
O(lε) on the PRF-security of NMAC recently claimed by
Koblitz and Menezes.
Finally, we analyze the PRF-security of a modification of
NMAC called NI [An and Bellare, Crypto'99] that differs
mainly by using a compression function with an additional
keying input. This avoids the constant rekeying on
multi-block messages in NMAC and allows for a security proof
starting with the standard switch from a PRF to a random
function, followed by an information-theoretic analysis. We
carry out such an analysis, obtaining a tight lq^2/2^c bound
for this step, improving over the trivial bound of
l^2q^2/2^c. The proof borrows combinatorial techniques
originally developed for proving the security of CBC-MAC
[Bellare et al., Crypto'05]. We also analyze a variant of
NI that does not include the message length in the last call
to the compression function, proving an l^{1+o(1)}q^2/2^c
bound in this case.
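The single-key HMAC structure mentioned above (two nested hash calls with inner/outer key pads) can be sketched concretely. This follows the standard RFC 2104 definition instantiated with SHA-256, not the paper's abstraction of the compression function f, and is checked against Python's built-in hmac module:

```python
# Minimal HMAC-SHA256 from first principles, verified against the
# standard library implementation.
import hashlib
import hmac

def my_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block = 64                              # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()  # long keys are hashed first
    key = key.ljust(block, b"\x00")         # then zero-padded to a block
    ipad = bytes(k ^ 0x36 for k in key)
    opad = bytes(k ^ 0x5C for k in key)
    inner = hashlib.sha256(ipad + msg).digest()  # inner (cascade) call
    return hashlib.sha256(opad + inner).digest() # outer (finalizing) call

tag = my_hmac_sha256(b"key", b"message")
ref = hmac.new(b"key", b"message", hashlib.sha256).digest()
```

The single secret key is used twice (xored with the two distinct pad constants), which is exactly the "practical single-key variant" aspect the abstract contrasts with NMAC's two independent keys.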
Amplifying the Security of Functional Encryption, Unconditionally
Security amplification is a fundamental problem in cryptography. In this work, we study security amplification for functional encryption (FE). We show two main results:
1) For any constant epsilon in (0,1), we can amplify any FE scheme for P/poly which is epsilon-secure against all polynomial sized adversaries to a fully secure FE scheme for P/poly, unconditionally.
2) For any constant epsilon in (0,1), we can amplify any FE scheme for P/poly which is epsilon-secure against subexponential sized adversaries to a fully subexponentially secure FE scheme for P/poly, unconditionally.
Furthermore, both of our amplification results preserve compactness of the underlying FE scheme. Previously, amplification results for FE were only known assuming subexponentially secure LWE.
Along the way, we introduce a new form of homomorphic secret sharing called set homomorphic secret sharing, which may be of independent interest. Additionally, we introduce a new technique which allows one to argue security amplification of nested primitives, and prove a general theorem that can be used to analyze the security amplification of parallel repetitions.