Efficient Algorithms for Certifying Lower Bounds on the Discrepancy of Random Matrices
We initiate the study of the algorithmic problem of certifying lower bounds on the discrepancy of random matrices: given an input matrix $M$, output a value that is a lower bound on $\mathrm{disc}(M)$ for every $M$, but is close to the typical value of $\mathrm{disc}(M)$ with high probability over the choice of a random $M$. This problem is important because of its connections to conjecturally-hard average-case problems such as negatively-spiked PCA, the number-balancing problem, and refuting random constraint satisfaction problems. We give the first polynomial-time algorithms with non-trivial guarantees for two main settings. First, when the entries of $M$ are i.i.d. standard Gaussians, $\mathrm{disc}(M)$ is known to concentrate tightly around its typical value with high probability. Our algorithm certifies a non-trivial lower bound on $\mathrm{disc}(M)$ with high probability. As an
application, this formally refutes a conjecture of Bandeira, Kunisky, and Wein
on the computational hardness of the detection problem in the negatively-spiked
Wishart model. Second, we consider the integer partitioning problem: given $n$ uniformly random $m$-bit integers $a_1, \dots, a_n$, certify the non-existence of a perfect partition, i.e. certify that $|\sum_i x_i a_i| > 0$ for every signing $x \in \{\pm 1\}^n$. Under the linear scaling $m = \kappa n$, it is known that the probability of the existence of a perfect partition undergoes a phase transition from 1 to 0 at $\kappa = 1$; our algorithm certifies the non-existence of perfect partitions for some sufficiently large constant $\kappa$. We also give
efficient non-deterministic algorithms with significantly improved guarantees.
Our algorithms involve a reduction to the Shortest Vector Problem.
Comment: ITCS 202
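The certification algorithm above is polynomial-time via a reduction to SVP; as a small illustration of the underlying phase transition, the following brute-force toy (not the paper's algorithm, and with hypothetical tiny parameters) checks perfect partitions of random $m$-bit integers and shows they are plentiful when $m$ is much smaller than $n$ and vanish when $m$ is much larger:

```python
import itertools
import random

def has_perfect_partition(a):
    """Brute-force check: does some signing x in {+1,-1}^n make |sum x_i * a_i| <= 1?
    (<= 1 rather than == 0 to allow an odd total sum, where 1 is the best possible)."""
    return min(abs(sum(s * v for s, v in zip(signs, a)))
               for signs in itertools.product((1, -1), repeat=len(a))) <= 1

random.seed(0)
n = 12
counts = {}
for m in (4, 24):  # m-bit integers; the transition happens around m/n = 1
    counts[m] = sum(has_perfect_partition([random.randrange(1, 2**m) for _ in range(n)])
                    for _ in range(20))
print(counts)  # perfect partitions are common for m << n and essentially absent for m >> n
```

The brute-force scan over all $2^n$ signings is of course exponential; the point of the paper is certifying non-existence without such a scan.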
Pseudorandom Generator Based on Hard Lattice Problem
This paper studies how to construct a pseudorandom generator from hard lattice problems.
We use a variation of the classical hard lattice problem \emph{Inhomogeneous Small Integer Solution} (ISIS), which we call \emph{Inhomogeneous Subset Sum Solution} (ISSS). ISSS is itself a hash function. By proving that the preimage sizes of the ISSS hash function images are almost the same, we construct a pseudorandom generator using the method of \cite{GKL93}. We also construct a pseudoentropy generator using the method of \cite{HILL99}. Most theoretical PRG constructions are not feasible in practice, as they require rather long random seeds. Our PRG construction requires only a short seed, which makes it practically feasible.
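The basic object behind such constructions is a subset-sum style compressing hash. The following sketch (toy parameters of my choosing, not the paper's ISSS instantiation) shows the shape of the map: inputs longer than the output modulus guarantee collisions by pigeonhole, and finding them is a subset-sum / (I)SIS-flavoured problem:

```python
import random

def subset_sum_hash(a, modulus, x_bits):
    """Toy subset-sum style hash: x in {0,1}^n maps to sum_i a_i*x_i mod modulus.
    With n > log2(modulus) the map compresses, so collisions exist by pigeonhole."""
    assert len(x_bits) == len(a)
    return sum(ai * xi for ai, xi in zip(a, x_bits)) % modulus

random.seed(1)
n, M = 16, 2**10                       # 16 input bits squeezed into 10 output bits
a = [random.randrange(M) for _ in range(n)]
x = [random.randrange(2) for _ in range(n)]
print(subset_sum_hash(a, M, x))
```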
Hardness of the (Approximate) Shortest Vector Problem: A Simple Proof via Reed-Solomon Codes
We give a
simple proof that the (approximate, decisional) Shortest Vector Problem is
\NP-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem
in the $\ell_p$ norm ($\gamma$-\GapSVP_p) is not in \mathsf{RP} unless \NP
\subseteq \mathsf{RP}. Our proof follows an approach pioneered by Ajtai (STOC
1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing
hardness of $\gamma$-\GapSVP_p using locally dense lattices. We construct
such lattices simply by applying "Construction A" to Reed-Solomon codes with
suitable parameters, and prove their local density via an elementary argument
originally used in the context of Craig lattices.
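"Construction A" lifts a linear code $C \subseteq \mathbb{F}_q^n$ to the lattice $L = \{v \in \mathbb{Z}^n : v \bmod q \in C\}$. The following toy sketch (hypothetical tiny parameters, not those of the hardness proof) applies it to a Reed-Solomon code and checks lattice membership by brute force over messages:

```python
# Toy "Construction A" on a Reed-Solomon code: L = { v in Z^n : (v mod q) in C }.
q, n, k = 7, 7, 2        # RS code over F_7, length 7, dimension 2 (illustrative only)
points = list(range(n))  # evaluation points 0..6

def rs_encode(msg):
    """Evaluate the degree-<k polynomial with coefficient list msg at each point (mod q)."""
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in points]

def in_construction_a_lattice(v):
    """v lies in the lattice iff v mod q is an RS codeword; at this toy size we
    can simply try all q^k messages."""
    r = [vi % q for vi in v]
    return any(rs_encode([m0, m1]) == r for m0 in range(q) for m1 in range(q))

c = rs_encode([3, 5])
print(in_construction_a_lattice([ci + q for ci in c]))  # a codeword shifted by q stays in L
```

Note that $q\mathbb{Z}^n \subseteq L$ always (the zero codeword), so the lattice is full-rank; the local density needed for the reduction comes from choosing the RS parameters suitably.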
As in all known \NP-hardness results for \GapSVP_p with $p < \infty$, our
reduction uses randomness. Indeed, it is a notorious open problem to prove
\NP-hardness via a deterministic reduction. To this end, we additionally
discuss potential directions and associated challenges for derandomizing our
reduction. In particular, we show that a close deterministic analogue of our
local density construction would improve on the state-of-the-art explicit
Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and
IEEE Trans. Inf. Theory 2006).
As a related contribution of independent interest, we also give a
polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for
decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf.
Theory 2022), whose work we build on with a somewhat simpler construction and
analysis.
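The decoder referenced above is specialized; as a self-contained baseline for what "decoding to a bounded distance" means, here is Babai's classical nearest-plane algorithm, which returns a lattice point whose distance to the target is controlled by the Gram-Schmidt lengths of the basis (toy basis chosen for illustration):

```python
import numpy as np

def gram_schmidt(B):
    """Rows of B are basis vectors; return their Gram-Schmidt orthogonalization."""
    Bs = np.array(B, dtype=float)
    for i in range(len(Bs)):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm: a lattice vector close to target t
    (off by at most half a Gram-Schmidt length in each direction)."""
    B = np.array(B, dtype=float)
    Bs = gram_schmidt(B)
    b = np.array(t, dtype=float)
    v = np.zeros_like(b)
    for i in reversed(range(len(B))):      # peel off one hyperplane at a time
        c = round((b @ Bs[i]) / (Bs[i] @ Bs[i]))
        b -= c * B[i]
        v += c * B[i]
    return v

B = [[3, 0], [1, 2]]
print(babai_nearest_plane(B, [2.6, 1.8]))  # -> [4. 2.]
```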
PPP-Completeness with Connections to Cryptography
The Polynomial Pigeonhole Principle (PPP) is an important subclass of TFNP with
profound connections to the complexity of the fundamental cryptographic
primitives: collision-resistant hash functions and one-way permutations. In
contrast to most of the other subclasses of TFNP, no complete problem is known
for PPP. Our work identifies the first PPP-complete problem without any circuit
or Turing Machine given explicitly in the input, and thus we answer a
longstanding open question from [Papadimitriou1994]. Specifically, we show that
constrained-SIS (cSIS), a generalized version of the well-known Short Integer
Solution problem (SIS) from lattice-based cryptography, is PPP-complete.
In order to give intuition behind our reduction for constrained-SIS, we
identify another PPP-complete problem with a circuit in the input but closely
related to lattice problems. We call this problem BLICHFELDT and it is the
computational problem associated with Blichfeldt's fundamental theorem in the
theory of lattices.
Building on the inherent connection of PPP with collision-resistant hash
functions, we use our completeness result to construct the first natural hash
function family that captures the hardness of all collision-resistant hash
functions in a worst-case sense, i.e. it is natural and universal in the
worst-case. The close resemblance of our hash function family to SIS leads
us to the first candidate collision-resistant hash function that is both
natural and universal in an average-case sense.
Finally, our results enrich our understanding of the connections between PPP,
lattice problems and other concrete cryptographic assumptions, such as the
discrete logarithm problem over general groups.
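The PPP connection can be made concrete: a compressing SIS-style hash $f_A(x) = Ax \bmod q$ on short inputs must have collisions by pigeonhole, and a collision yields a short nonzero $z$ with $Az = 0 \bmod q$. The sketch below (toy parameters small enough to brute-force; real instances are exponentially larger) finds such a collision:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m = 17, 2, 12        # m > n*log2(q): binary inputs outnumber outputs q^n

A = rng.integers(0, q, size=(n, m))

def sis_hash(x):
    """SIS-style hash f_A(x) = A x mod q on a short (here binary) input x.
    Compression guarantees collisions exist -- the PPP flavour of SIS."""
    return tuple(A.dot(x) % q)

# Brute-force a collision over binary inputs (feasible only at toy size).
seen = {}
for i in range(2**m):
    x = np.array([(i >> j) & 1 for j in range(m)])
    h = sis_hash(x)
    if h in seen:
        z = x - seen[h]        # short nonzero vector with A z = 0 mod q
        break
    seen[h] = x
print(h, np.all(A.dot(z) % q == 0))
```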
Time-Memory Trade-Off for Lattice Enumeration in a Ball
Enumeration algorithms in lattices are a well-known technique for solving the Shortest Vector Problem (SVP) and improving
blockwise lattice reduction algorithms.
Here, we propose a new algorithm for enumerating the lattice points in a ball of a given radius, with running time governed by the ratio of that radius to the length of the shortest vector of the lattice. We then show how this method can be used to solve SVP and the Closest Vector Problem (CVP), with a constant approximation factor, in an $n$-dimensional lattice in exponential time.
Previous enumeration algorithms take super-exponential running time with polynomial memory; for instance, Kannan's algorithm runs in time $n^{O(n)}$. Our algorithm, in contrast, also requires exponential memory, and we propose different time/memory trade-offs.
Recently, Aggarwal, Dadush, Regev and Stephens-Davidowitz described a randomized $2^{n+o(n)}$-time algorithm for solving SVP at STOC '15, and for approximation versions of SVP and CVP at FOCS '15.
However, it is not possible to use a
time/memory tradeoff for their algorithms. Their main result presents an algorithm that samples an exponential
number of random vectors from a discrete Gaussian distribution with width below the smoothing parameter of the lattice.
Our algorithm is related to the hill-climbing algorithm of Liu, Lyubashevsky and Micciancio from RANDOM '06 for solving the bounded-distance decoding problem with preprocessing. It was later improved by Dadush, Regev and Stephens-Davidowitz for solving the CVP with preprocessing problem at CCC '14. However, the latter algorithms only look for one lattice vector, while we show that we can enumerate all lattice vectors in a ball. Finally, these papers use a preprocessing step to obtain a succinct representation of some lattice function. We show, as a first step, that we can obtain the same information using an exponential-time algorithm based on a collision-search procedure similar to the reduction of Micciancio and Peikert for the SIS problem with small modulus at CRYPTO '13.
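For intuition on the task itself, the following naive sketch (a crude coefficient scan on a toy 2D basis, nothing like the paper's algorithm, which prunes exponentially better) enumerates all lattice vectors inside a ball:

```python
import itertools
import numpy as np

def enumerate_in_ball(B, R, box=10):
    """Naive enumeration of lattice vectors c.B (integer coefficient rows c) with
    norm <= R, by scanning a fixed coefficient box; real enumerators prune the
    search tree with Gram-Schmidt bounds instead."""
    B = np.array(B, dtype=float)
    out = []
    for c in itertools.product(range(-box, box + 1), repeat=len(B)):
        v = np.array(c) @ B
        if v @ v <= R * R:
            out.append(tuple(v))
    return sorted(out)

# Lattice 2Z x 3Z: the ball of radius 3.5 contains (0,0), (+-2,0) and (0,+-3).
pts = enumerate_in_ball([[2, 0], [0, 3]], R=3.5)
print(len(pts), pts)
```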
Accurate Score Prediction for Dual-Sieve Attacks
The Dual-Sieve Attack on Learning with Errors (LWE), or more generally Bounded Distance Decoding (BDD), has seen many improvements in recent years, ultimately leading to claims that it outperforms the primal attack against certain lattice-based schemes in the PQC standardization process organised by NIST. However, the work of Ducas--Pulles (Crypto '23) revealed that the so-called Independence Heuristic, used by all recent dual attacks, leads to wrong predictions in a contradictory regime which is relevant for the security of cryptoschemes. More specifically, the stated distributions of scores for the actual solution and for incorrect candidates were both incorrect.
In this work, we propose to use the weaker heuristic that the output vectors of a lattice sieve are uniformly distributed in a ball. Under this heuristic, we give an analysis of the score distribution in the case of an error of fixed length. Integrating over this length, we extend the analysis to any radially distributed error, in particular the Gaussian, as a fix for the score distribution of the actual solution. This approach also provides a prediction for the score of incorrect candidates, using a ball as an approximation of the Voronoi cell of the lattice.
We compare the predicted score distributions to extensive experiments and observe them to be qualitatively and quantitatively quite accurate. This constitutes a first step towards fixing the analysis of the dual-sieve attack: we can now accurately estimate false positives and false negatives. With the analysis fixed, one may consider how to fix the attack itself, namely by exploring opportunities to mitigate the large number of false positives.
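The "uniform in a ball" heuristic is easy to simulate. The Monte Carlo sketch below (hypothetical dimensions and counts, not the paper's experiments) computes a dual-attack style score $\sum_j \cos(2\pi \langle w_j, t \rangle)$ for sieve-like vectors sampled uniformly in a ball, and shows the separation between a near-lattice (BDD-like) target and an unstructured one:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 16, 4000

def uniform_in_ball(n_samples, dim, radius):
    """Uniform samples from a ball: Gaussian direction times radius * U^(1/dim)."""
    g = rng.normal(size=(n_samples, dim))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = radius * rng.random(n_samples) ** (1.0 / dim)
    return g * r[:, None]

W = uniform_in_ball(N, d, radius=1.0)

def score(t):
    """Dual-attack style score: sum of cos(2*pi*<w_j, t>) over the sieve vectors."""
    return np.cos(2 * np.pi * (W @ t)).sum()

small_error = rng.normal(scale=0.01, size=d)   # BDD-like target: tiny error, score near N
random_t    = rng.normal(scale=10.0, size=d)   # unstructured target: score near 0
print(score(small_error) > score(random_t))
```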
Round-Optimal Blind Signatures in the Plain Model from Classical and Quantum Standard Assumptions
Blind signatures, introduced by Chaum (Crypto’82), allow a user to obtain a signature on a message without
revealing the message itself to the signer. Thus far, all existing constructions of round-optimal blind signatures are
known to require one of the following: a trusted setup, an interactive assumption, or complexity leveraging. This
state of affairs is somewhat justified by the few known impossibility results on constructions of round-optimal blind
signatures in the plain model (i.e., without trusted setup) from standard assumptions. However, since all of these
impossibility results only hold under some conditions, fully (dis)proving the existence of such round-optimal blind
signatures has remained open.
In this work, we provide an affirmative answer to this problem and construct the first round-optimal blind signature
scheme in the plain model from standard polynomial-time assumptions. Our construction is based on various standard
cryptographic primitives and also on new primitives that we introduce in this work, all of which are instantiable from
classical and post-quantum standard polynomial-time assumptions. The main building block of our scheme is a new
primitive called a blind-signature-conforming zero-knowledge (ZK) argument system. The distinguishing feature is that
the ZK property holds by using a quantum polynomial-time simulator against non-uniform classical polynomial-time
adversaries. Syntactically one can view this as a delayed-input three-move ZK argument with a reusable first message,
and we believe it to be of independent interest.
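Chaum's cited Crypto '82 construction is the classical RSA-based blind signature, whose two-move flow is exactly what "round-optimal" means here. A toy sketch (tiny insecure parameters, raw RSA with no padding; illustration only):

```python
import math

# Toy Chaum-style RSA blind signature: two moves = round optimal.
p, q = 61, 53
Nmod = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))              # signer's secret exponent

def blind(msg, r):
    return (msg * pow(r, e, Nmod)) % Nmod      # user -> signer: m * r^e mod N

def sign_blinded(mb):
    return pow(mb, d, Nmod)                    # signer -> user: (m * r^e)^d = m^d * r

def unblind(sb, r):
    return (sb * pow(r, -1, Nmod)) % Nmod      # user strips r, keeping m^d

def verify(msg, sig):
    return pow(sig, e, Nmod) == msg % Nmod

msg, r = 42, 1234
assert math.gcd(r, Nmod) == 1                  # blinding factor must be invertible
sig = unblind(sign_blinded(blind(msg, r)), r)
print(verify(msg, sig))                        # valid signature, signer never saw msg
```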
Efficient Lattice-Based Blind Signatures via Gaussian One-Time Signatures
Lattice-based blind signature schemes have been receiving renewed attention lately. Earlier efficient 3-round schemes (Asiacrypt 2010, Financial Cryptography 2020) were recently shown to have mistakes in their proofs, and fixing them turned out to be extremely inefficient, limiting the number of signatures a signer could issue to less than a dozen (Crypto 2020). In this work we propose a round-optimal, 2-round lattice-based blind signature scheme which produces signatures of length 150KB. The running time of the signing protocol is linear in the maximum number of signatures that can be given out, and this limits the number of signatures that can be signed per public key. Nevertheless, the scheme is still quite efficient when the number of signatures is limited to a few dozen thousand, and it appears to currently be the most efficient lattice-based candidate.
Does the Dual-Sieve Attack on Learning with Errors even Work?
Guo and Johansson (ASIACRYPT 2021), and MATZOV (tech. report, 2022) have independently claimed improved attacks against various NIST lattice candidates by adding a Fast Fourier Transform (FFT) trick on top of the so-called Dual-Sieve attack. Recently, there has been more follow-up work in this line, adding new practical improvements.
However, from a theoretical perspective, all of these works are painfully specific to Learning with Errors, while the principle of the Dual-Sieve attack is more general (Laarhoven & Walter, CT-RSA 2021). More critically, all of these works are based on heuristics that have received very little theoretical and experimental attention.
This work attempts to rectify the above deficiencies of the literature.
We first propose a generalization of the FFT trick by Guo and Johansson to arbitrary Bounded Distance Decoding instances. This generalization offers a new improvement to the attack.
We then theoretically explore the underlying heuristics and show that these are in contradiction with formal, unconditional theorems in some regimes, and with well-tested heuristics in other regimes. The specific instantiations of the recent literature fall into this second regime.
We confirm these contradictions with experiments, documenting several phenomena that are not predicted by the analysis, including a ``waterfall-floor'' phenomenon, reminiscent of Low-Density Parity-Check decoding failures.
We conclude that the success probability of the recent Dual-Sieve-FFT attacks is presumably significantly overestimated. We further discuss the adequate way forward towards fixing the attack and its analysis.
The Complexity of Public-Key Cryptography
We survey the computational foundations for public-key cryptography. We discuss the computational assumptions that have been used as bases for public-key encryption schemes, and the types of evidence we have for the veracity of these assumptions.
This survey/tutorial was published in the book Tutorials on the Foundations of Cryptography, dedicated to Oded Goldreich on his 60th birthday.