On the Hardness of Learning With Errors with Binary Secrets
We give a simple proof that the decisional Learning With Errors (LWE) problem with binary secrets (and an arbitrary polynomial number of samples) is at least as hard as the standard LWE problem (with unrestricted, uniformly random secrets, and a bounded, quasi-linear number of samples). This proves that the binary-secret LWE distribution is pseudorandom, under standard worst-case complexity assumptions on lattice problems. Our results are similar to those proved by (Brakerski, Langlois, Peikert, Regev and Stehle, STOC 2013), but provide a shorter, more direct proof, and a small improvement in the noise growth of the reduction.
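To make the objects concrete, here is a minimal NumPy sketch of the binary-secret decisional-LWE distribution discussed above; the function name and the parameter values (n, q, sigma) are illustrative choices of mine, not the paper's.

```python
import numpy as np

def lwe_samples(n=32, q=3329, m=64, sigma=3.2, binary_secret=True, rng=None):
    """Draw m LWE samples (A, b) with b = A s + e mod q.

    With binary_secret=True the secret is uniform over {0,1}^n rather
    than Z_q^n; the reduction above shows the resulting distribution is
    still pseudorandom under the same worst-case lattice assumptions.
    """
    rng = np.random.default_rng(rng)
    s = rng.integers(0, 2 if binary_secret else q, size=n)       # secret
    A = rng.integers(0, q, size=(m, n))                          # public matrix
    e = np.rint(rng.normal(0, sigma, size=m)).astype(np.int64)   # rounded Gaussian noise
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples(rng=0)
```

The decisional problem is to distinguish (A, b) generated this way from (A, uniform over Z_q^m).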
SALSA VERDE: a machine learning attack on Learning With Errors with sparse small secrets
Learning with Errors (LWE) is a hard math problem used in post-quantum
cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the
LWE problem for their security, and two LWE-based cryptosystems were recently
standardized by NIST for digital signatures and key exchange (KEM). Thus, it is
critical to continue assessing the security of LWE and specific parameter
choices. For example, HE uses secrets with small entries, and the HE community
has considered standardizing small sparse secrets to improve efficiency and
functionality. However, prior work, SALSA and PICANTE, showed that ML attacks
can recover sparse binary secrets. Building on these, we propose VERDE, an
improved ML attack that can recover sparse binary, ternary, and narrow Gaussian
secrets. Using improved preprocessing and secret recovery techniques, VERDE can
attack LWE with larger dimensions and smaller moduli, using less time and
power. We propose novel architectures for
scaling. Finally, we develop a theory that explains the success of ML LWE
attacks.
Comment: 18 pages, accepted to NeurIPS 202
An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors problem and its binary
variant, where secrets and errors are binary or taken in a small interval. We
introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on
a quantization step that generalizes and fine-tunes modulus switching. In
general this new technique yields a significant gain in the constant in front
of the exponent in the overall complexity. We illustrate this by solving
within half a day an LWE instance with dimension n = 128, binary secret, and
Gaussian noise, while the previous best BKW-based result claims a far larger
time and sample complexity for the same parameters. We then
introduce variants of BDD, GapSVP and UniqueSVP, where the target point is
required to lie in the fundamental parallelepiped, and show how the previous
algorithm is able to solve these variants in subexponential time. Moreover, we
also show how the previous algorithm can be used to solve the BinaryLWE problem
with n samples in subexponential time. This
analysis does not require any heuristic assumption, contrary to other algebraic
approaches; instead, it uses a variant of an idea by Lyubashevsky to generate
many samples from a small number of samples. This makes it possible to
asymptotically and heuristically break the NTRU cryptosystem in subexponential
time (without contradicting its security assumption). We are also able to solve
subset sum problems in subexponential time for low density, which is of
independent interest: for such density, the previous best algorithm requires
exponential time. As a direct application, we can solve in subexponential time
the parameters of a cryptosystem based on this problem proposed at TCC 2010.
Comment: CRYPTO 201
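The quantization step above generalizes modulus switching. As a point of reference, here is a hedged sketch of plain modulus switching for a single LWE sample, with toy parameters of my choosing; the rounding error it introduces scales with the l1-norm of the secret, which is why the technique is cheapest for binary or otherwise small secrets.

```python
import numpy as np

def modulus_switch(a, b, q, p):
    """Switch an LWE sample (a, b) from modulus q down to modulus p < q.

    Each entry is scaled by p/q and rounded; the rounding error adds
    noise bounded by roughly 0.5 * ||s||_1.
    """
    a2 = np.rint(a * p / q).astype(np.int64) % p
    b2 = int(np.rint(b * p / q)) % p
    return a2, b2

# Toy check: the switched sample still satisfies b2 ~ <a2, s> (mod p).
rng = np.random.default_rng(1)
n, q, p = 16, 2048, 256
s = rng.integers(0, 2, size=n)              # binary secret
a = rng.integers(0, q, size=n)
e = 2                                       # small original noise
b = (int(a @ s) + e) % q
a2, b2 = modulus_switch(a, b, q, p)
r = (b2 - int(a2 @ s)) % p                  # residual noise after switching
r = r - p if r > p // 2 else r              # center into (-p/2, p/2]
```

After switching, |r| stays far below p/2, so the sample remains usable at the smaller modulus.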
SALSA: Attacking Lattice Cryptography with Transformers
Currently deployed public-key cryptosystems will be vulnerable to attacks by
full-scale quantum computers. Consequently, "quantum resistant" cryptosystems
are in high demand, and lattice-based cryptosystems, based on a hard problem
known as Learning With Errors (LWE), have emerged as strong contenders for
standardization. In this work, we train transformers to perform modular
arithmetic and combine half-trained models with statistical cryptanalysis
techniques to propose SALSA: a machine learning attack on LWE-based
cryptographic schemes. SALSA can fully recover secrets for small-to-mid size
LWE instances with sparse binary secrets, and may scale to attack real-world
LWE-based cryptosystems.
Comment: Extended version of work published at NeurIPS 202
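One way to picture the ML side of SALSA: the attack trains a sequence model on pairs (a, b) generated under one fixed sparse secret. A toy data generator in that spirit, with dimensions, modulus, and noise that are mine rather than the paper's, might look like:

```python
import numpy as np

def lwe_training_pairs(num, n=8, q=251, hamming=3, rng=None):
    """Build (input, target) pairs for a SALSA-style model.

    Inputs are random vectors a in Z_q^n; targets are b = <a, s> + e mod q
    for one fixed sparse binary secret s, so a model that learns to
    predict b from a has implicitly learned information about s.
    """
    rng = np.random.default_rng(rng)
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, size=hamming, replace=False)] = 1   # sparse binary secret
    A = rng.integers(0, q, size=(num, n))
    e = np.rint(rng.normal(0, 1.0, size=num)).astype(np.int64)
    B = (A @ s + e) % q
    return A, B, s

A, B, s = lwe_training_pairs(100, rng=2)
```

The actual attack combines such a (partially) trained model with statistical secret-recovery steps; this sketch only shows the shape of the training data.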
Continuous LWE is as Hard as LWE & Applications to Learning Gaussian Mixtures
We show direct and conceptually simple reductions between the classical
learning with errors (LWE) problem and its continuous analog, CLWE (Bruna,
Regev, Song and Tang, STOC 2021). This allows us to bring to bear the powerful
machinery of LWE-based cryptography to the applications of CLWE. For example,
we obtain the hardness of CLWE under the classical worst-case hardness of the
gap shortest vector problem. Previously, this was known only under quantum
worst-case hardness of lattice problems. More broadly, with our reductions
between the two problems, any future developments to LWE will also apply to
CLWE and its downstream applications.
As a concrete application, we show an improved hardness result for density
estimation for mixtures of Gaussians. In this computational problem, given
sample access to a mixture of Gaussians, the goal is to output a function that
estimates the density function of the mixture. Under the (plausible and widely
believed) exponential hardness of the classical LWE problem, we show that
Gaussian mixture density estimation in R^n with roughly log n Gaussian
components, given poly(n) samples, requires time quasi-polynomial in n. Under
the (conservative) polynomial hardness of LWE, we show hardness of density
estimation for n^eps Gaussians for any constant eps > 0, which improves on
Bruna, Regev, Song and Tang (STOC 2021), who show hardness for at least
sqrt(n) Gaussians under polynomial (quantum) hardness assumptions.
Our key technical tool is a reduction from classical LWE to LWE with
k-sparse secrets where the multiplicative increase in the noise is only
O(sqrt(k)), independent of the ambient dimension n.
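For intuition about the continuous side, here is a toy sketch of the CLWE distribution, assuming the standard definition of Bruna et al. (a hidden unit direction w, a scaling gamma, and small mod-1 Gaussian noise); the function name and parameter values are mine.

```python
import numpy as np

def clwe_samples(m, n=16, gamma=4.0, beta=0.01, rng=None):
    """Generate m CLWE samples (y, z).

    y is a standard Gaussian in R^n and z = gamma * <y, w> + e (mod 1)
    for a hidden unit vector w and Gaussian noise e of width beta.
    This is the continuous analog of LWE that the reductions above
    connect to the classical problem.
    """
    rng = np.random.default_rng(rng)
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)                    # hidden unit direction
    Y = rng.normal(size=(m, n))
    e = rng.normal(0, beta, size=m)
    Z = (gamma * (Y @ w) + e) % 1.0
    return Y, Z, w

Y, Z, w = clwe_samples(50, rng=3)
```

The decisional task is to distinguish (Y, Z) from (Y, uniform mod 1), which by the reductions above inherits the hardness of classical LWE.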
Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations
Ideas from Fourier analysis have been used in cryptography for the last three
decades. Akavia, Goldwasser and Safra unified some of these ideas to give a
complete algorithm that finds significant Fourier coefficients of functions on
any finite abelian group. Their algorithm stimulated a lot of interest in the
cryptography community, especially in the context of `bit security'. This
manuscript attempts to be a friendly and comprehensive guide to the tools and
results in this field. The intended readership is cryptographers who have heard
about these tools and seek an understanding of their mechanics and their
usefulness and limitations. A compact overview of the algorithm is presented
with emphasis on the ideas behind it. We show how these ideas can be extended
to a `modulus-switching' variant of the algorithm. We survey some applications
of this algorithm, and explain that several results should be taken in the
right context. In particular, we point out that some of the most important bit
security problems are still open. Our original contributions include: a
discussion of the limitations on the usefulness of these tools; an answer to an
open question about the modular inversion hidden number problem.
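For intuition, a Fourier coefficient of a function f on Z_N is commonly called tau-significant when it carries at least a tau fraction of f's energy. The brute-force FFT check below only illustrates that definition; the Akavia-Goldwasser-Safra algorithm surveyed above finds such coefficients without computing the full spectrum, in time sublinear in N.

```python
import numpy as np

def significant_coefficients(f, tau):
    """Return indices of tau-significant Fourier coefficients of f on Z_N.

    With fhat = FFT(f)/N, Parseval gives sum_k |fhat[k]|^2 = mean |f|^2,
    so a coefficient is tau-significant when |fhat[k]|^2 >= tau * energy.
    """
    N = len(f)
    fhat = np.fft.fft(f) / N
    energy = float(np.mean(np.abs(np.asarray(f, dtype=complex)) ** 2))
    return {k for k in range(N) if abs(fhat[k]) ** 2 >= tau * energy}

# A noisy character chi_5 on Z_64: only its own coefficient is significant.
N = 64
x = np.arange(N)
rng = np.random.default_rng(4)
f = np.exp(2j * np.pi * 5 * x / N) + 0.1 * rng.normal(size=N)
S = significant_coefficients(f, tau=0.5)
```

On this input the set S contains exactly the index 5 of the planted character.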
Robustness of the Learning with Errors Assumption
Starting with the work of Ishai-Sahai-Wagner and Micali-Reyzin, a new goal has been set within the theory-of-cryptography community: to design cryptographic primitives that are secure against large classes of side-channel attacks. Recently, many works have focused on designing various cryptographic primitives that are robust (retain security) even when the secret key is "leaky", under various intractability assumptions. In this work we propose to take a step back and ask a more basic question: which of our cryptographic assumptions (rather than cryptographic schemes) are robust in the presence of leakage of their underlying secrets?
Our main result is that the hardness of the learning with errors (LWE) problem implies its hardness with leaky secrets. More generally, we show that the standard LWE assumption implies that LWE is secure even if the secret is taken from an arbitrary distribution with sufficient entropy, and even in the presence of hard-to-invert auxiliary inputs. We exhibit various applications of this result.
1. Under the standard LWE assumption, we construct a symmetric-key encryption scheme that is robust to secret key leakage, and more generally maintains security even if the secret key is taken from an arbitrary distribution with sufficient entropy (and even in the presence of hard-to-invert auxiliary inputs).
2. Under the standard LWE assumption, we construct a (weak) obfuscator for the class of point functions with multi-bit output. We note that in most schemes that are known to be robust to leakage, the parameters of the scheme depend on the maximum leakage the system can tolerate, and hence the efficiency degrades with the maximum anticipated leakage, even if no leakage occurs at all! In contrast, the fact that we rely on a robust assumption allows us to construct a single symmetric-key encryption scheme, with parameters that are independent of the anticipated leakage, that is robust to any leakage (as long as the secret key has sufficient entropy left over). Namely, for any k < n (where n is the size of the secret key), if the secret key has only entropy k, then the security relies on the LWE assumption with secret size roughly k
A New Algorithm for Solving Ring-LPN with a Reducible Polynomial
The LPN (Learning Parity with Noise) problem has recently proved to be of
great importance in cryptology. A special and very useful case is the RING-LPN
problem, which typically provides improved efficiency in the constructed
cryptographic primitive. We present a new algorithm for solving the RING-LPN
problem in the case when the polynomial used is reducible. It greatly
outperforms previous algorithms for solving this problem. Using the algorithm,
we can break the Lapin authentication protocol for the proposed instance using
a reducible polynomial, in about 2^70 bit operations.
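For concreteness, a Ring-LPN sample lives in F_2[x]/(f) and has the form (a, a*s + e) with sparse noise e. When f is reducible, samples project through the CRT onto each factor, which is exactly the structure the new algorithm exploits. The sketch below uses a toy reducible modulus of my choosing, x^3 + 1 = (x + 1)(x^2 + x + 1), far smaller than Lapin's.

```python
import numpy as np

def polymul_mod2(a, b, f):
    """Multiply GF(2) polynomials a, b (coefficients, lowest degree first) mod f."""
    prod = np.convolve(a, b) % 2
    df = len(f) - 1                          # degree of the modulus
    while len(prod) > df:                    # cancel the leading term repeatedly
        if prod[-1]:
            prod[-(df + 1):] = (prod[-(df + 1):] + f) % 2
        prod = prod[:-1]
    return prod

def ring_lpn_sample(s, f, tau=0.1, rng=None):
    """One Ring-LPN sample (a, a*s + e) in F_2[x]/(f); e is tau-sparse noise."""
    rng = np.random.default_rng(rng)
    d = len(f) - 1
    a = rng.integers(0, 2, size=d)
    e = (rng.random(d) < tau).astype(np.int64)
    return a, (polymul_mod2(a, s, f) + e) % 2

f = np.array([1, 0, 0, 1], dtype=np.int64)   # x^3 + 1, a reducible toy modulus
s = np.array([1, 0, 1], dtype=np.int64)      # secret ring element
a, b = ring_lpn_sample(s, f, rng=5)
```

Reducing each sample modulo a factor of f yields a lower-dimensional Ring-LPN instance, which is what makes the reducible case easier to attack.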
A Framework for Efficient Adaptively Secure Composable Oblivious Transfer in the ROM
Oblivious Transfer (OT) is a fundamental cryptographic protocol that finds a
number of applications, in particular, as an essential building block for
two-party and multi-party computation. We construct a round-optimal (2 rounds)
universally composable (UC) protocol for oblivious transfer secure against
active adaptive adversaries from any OW-CPA secure public-key encryption scheme
with certain properties in the random oracle model (ROM). In terms of
computation, our protocol only requires the generation of a public/secret-key
pair, two encryption operations and one decryption operation, apart from a few
calls to the random oracle. In terms of communication, our protocol only
requires the transfer of one public-key, two ciphertexts, and three binary
strings of roughly the same size as the message. Next, we show how to
instantiate our construction under the low noise LPN, McEliece, QC-MDPC, LWE,
and CDH assumptions. Our instantiations based on the low noise LPN, McEliece,
and QC-MDPC assumptions are the first UC-secure OT protocols based on coding
assumptions to achieve: 1) adaptive security, 2) optimal round complexity, 3)
low communication and computational complexities. Previous results in this
setting only achieved static security and used costly cut-and-choose
techniques. Our instantiation based on CDH achieves adaptive security at the
small cost of communicating only two more group elements as compared to the
gap-DH based Simplest OT protocol of Chou and Orlandi (Latincrypt 15), which
only achieves static security in the ROM.