An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors problem and its binary
variant, where secrets and errors are binary or taken in a small interval. We
introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on
a quantization step that generalizes and fine-tunes modulus switching. In
general this new technique yields a significant gain in the constant in front
of the exponent in the overall complexity. We illustrate this by solving
within half a day a LWE instance with dimension n = 128, modulus q ≈ n^2,
Gaussian noise α ≈ 1/(√(n/π) log^2 n) and binary secret, using 2^28
samples, while the previous best result based on BKW claims a time
complexity of 2^74 with 2^60 samples for the same parameters. We then
introduce variants of BDD, GapSVP and UniqueSVP, where the target point is
required to lie in the fundamental parallelepiped, and show how the previous
algorithm is able to solve these variants in subexponential time. Moreover, we
also show how the previous algorithm can be used to solve the BinaryLWE problem
with n samples in subexponential time 2^((ln 2/2 + o(1)) n / log log n). This
analysis does not require any heuristic assumption, contrary to other algebraic
approaches; instead, it uses a variant of an idea by Lyubashevsky to generate
many samples from a small number of samples. This makes it possible to
asymptotically and heuristically break the NTRU cryptosystem in subexponential
time (without contradicting its security assumption). We are also able to solve
subset sum problems in subexponential time for density o(1), which is of
independent interest: for such density, the previous best algorithm requires
exponential time. As a direct application, we can solve in subexponential time
the parameters of a cryptosystem based on this problem proposed at TCC 2010.
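To make the flavour of the BKW reduction step and of modulus switching concrete, here is a minimal toy sketch in Python. Everything in it is illustrative: the parameters (n, q, p, block width) are tiny stand-ins, and the naive coordinate-wise rounding is only the baseline that the paper's quantization step generalizes and fine-tunes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, p, block = 16, 97, 11, 2      # toy dimension, moduli, block width

def bkw_stage(samples, start):
    """Collide samples on coordinates [start, start+block) and subtract,
    zeroing that block at the cost of (roughly) doubling the noise."""
    buckets, out = {}, []
    for a, b in samples:
        key = tuple(a[start:start + block])
        if key in buckets:
            a2, b2 = buckets.pop(key)
            out.append(((a - a2) % q, (b - b2) % q))
        else:
            buckets[key] = (a, b)
    return out

def modulus_switch(samples):
    """Naive coordinate-wise rounding from q down to p; the paper's
    quantization step generalizes and fine-tunes this baseline."""
    return [(np.rint(a * p / q).astype(int) % p,
             int(np.rint(b * p / q)) % p) for a, b in samples]

# toy LWE instance with binary secret and noise in {-1, 0, 1}
s = rng.integers(0, 2, n)
samples = []
for _ in range(5000):
    a = rng.integers(0, q, n)
    samples.append((a, (a @ s + rng.integers(-1, 2)) % q))

reduced = bkw_stage(samples, 0)
print(len(reduced), "samples survive; first block is", reduced[0][0][:block])
print("same sample over the smaller modulus:", modulus_switch(reduced[:1]))
```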
Faster Enumeration-Based Lattice Reduction: Root Hermite Factor k^(1/(2k)) in Time k^(k/8 + o(k))
Learning strikes again: The case of the DRS signature scheme
Lattice signature schemes generally require particular care when it comes to preventing secret information from leaking through signature transcripts. For example, the Goldreich-Goldwasser-Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks. At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that was accepted into round 1 of the NIST post-quantum cryptography project. In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover partial information on the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret seems more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis. While we only recover partial information on the secret key, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided that 100,000 signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to the NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), to be compared to an original claim of 128 bits.
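The core statistical phenomenon behind such transcript attacks can be shown with a toy Python sketch: a per-signature leak far below the noise floor becomes visible once enough signatures are averaged. The model below is invented purely for illustration; the actual DRS leak is more intricate, which is why the authors train a model over designed features.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sigs = 32, 100_000
s = rng.choice([-1, 1], n)                 # toy secret signs, NOT the DRS key

# each simulated transcript coordinate is dominated by noise (std 10) plus a
# tiny secret-dependent bias (0.1) -- an invented stand-in for the real leak
sigs = rng.normal(0, 10, (n_sigs, n)) + 0.1 * s

estimate = np.sign(sigs.mean(axis=0))      # averaging washes the noise out
print("fraction of signs recovered:", (estimate == s).mean())   # ~1.0
```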
Out of Oddity – New Cryptanalytic Techniques Against Symmetric Primitives Optimized for Integrity Proof Systems
The security and performance of many integrity proof systems like SNARKs, STARKs and Bulletproofs highly depend on the underlying hash function. For this reason several new proposals have recently been developed. These primitives obviously require an in-depth security evaluation, especially since their implementation constraints have led to less standard design approaches. This work compares the security levels offered by two recent families of such primitives, namely GMiMC and HadesMiMC. We exhibit low-complexity distinguishers against the GMiMC and HadesMiMC permutations for most parameters proposed in recently launched public challenges for STARK-friendly hash functions. In the more concrete setting of the sponge construction corresponding to the practical use in the ZK-STARK protocol, we present a practical collision attack on a round-reduced version of GMiMC and a preimage attack on some instances of HadesMiMC. To achieve those results, we adapt and generalize several cryptographic techniques to fields of odd characteristic.
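For readers unfamiliar with this family of designs, here is a toy MiMC-style keyed permutation over an odd-prime field in Python. It shows only the general shape that GMiMC and HadesMiMC build on (their real constructions use Feistel networks, partial S-box layers and far larger fields); the prime, round count and constants below are arbitrary demo choices.

```python
import math

p = 999_983                          # prime; x -> x^3 permutes F_p because...
assert math.gcd(3, p - 1) == 1       # ...gcd(3, p - 1) = 1

N_ROUNDS = 12                        # arbitrary toy round count
CONSTANTS = [pow(7, i, p) for i in range(N_ROUNDS)]   # arbitrary demo constants

def mimc_encrypt(x, k):
    """MiMC-style rounds x <- (x + k + c_i)^3 mod p, then a final key add."""
    for c in CONSTANTS:
        x = pow((x + k + c) % p, 3, p)
    return (x + k) % p

def mimc_decrypt(y, k):
    d = pow(3, -1, p - 1)            # inverse exponent: (x^3)^d = x over F_p
    x = (y - k) % p
    for c in reversed(CONSTANTS):
        x = (pow(x, d, p) - k - c) % p
    return x

m, k = 123456, 987654
assert mimc_decrypt(mimc_encrypt(m, k), k) == m
print("toy MiMC round-trip OK")
```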
Quantum Lightning Never Strikes the Same State Twice
Public key quantum money can be seen as a version of the quantum no-cloning
theorem that holds even when the quantum states can be verified by the
adversary. In this work, we investigate quantum lightning, a formalization of
"collision-free quantum money" defined by Lutomirski et al. [ICS'10], where
no-cloning holds even when the adversary herself generates the quantum state to
be cloned. We then study quantum money and quantum lightning, showing the
following results:
- We demonstrate the usefulness of quantum lightning by showing several
potential applications, ranging from generating random strings with a proof of
entropy to completely decentralized cryptocurrency without a blockchain,
where transactions are instant and local.
- We give win-win results for quantum money/lightning, showing that either
signatures/hash functions/commitment schemes meet very strong recently proposed
notions of security, or they yield quantum money or lightning.
- We construct quantum lightning under the assumed multi-collision resistance
of random degree-2 systems of polynomials.
- We show that instantiating the quantum money scheme of Aaronson and
Christiano [STOC'12] with indistinguishability obfuscation that is secure
against quantum computers yields a secure quantum money scheme.
Exploring Trade-offs in Batch Bounded Distance Decoding
Algorithms for solving the Bounded Distance Decoding problem (BDD) are used for estimating the security of lattice-based cryptographic primitives, since these algorithms can be employed to solve variants of the Learning with Errors problem (LWE). In certain parameter regimes where the target vector is small and/or sparse, batches of BDD instances emerge from a combinatorial approach where several components of the target vector are guessed before decoding. In this work we explore trade-offs in solving "Batch-BDD", and apply our techniques to the small-secret Learning with Errors problem. We compare our techniques to previous works which solve batches of BDD instances, such as the hybrid lattice-reduction and meet-in-the-middle attack. Our results are a mixed bag. We show that, in the "enumeration setting" and with BKZ reduction, our techniques outperform a variant of the hybrid attack which does not consider time-memory trade-offs in the guessing phase for certain Round5 (17 bits out of 466), Round5-IoT (19 bits out of 240), and NTRU LPrime (23 bits out of 385) parameter sets. On the other hand, our techniques do not outperform the Hybrid Attack under standard, albeit unrealistic, assumptions. Finally, as expected, our techniques do not improve on previous works in the "sieving setting" (under standard assumptions), where combinatorial attacks in general do not perform well.
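The guess-then-decode pattern that produces batches of BDD instances can be sketched in a few lines of Python: enumerate guesses for a small part of the secret, subtract each guess from the target, and decode the residual instance, here with Babai's nearest-plane algorithm on a toy basis. The instance and the acceptance threshold are illustrative assumptions; a real attack would first run BKZ on the basis.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def babai_nearest_plane(B, t):
    """Return the lattice point (integer combination of the rows of B) found
    by Babai's successive projections; recovers v from t = v + e for small e."""
    Q, _ = np.linalg.qr(B.T.astype(float))   # column i ~ Gram-Schmidt of B[i]
    b = t.astype(float).copy()
    v = np.zeros_like(b)
    for i in reversed(range(B.shape[0])):
        c = round((b @ Q[:, i]) / (B[i] @ Q[:, i]))
        b -= c * B[i]
        v += c * B[i]
    return v

# toy instance: t = x*B + s*A + e, where s is a short "guessable" part
n, g = 8, 2
B = rng.integers(-5, 6, (n, n)) + 20 * np.eye(n, dtype=int)  # fairly orthogonal
A = rng.integers(-5, 6, (g, n))
x = rng.integers(-3, 4, n)
s_true = rng.integers(0, 2, g)
e = rng.integers(-1, 2, n)
t = x @ B + s_true @ A + e

for guess in itertools.product([0, 1], repeat=g):            # the BDD batch
    v = babai_nearest_plane(B, t - np.array(guess) @ A)
    if np.max(np.abs(t - np.array(guess) @ A - v)) <= 1:     # error-sized residual
        print("guess", guess, "decodes; true short part was", tuple(s_true))
```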
Practical Cryptanalysis of a Public-key Encryption Scheme Based on Non-linear Indeterminate Equations at SAC 2017
We investigate the security of a public-key encryption scheme, the Indeterminate Equation Cryptosystem (IEC), introduced by Akiyama, Goto, Okumura, Takagi, Nuida, and Hanaoka at SAC 2017 as postquantum cryptography. They gave two parameter sets PS1 (n,p,deg X,q) = (80,3,1,921601) and PS2 (n,p,deg X,q) = (80,3,2,58982400019).
The paper gives practical key-recovery and message-recovery attacks against those parameter sets of IEC through lattice basis-reduction algorithms. We exploit the fact that n = 80 is composite and adopt the idea of Gentry's attack against NTRU-Composite (EUROCRYPT 2001) to this setting. The summary of our attacks follows:
* On PS1, we recover 84 private keys from 100 public keys in 30–40 seconds per key.
* On PS1, we recover partial information of all messages from 100 ciphertexts in a second per ciphertext.
* On PS2, we recover partial information of all messages from 100 ciphertexts in 30 seconds per ciphertext.
Moreover, we also give message-recovery and distinguishing attacks against the parameter sets with prime n, say, n = 83. We exploit another subring to reduce the dimension of lattices in our lattice-based attacks and our attack succeeds in the case of deg X = 2.
* For PS2′ (n,p,deg X,q) = (83,3,2,68339982247), we recover 7 messages from 10 random ciphertexts within 61,000 seconds ≈ 17 hours per ciphertext.
* Even for larger n, we can find short vectors in lattices to break the underlying assumption of IEC. In our experiment, we found such vectors within 330,000 seconds ≈ 4 days for n = 113.
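The dimension-reduction idea borrowed from Gentry's attack can be illustrated directly: for composite n, reduction modulo x^d − 1 for a divisor d of n is a ring homomorphism that folds coefficient vectors and preserves shortness, so lattice attacks can run in dimension d instead of n. The Python sketch below checks multiplicativity of the folding map on random short elements; IEC's actual rings differ in the details, so this shows only the mechanism.

```python
import numpy as np

def fold(f, d):
    """Reduce f(x) mod (x^n - 1) to f(x) mod (x^d - 1) for d | n:
    coefficient i lands on index i mod d."""
    out = np.zeros(d, dtype=np.int64)
    for i, c in enumerate(f):
        out[i % d] += c
    return out

def cyclic_mul(f, g):
    """Multiplication in Z[x]/(x^len - 1) (cyclic convolution)."""
    n = len(f)
    out = np.zeros(n, dtype=np.int64)
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] += f[i] * g[j]
    return out

rng = np.random.default_rng(3)
n, d = 80, 16                                # composite n, as in PS1/PS2
f = rng.integers(-1, 2, n)                   # short elements stay short...
g = rng.integers(-1, 2, n)                   # ...after folding (entries <= n/d)

# ring-homomorphism check: fold(f * g) == fold(f) * fold(g) in the subring
assert np.array_equal(fold(cyclic_mul(f, g), d),
                      cyclic_mul(fold(f, d), fold(g, d)))
print("folding is multiplicative; folded f =", fold(f, d))
```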
Revisiting the Hardness of Binary Error LWE
Binary error LWE is the particular case of the learning with errors
(LWE)
problem in which errors are chosen in {0, 1}. It has various
cryptographic applications, and in particular, has been used to construct
efficient encryption schemes for use in constrained devices.
Arora and Ge showed that the problem can be solved in polynomial time given a number
of samples quadratic in the dimension n. On the other hand, the
problem is known to be as hard as standard LWE given only slightly more
than n samples.
In this paper, we first examine more generally how the
hardness of the problem varies with the number of available samples.
Under standard heuristics on
the Arora–Ge polynomial system, we show that, for any ε > 0,
binary error LWE can be solved in polynomial time
given εn^2 samples. Similarly,
it can be solved in subexponential time 2^(Õ(n^(1−α))) given n^(1+α)
samples, for 0 < α < 1.
As a second contribution, we also generalize the binary error LWE
problem to the case of a non-uniform error probability, and
analyze the hardness of the non-uniform
binary error LWE with respect to the error rate and the number of available samples.
We show that, for any error rate, non-uniform binary error LWE is also as hard as
worst-case lattice problems provided that the number of samples is
suitably restricted. This is a generalization of Micciancio and Peikert's hardness proof for uniform binary error LWE.
Furthermore, we also discuss attacks on the problem when the number
of available samples is linear in n but significantly larger than n, and
show that for sufficiently low error rates, subexponential or even
polynomial time attacks are possible.
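The Arora–Ge linearization underlying these sample-complexity results is easy to demonstrate at toy scale. A binary error e satisfies e(e − 1) = 0, so each sample (a, b = ⟨a, s⟩ + e mod q) yields the quadratic equation (b − ⟨a, s⟩)(b − ⟨a, s⟩ − 1) ≡ 0 (mod q); substituting a fresh variable for every monomial s_i s_j gives a linear system in about n^2/2 unknowns. The Python sketch below (toy n and q, my own choices) recovers the secret by Gaussian elimination mod q.

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 6, 97                                 # toy dimension and prime modulus
s = rng.integers(0, q, n)                    # secret; only the errors are binary

mons = [(i, j) for i in range(n) for j in range(i, n)]   # monomials s_i s_j
nvars = n + len(mons)                        # unknowns: [s_i] then [s_i s_j]

def equation(a, b):
    """Linearization of (b - <a,s>)(b - <a,s> - 1) = 0 mod q."""
    row = np.zeros(nvars + 1, dtype=np.int64)            # last column = RHS
    for i in range(n):
        row[i] = (-(2 * b - 1) * a[i]) % q               # linear terms
    for k, (i, j) in enumerate(mons):
        row[n + k] = (a[i] * a[j] * (1 if i == j else 2)) % q  # quadratic terms
    row[-1] = (b - b * b) % q                            # constant, moved right
    return row

rows = []
for _ in range(nvars + 10):                  # ~n^2/2 samples, plus spares
    a = rng.integers(0, q, n)
    b = (a @ s + rng.integers(0, 2)) % q     # binary error in {0, 1}
    rows.append(equation(a, b))
M = np.array(rows)

# Gaussian elimination modulo the prime q
r = 0
for c in range(nvars):
    piv = next((i for i in range(r, len(M)) if M[i, c] % q), None)
    if piv is None:
        continue                             # generically does not happen
    M[[r, piv]] = M[[piv, r]]
    M[r] = (M[r] * pow(int(M[r, c]), -1, q)) % q
    for i in range(len(M)):
        if i != r and M[i, c] % q:
            M[i] = (M[i] - M[i, c] * M[r]) % q
    r += 1

print("recovered s:", M[:n, -1], "  true s:", s)
```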
Tightly Secure Ring-LWE Based Key Encapsulation with Short Ciphertexts
We provide a tight security proof for an IND-CCA Ring-LWE based Key Encapsulation Mechanism that is derived from a generic construction of Dent (IMA Cryptography and Coding, 2003). Such a tight reduction is not known for the generic construction. The resulting scheme has shorter ciphertexts than can be achieved with other generic constructions of Dent or by using the well-known Fujisaki-Okamoto constructions (PKC 1999, Crypto 1999). Our tight security proof is obtained by reducing to the security of the underlying Ring-LWE problem, avoiding an intermediate reduction to a CPA-secure encryption scheme. The proof technique may be of interest for other schemes based on LWE and Ring-LWE.
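Dent-style KEM constructions derandomize the underlying encryption with coins H(m) and re-encrypt on decapsulation to check the ciphertext. The Python sketch below shows only that wrapper shape under stated assumptions: ToyPKE is an insecure stand-in (a keyed hash stream, not a real public-key scheme, and certainly not the paper's Ring-LWE scheme), and the hash labels are invented for the example.

```python
import hashlib, os

def H(tag: bytes, *parts: bytes) -> bytes:
    """Domain-separated SHA-256, standing in for the random oracles."""
    h = hashlib.sha256(tag)
    for part in parts:
        h.update(hashlib.sha256(part).digest())
    return h.digest()

class ToyPKE:
    """Insecure stand-in PKE (keyed hash stream), NOT the paper's Ring-LWE
    scheme; it exists only so the KEM wrapper below runs."""
    def keygen(self):
        sk = os.urandom(32)
        return H(b"pk", sk), sk
    def enc(self, pk, m, coins):
        return bytes(x ^ y for x, y in zip(m, H(b"pad", pk, coins))) + coins
    def dec(self, sk, c):
        pk, (mm, coins) = H(b"pk", sk), (c[:32], c[32:])
        return bytes(x ^ y for x, y in zip(mm, H(b"pad", pk, coins)))

def encaps(pke, pk):
    m = os.urandom(32)
    c = pke.enc(pk, m, coins=H(b"coins", m))   # encryption derandomized by H(m)
    return c, H(b"key", m)                     # session key derived from m

def decaps(pke, pk, sk, c):
    m = pke.dec(sk, c)
    if pke.enc(pk, m, coins=H(b"coins", m)) != c:
        return None                            # re-encryption check fails: reject
    return H(b"key", m)

pke = ToyPKE()
pk, sk = pke.keygen()
c, key = encaps(pke, pk)
assert decaps(pke, pk, sk, c) == key
print("KEM round-trip OK")
```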
Estimate All the {LWE, NTRU} Schemes!
We consider all LWE- and NTRU-based encryption, key encapsulation, and digital signature schemes proposed for standardisation as part of the Post-Quantum Cryptography process run by the US National Institute of Standards and Technology (NIST). In particular, we investigate the impact that different estimates for the asymptotic runtime of (block-wise) lattice reduction have on the predicted security of these schemes. Relying on the "LWE estimator" of Albrecht et al., we estimate the cost of running primal and dual lattice attacks against every LWE-based scheme, using every cost model proposed as part of a submission. Furthermore, we estimate the security of the proposed NTRU-based schemes against the primal attack under all cost models for lattice reduction.
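The kind of comparison such estimates automate can be reduced to a few lines: price the same BKZ block size β under different cost models. The Core-SVP sieving exponents (0.292β classical, 0.265β quantum) are standard; the enumeration fit below is one curve cited in the literature and should be treated as an assumption of this sketch rather than the paper's exact model.

```python
import math

def cost_models(beta: int) -> dict:
    """log2(cost) of one SVP call in block size beta under common models."""
    return {
        "sieving, classical (0.292*beta)": 0.292 * beta,
        "sieving, quantum   (0.265*beta)": 0.265 * beta,
        # assumed enumeration fit -- included only to show how strongly the
        # choice of cost model changes the resulting security estimate
        "enumeration (fit, assumption)":
            0.187 * beta * math.log2(beta) - 1.019 * beta + 16.1,
    }

for beta in (400, 600, 800):
    costs = {name: round(v, 1) for name, v in cost_models(beta).items()}
    print(f"beta = {beta}: {costs}")
```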
- …