    Low Memory Attacks on Small Key CSIDH

    Despite recent breakthrough results in attacking SIDH, the CSIDH protocol remains a secure post-quantum key exchange protocol with appealing properties. However, to obtain efficient CSIDH instantiations one has to resort to small secret keys. In this work, we provide novel methods to analyze small key CSIDH, thereby introducing the representation method, which has been successfully applied to attack small secret keys in code- and lattice-based schemes, to the isogeny-based world. We use the recently introduced Restricted Effective Group Actions ($\mathsf{REGA}$) to illustrate the analogy between CSIDH and the Diffie-Hellman key exchange. This framework allows us to introduce a $\mathsf{REGA}\text{-}\mathsf{DLOG}$ problem as a level of abstraction for computing isogenies between elliptic curves, analogous to the classical discrete logarithm problem. This in turn allows us to study $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces such as $\{-1,0,1\}^n$, $\{0,1,2\}^n$ and $\{-2,0,2\}^n$, which lead to especially efficient, recently proposed CSIDH instantiations. The best classical attack on these key spaces is a Meet-in-the-Middle algorithm that runs in time $3^{0.5n}$, using $3^{0.5n}$ memory as well. We first show that $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with key space $\{0,1,2\}^n$ or $\{-2,0,2\}^n$ can be reduced to key space $\{-1,0,1\}^n$. We further provide a heuristic time-memory trade-off for $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with key space $\{-1,0,1\}^n$ based on Parallel Collision Search with memory requirement $M$ that, under standard heuristics, runs in time $3^{0.75n}/M^{0.5}$ for all $M \leq 3^{n/2}$. We then use the representation technique to heuristically improve this to $3^{0.675n}/M^{0.5}$ for all $M \leq 3^{0.22n}$, and further provide more efficient time-memory trade-offs for all $M \leq 3^{n/2}$. Although we focus in this work on $\mathsf{REGA}\text{-}\mathsf{DLOG}$ with ternary key spaces to show its efficacy in providing attractive time-memory trade-offs, we also show how to use our framework to analyze larger key spaces $\{-m,\ldots,m\}^n$ with $m = 2, 3$.
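
    To illustrate the claimed reduction from key space $\{0,1,2\}^n$ to $\{-1,0,1\}^n$, consider a toy commutative stand-in for the group action in which the public key for a secret exponent vector $v$ is a multi-exponentiation modulo a prime. The sketch below is only a minimal illustration under that assumption (the prime, generators and key are hypothetical), not the paper's isogeny setting:

        # Toy stand-in for a commutative group action: the public key for
        # a secret exponent vector v is h = prod_i g_i^{v_i} mod p.
        p = 1009                  # small toy prime (assumption)
        gens = [3, 5, 11, 17]     # hypothetical "generators"
        n = len(gens)

        def act(v):
            h = 1
            for g, e in zip(gens, v):
                h = h * pow(g, e, p) % p   # pow handles e = -1 (Python >= 3.8)
            return h

        v = [2, 0, 1, 2]          # secret key in {0,1,2}^n
        h = act(v)

        # Shift by the all-ones key: h * (prod_i g_i)^{-1} = prod_i g_i^{v_i - 1},
        # whose exponents v_i - 1 lie in {-1,0,1}. Any attack on ternary keys
        # in {-1,0,1}^n therefore applies to {0,1,2}^n as well.
        all_ones = act([1] * n)
        h_shifted = h * pow(all_ones, -1, p) % p
        assert h_shifted == act([e - 1 for e in v])

    The $\{-2,0,2\}^n$ case is analogous in this model: such a key is twice a $\{-1,0,1\}^n$ key, so its public key is the square of a ternary-key public key.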

    Classical and Quantum Algorithms for Variants of Subset-Sum via Dynamic Programming

    Subset-Sum is an NP-complete problem where one must decide if a multiset of n integers contains a subset whose elements sum to a target value m. The best known classical and quantum algorithms run in time Õ(2^{n/2}) and Õ(2^{n/3}), respectively, based on the well-known meet-in-the-middle technique. Here we introduce a novel classical dynamic-programming-based data structure with applications to Subset-Sum and a number of variants, including Equal-Sums (where one seeks two disjoint subsets with the same sum), 2-Subset-Sum (a relaxed version of Subset-Sum where each item in the input set can be used twice in the summation), and Shifted-Sums, a generalization of both of these variants, where one seeks two disjoint subsets whose sums differ by some specified value. Given any modulus p, our data structure can be constructed in time O(np), after which queries can be made in time O(n) to the lists of subsets summing to any value modulo p. We use this data structure in combination with variable-time amplitude amplification and a new quantum pair finding algorithm, extending the quantum claw finding algorithm to the multiple solutions case, to give an O(2^{0.504n}) quantum algorithm for Shifted-Sums. This provides a notable improvement on the best known O(2^{0.773n}) classical running time established by Mucha et al. [Mucha et al., 2019]. We also study Pigeonhole Equal-Sums, a variant of Equal-Sums where the existence of a solution is guaranteed by the pigeonhole principle. For this problem we give faster classical and quantum algorithms with running time Õ(2^{n/2}) and Õ(2^{2n/5}), respectively.
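
    A plausible minimal rendering of the modular dynamic-programming data structure is a reachability table over residues combined with backtracking: construction costs O(np), and each reported subset costs O(n). The sketch below is an illustration under those assumptions, not necessarily the paper's exact construction:

        def build_table(a, p):
            # dp[i][r] is True iff some subset of a[:i] sums to r (mod p).
            # Construction takes O(n * p) time and space.
            n = len(a)
            dp = [[False] * p for _ in range(n + 1)]
            dp[0][0] = True          # the empty subset sums to 0
            for i in range(1, n + 1):
                ai = a[i - 1] % p
                for r in range(p):   # item i-1 is either skipped or included
                    dp[i][r] = dp[i - 1][r] or dp[i - 1][(r - ai) % p]
            return dp

        def subsets_summing_to(a, p, dp, target):
            # Yield the index sets of subsets summing to target (mod p); the
            # dp guards ensure every branch reaches a solution, so each
            # subset is reported in O(n) time.
            def walk(i, r, chosen):
                if i == 0:
                    if r == 0:
                        yield sorted(chosen)
                    return
                ai = a[i - 1] % p
                if dp[i - 1][r]:                    # exclude item i-1
                    yield from walk(i - 1, r, chosen)
                if dp[i - 1][(r - ai) % p]:         # include item i-1
                    chosen.append(i - 1)
                    yield from walk(i - 1, (r - ai) % p, chosen)
                    chosen.pop()
            yield from walk(len(a), target % p, [])

        # Example: subsets of [3, 5, 8, 12] summing to 1 modulo 7.
        a, p = [3, 5, 8, 12], 7
        dp = build_table(a, p)
        for idx in subsets_summing_to(a, p, dp, 1):
            print(idx, sum(a[i] for i in idx) % p)   # each printed residue is 1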

    Memory-Efficient Attacks on Small LWE Keys

    The LWE problem is one of the prime candidates for building the most efficient post-quantum secure public key cryptosystems. Many of those schemes, like Kyber, Dilithium, or those belonging to the NTRU family, such as NTRU-HPS, -HRSS, BLISS or GLP, make use of small max norm keys to enhance efficiency. The presumably best attack on these schemes is a hybrid attack that combines combinatorial techniques and lattice reduction. While lattice reduction is not known to be able to exploit the small max norm choices, May recently showed (Crypto 2021) that such choices allow for more efficient combinatorial attacks. However, these combinatorial attacks suffer from enormous memory requirements, which render them inefficient in realistic attack scenarios and hence make their general consideration when assessing security questionable. Therefore, more memory-efficient substitutes for these algorithms are needed. In this work, we provide new combinatorial algorithms for recovering small max norm LWE secrets using only a polynomial amount of memory. We provide analyses of our algorithms for secret key distributions of current NTRU, Kyber and Dilithium variants, showing that our new approach outperforms previous memory-efficient algorithms. For instance, considering uniformly random ternary secrets of length $n$, we improve the best known time complexity for polynomial memory algorithms from $2^{1.063n}$ down to $2^{0.926n}$. We obtain even larger gains for LWE secrets in $\{-m,\ldots,m\}^n$ with $m = 2,3$ as found in Kyber and Dilithium. For example, for uniformly random keys in $\{-2,\ldots,2\}^n$, as is the case for Dilithium, we improve the previously best time from $2^{1.742n}$ down to $2^{1.282n}$. Our fastest algorithm incorporates various different algorithmic techniques, but at its heart lies a nested collision search procedure inspired by the Nested-Rho technique of Dinur, Dunkelman, Keller and Shamir (Crypto 2016). Additionally, we heavily exploit the representation technique, originally introduced in the subset sum context, to make our nested approach efficient.
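
    The Rho-style collision search underlying such polynomial-memory algorithms can be sketched generically. The following Floyd cycle-finding collision search for an arbitrary function f is shown only as background for the nested procedure described above; it is not the paper's algorithm, and the toy map at the end is purely hypothetical:

        def rho_collision(f, x0):
            # Find a != b with f(a) == f(b) by cycle-finding on the
            # sequence x_{i+1} = f(x_i): constant memory, in contrast to the
            # ~sqrt(|range|) memory of a hash-table birthday search.
            # Phase 1: detect a cycle with slow/fast pointers.
            slow, fast = f(x0), f(f(x0))
            while slow != fast:
                slow, fast = f(slow), f(f(fast))
            # Phase 2: restart one pointer at x0; advancing both in lockstep,
            # they meet at the cycle entry, and the step just before meeting
            # yields two distinct preimages of the same value.
            a, b = x0, fast
            if a == b:
                return None   # x0 already lies on the cycle; retry with new x0
            while f(a) != f(b):
                a, b = f(a), f(b)
            return a, b

        # Example with a toy compressing map on 16-bit values (hypothetical).
        f = lambda x: (x * x + 1) % (1 << 16)
        pair = rho_collision(f, 2024)
        if pair:
            a, b = pair
            assert a != b and f(a) == f(b)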

    New Time-Memory Trade-Offs for Subset Sum -- Improving ISD in Theory and Practice

    We propose new time-memory trade-offs for the random subset sum problem defined on $(a_1,\ldots,a_n,t)$ over $\mathbb{Z}_{2^n}$. Our trade-offs yield significant running time improvements for every fixed memory limit $M \geq 2^{0.091n}$. Furthermore, we interpolate to the running times of the fastest known algorithms when memory is not limited. Technically, our design introduces a pruning strategy to the construction by Becker-Coron-Joux (BCJ) that allows for an exponentially small success probability. We compensate for this reduced probability by multiple randomized executions. Our main improvement stems from the clever reuse of parts of the computation in subsequent executions to reduce the time complexity per iteration. As an application of our construction, we derive the first non-trivial time-memory trade-offs for Information Set Decoding (ISD) algorithms. Our new algorithms improve on previous (implicit) trade-offs asymptotically as well as practically. Moreover, our optimized implementation also improves on running time, due to reduced memory access costs. We demonstrate this by obtaining a new record computation in decoding quasi-cyclic codes (QC-3138). Using our newly obtained data points, we then extrapolate the hardness of suggested parameter sets for the NIST PQC fourth round candidates McEliece, BIKE and HQC, lowering previous estimates by up to 6 bits and further increasing their reliability.
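
    For orientation, the unlimited-memory baseline these trade-offs interpolate toward is the classic meet-in-the-middle with time and memory $2^{n/2}$. The sketch below shows that baseline (Horowitz-Sahni style) for subset sum modulo $2^n$, not the pruned BCJ construction itself; the parameters in the example are toy values:

        def mitm_subset_sum(a, t, nbits):
            # Meet-in-the-middle for subset sum over Z_{2^nbits}:
            # ~2^{len(a)/2} time and memory. Returns indices i with
            # sum(a[i]) == t (mod 2^nbits), or None.
            mod = 1 << nbits
            half = len(a) // 2
            left, right = a[:half], a[half:]
            # Enumerate all left-half subset sums into a lookup table.
            table = {}
            for m in range(1 << len(left)):
                s = sum(left[i] for i in range(len(left)) if m >> i & 1) % mod
                table.setdefault(s, m)
            # Match each right-half subset against the complementary sum.
            for m in range(1 << len(right)):
                s = sum(right[i] for i in range(len(right)) if m >> i & 1) % mod
                lm = table.get((t - s) % mod)
                if lm is not None:
                    return ([i for i in range(len(left)) if lm >> i & 1] +
                            [half + i for i in range(len(right)) if m >> i & 1])
            return None

        # Example (toy parameters): 7 + 22 + 5 == 34 (mod 2^8).
        a = [13, 7, 22, 5, 19, 8]
        sol = mitm_subset_sum(a, 34, 8)
        assert sol is not None and sum(a[i] for i in sol) % 256 == 34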

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume