    Parallel improved Schnorr-Euchner enumeration SE++ for the CVP and SVP

    The Closest Vector Problem (CVP) and the Shortest Vector Problem (SVP) are central problems in lattice-based cryptanalysis, since they underpin the security of many lattice-based cryptosystems. Despite the importance of these problems, only a few CVP-solvers are publicly available, and their scalability has never been studied. This paper presents a scalable implementation of an enumeration-based CVP-solver for multi-cores, which can be easily adapted to solve the SVP. In particular, it achieves super-linear speedups in some instances on up to 8 cores and almost linear speedups on 16 cores when solving the CVP on a 50-dimensional lattice. Our results show that enumeration-based CVP-solvers can be parallelized as effectively as enumeration-based SVP-solvers, based on a comparison with a state-of-the-art SVP-solver. In addition, we show that the SVP variant of our solver can be optimized so that it becomes 35%-60% faster than the fastest enumeration-based SVP-solver to date.
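
    A minimal sketch of the Schnorr-Euchner enumeration strategy that this family of solvers builds on may help fix ideas (our own illustration, not the paper's code): after a Gram-Schmidt orthogonalization, the coefficient tree is searched depth-first, candidates at each level are tried in a zig-zag order around the projected center so the most promising branches come first, and any branch whose partial squared norm reaches the best norm found so far is pruned.

        // Sketch of Schnorr-Euchner enumeration for the SVP (illustration only,
        // not the paper's implementation). Doubles suffice for tiny dimensions.
        #include <cmath>
        #include <cstdio>
        #include <vector>
        using Vec = std::vector<double>;
        using Mat = std::vector<Vec>;

        static double dot(const Vec& a, const Vec& b) {
            double s = 0; for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
            return s;
        }

        // Gram-Schmidt orthogonalization: mu[i][j] coefficients, squared norms c[i].
        static void gso(const Mat& b, Mat& mu, Vec& c) {
            int n = b.size(), m = b[0].size();
            Mat bs = b;
            mu.assign(n, Vec(n, 0)); c.assign(n, 0);
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < i; ++j) {
                    mu[i][j] = dot(b[i], bs[j]) / c[j];
                    for (int t = 0; t < m; ++t) bs[i][t] -= mu[i][j] * bs[j][t];
                }
                c[i] = dot(bs[i], bs[i]);
            }
        }

        struct Enum {
            Mat mu; Vec c; int n;
            double best;                  // best squared norm found so far
            std::vector<int> x, bestX;

            void search(int k, double partial) {
                if (k < 0) {              // leaf: a full coefficient vector
                    if (partial > 0 && partial < best) { best = partial; bestX = x; }
                    return;
                }
                double center = 0;        // projected center at level k
                for (int j = k + 1; j < n; ++j) center -= x[j] * mu[j][k];
                int x0 = (int)std::llround(center);
                for (int d = 0;; ++d) {   // zig-zag: x0, x0-1, x0+1, x0-2, ...
                    bool alive = false;
                    for (int s = (d == 0 ? 0 : -1); s <= 1; s += 2) {
                        double diff = (x0 + s * d) - center;
                        double np = partial + diff * diff * c[k];
                        if (np < best) {  // prune: partial norm must stay below best
                            alive = true; x[k] = x0 + s * d; search(k - 1, np);
                        }
                        if (d == 0) break;          // one candidate at distance 0
                    }
                    if (!alive) break;    // both sides exceed the bound: subtree done
                }
            }
        };

        int main() {
            Mat b = {{4, 1, 0}, {1, 5, 1}, {0, 1, 6}};  // toy 3-dimensional basis
            Enum e; e.n = b.size();
            gso(b, e.mu, e.c);
            e.best = dot(b[0], b[0]);     // initial radius: first basis vector
            e.bestX.assign(e.n, 0); e.bestX[0] = 1;     // start from b_1 itself
            e.x.assign(e.n, 0);
            e.search(e.n - 1, 0.0);
            std::printf("shortest squared norm: %.0f, coeffs:", e.best);
            for (int xi : e.bestX) std::printf(" %d", xi);
            std::printf("\n");
            return 0;
        }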

    Parallel improved Schnorr-Euchner enumeration SE++ on shared and distributed memory systems, with and without extreme pruning

    The security of lattice-based cryptography relies on the hardness of lattice problems such as the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). This paper presents two parallel implementations of SE++, with and without extreme pruning. SE++ is an enumeration-based CVP-solver that can be easily adapted to solve the SVP. We improved the SVP version of SE++ with an optimization that avoids symmetric branches, improving its performance by roughly 50%, and applied the extreme pruning technique to this improved version. Extreme pruning is the fastest known way to solve the SVP with enumeration: it handles lattices in much higher dimensions in less time than implementations without extreme pruning. Our parallel implementation of SE++ with extreme pruning targets distributed memory multi-core CPU systems, while our SE++ without extreme pruning is designed for shared memory multi-core CPU systems. These implementations address load balancing problems for optimal performance, with a master-slave mechanism in the distributed memory implementation and specific bounds for task creation in the shared memory implementation. The parallel implementation of SE++ without extreme pruning scales linearly for up to 8 threads and almost linearly for 16 threads. It also achieves super-linear speedups on some instances, as the workload may shrink when some threads find shorter vectors earlier than the sequential implementation would. Tests with our improved SE++ implementation showed that it outperforms the state-of-the-art implementation by 35% to 60%, while maintaining scalability similar to the original SE++. Our parallel implementation of SE++ with extreme pruning achieves linear speedups for up to 8 (working) processes and speedups of up to 13x for 16 (working) processes.
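
    The pruning test itself is only a few lines. Given a profile of squared bounding radii, a node is cut as soon as its partial norm exceeds the bound for its level; extreme pruning chooses a very aggressive profile, accepts a tiny success probability per trial, and rerandomizes the basis between trials. The sketch below uses a simple linear profile as a stand-in for the numerically optimized profiles used in practice (our illustration, not the SE++ code):

        // Pruned-enumeration bound test (illustration). Levels are counted from the
        // leaves: level k means k Gram-Schmidt coordinates have been fixed.
        #include <cstdio>
        #include <vector>

        // Linear pruning profile: R2[k] = (k / n) * fullRadius2 for k = 1..n.
        // Extreme pruning would replace this with a numerically optimized, much
        // more aggressive profile and rerandomize the basis between failed trials.
        std::vector<double> linear_profile(int n, double fullRadius2) {
            std::vector<double> r2(n + 1, 0.0);
            for (int k = 1; k <= n; ++k) r2[k] = fullRadius2 * k / n;
            return r2;
        }

        // Keep a node only while its partial squared norm respects the profile.
        bool keep(const std::vector<double>& r2, int level, double partialNorm2) {
            return partialNorm2 <= r2[level];
        }

        int main() {
            auto r2 = linear_profile(40, 100.0);
            std::printf("bound after fixing 10 of 40 levels: %.1f\n", r2[10]); // 25.0
            std::printf("keep node with partial norm 30.0? %d\n", keep(r2, 10, 30.0));
            return 0;
        }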

    LUSA: the HPC library for lattice-based cryptanalysis

    This paper introduces LUSA - the Lattice Unified Set of Algorithms library - a C++ library that comprises many high-performance, parallel implementations of lattice algorithms, with particular focus on lattice-based cryptanalysis. Currently, LUSA offers algorithms for lattice reduction and the SVP. LUSA was designed to 1) be simple to install and use, 2) have no external dependencies, 3) target lattice-based cryptanalysis specifically, including the majority of the most relevant algorithms in this field, and 4) offer efficient, parallel and scalable implementations of those algorithms. LUSA exploits parallelism mainly at the thread level, being based on OpenMP. However, the code is also written to be efficient at the cache and operation level, taking advantage of carefully sorted data structures and data-level parallelism. This paper shows that LUSA delivers on these promises: it is simple to use while consistently outperforming its counterparts, such as NTL, plll and fplll, and it offers scalable, parallel implementations of the most relevant algorithms to date, which are currently not available in other libraries.
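
    The abstract does not show LUSA's API, so the sketch below only illustrates the generic OpenMP pattern it describes: split the search at the top level into independent subtrees, distribute them with dynamic scheduling for load balance, and let every thread prune against a shared best result. All names here are our own stand-ins, not LUSA functions:

        // OpenMP work-splitting pattern for tree search (illustration only; these
        // are not LUSA's real functions). Compile with -fopenmp.
        #include <cstdio>

        // Stand-in for enumerating one subtree: returns the best squared norm found
        // in the subtree rooted at coefficient `root`, pruning against `bound`.
        static double search_subtree(int root, double bound) {
            double v = 1.0 + 0.7 * (root < 0 ? -root : root);  // dummy workload
            return v < bound ? v : bound;
        }

        int main() {
            double best = 1e30;                        // shared best squared norm
            #pragma omp parallel for schedule(dynamic) // dynamic: uneven subtrees
            for (int root = -8; root <= 8; ++root) {   // one subtree per root coeff
                double bound;
                #pragma omp critical
                bound = best;                          // snapshot the shared bound
                double local = search_subtree(root, bound);
                #pragma omp critical
                if (local < best) best = local;        // publish improvements
            }
            std::printf("best: %.2f\n", best);
            return 0;
        }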

    Reduction algorithms for the cryptanalysis of lattice based asymmetrical cryptosystems

    Thesis (Master), Izmir Institute of Technology, Computer Engineering, Izmir, 2008. Includes bibliographical references (leaves 79-91). Text in English; abstract in Turkish and English. xi, 119 leaves. The theory of lattices has attracted a great deal of attention in cryptology in recent years. Several cryptosystems are constructed based on the hardness of lattice problems such as the shortest vector problem and the closest vector problem. The aim of this thesis is to study the most commonly used lattice basis reduction algorithms, namely the Lenstra-Lenstra-Lovasz (LLL) and Block Korkine-Zolotarev (BKZ) algorithms, which are used to approximately solve the aforementioned lattice problems. Furthermore, the most popular practical variants of these algorithms are evaluated experimentally by varying the common reduction parameter delta, in order to offer practical assessments of the effect of this parameter on the basis reduction process. Such practical assessments are believed to have a non-negligible impact on the theory of lattice reduction, and thus on the cryptanalysis of lattice cryptosystems, since the reduction process in practice is mainly controlled by heuristics.
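
    The role of the reduction parameter delta is easiest to see in a compact LLL implementation (a floating-point sketch for small dimensions, not the code evaluated in the thesis): delta enters only through the Lovász condition, and pushing it toward 1 forces more swaps and therefore a stronger but slower reduction.

        // Textbook LLL in floating point (sketch; real implementations use exact
        // or carefully tracked arithmetic). delta must lie in (0.25, 1).
        #include <algorithm>
        #include <cmath>
        #include <cstdio>
        #include <vector>
        using Vec = std::vector<double>;
        using Mat = std::vector<Vec>;

        static double dot(const Vec& a, const Vec& b) {
            double s = 0; for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
            return s;
        }

        // Recompute the full Gram-Schmidt data (simple but O(n^3) per call).
        static void gso(const Mat& b, Mat& mu, Vec& c) {
            int n = b.size(), m = b[0].size();
            Mat bs = b; mu.assign(n, Vec(n, 0)); c.assign(n, 0);
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < i; ++j) {
                    mu[i][j] = dot(b[i], bs[j]) / c[j];
                    for (int t = 0; t < m; ++t) bs[i][t] -= mu[i][j] * bs[j][t];
                }
                c[i] = dot(bs[i], bs[i]);
            }
        }

        void lll(Mat& b, double delta) {
            int n = b.size(), m = b[0].size();
            Mat mu; Vec c;
            for (int k = 1; k < n;) {
                gso(b, mu, c);
                for (int j = k - 1; j >= 0; --j) {   // size-reduce b_k
                    long long q = std::llround(mu[k][j]);
                    if (q) {
                        for (int t = 0; t < m; ++t) b[k][t] -= (double)q * b[j][t];
                        gso(b, mu, c);               // refresh mu (clear, not fast)
                    }
                }
                // Lovász condition: the only place delta enters.
                if (c[k] >= (delta - mu[k][k - 1] * mu[k][k - 1]) * c[k - 1]) ++k;
                else { std::swap(b[k], b[k - 1]); k = std::max(k - 1, 1); }
            }
        }

        int main() {
            Mat b = {{1, 1, 1}, {-1, 0, 2}, {3, 5, 6}};
            lll(b, 0.99);                  // delta close to 1: strongest reduction
            for (auto& v : b) std::printf("(%g, %g, %g)\n", v[0], v[1], v[2]);
            return 0;
        }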

    Lattice sampling algorithms for communications

    In this thesis, we investigate the problem of decoding for wireless communications from the perspective of lattice sampling. In particular, computationally efficient lattice sampling algorithms are exploited to enhance system performance, allowing the system to trade performance against complexity through the sample size. Based on this idea, several novel lattice sampling algorithms are presented in this thesis. First, to address the inherent issues of random sampling, a derandomized sampling algorithm is proposed. Specifically, by setting a probability threshold to sample candidates, the whole sampling procedure becomes deterministic, leading to considerable performance improvement and complexity reduction over randomized sampling. Through analysis and optimization, the correct decoding radius is given with the optimized parameter setting. Moreover, the upper bound on the sample size that corresponds to near-maximum likelihood (ML) performance is also derived. The proposed derandomized sampling algorithm is then introduced into the soft-output decoding of MIMO bit-interleaved coded modulation (BICM) systems to further improve the decoding performance, where we show that it achieves near-maximum a posteriori (MAP) performance. We then extend the well-known Markov chain Monte Carlo methods to sampling from the lattice Gaussian distribution, which has emerged as a common theme in lattice coding and decoding, cryptography, and mathematics. We first show that statistical Gibbs sampling is capable of performing lattice Gaussian sampling. Then, a more efficient algorithm referred to as Gibbs-Klein sampling is proposed, which samples multiple variables block by block using Klein's algorithm. After that, to improve the convergence rate, we introduce conventional statistical Metropolis-Hastings (MH) sampling into lattice Gaussian distributions and propose three MH-based sampling algorithms. The first, named the MH multivariate sampling algorithm, is shown to have a faster convergence rate than Gibbs-Klein sampling. Next, the symmetric distribution generated by Klein's algorithm is taken as the proposal distribution, which offers an efficient way to perform Metropolis sampling over high-dimensional models. Finally, the independent Metropolis-Hastings-Klein (MHK) algorithm is proposed, whose Markov chain is proved to converge to the stationary distribution exponentially fast. Furthermore, its convergence rate can be explicitly calculated in terms of the theta series, making it possible to predict the exact mixing time of the underlying Markov chain.
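
    Klein's algorithm, on which several of the proposed samplers build, is short: it is Babai's nearest-plane procedure with the deterministic rounding at each level replaced by a one-dimensional discrete Gaussian centered at the Babai value. A minimal sketch, using a simple rejection sampler for the integer Gaussian (our illustration, not the thesis code):

        // Sketch of Klein's algorithm: randomized nearest-plane sampling whose
        // output approximates a lattice Gaussian around the target t.
        #include <cmath>
        #include <cstdio>
        #include <random>
        #include <vector>
        using Vec = std::vector<double>;
        using Mat = std::vector<Vec>;

        static double dot(const Vec& a, const Vec& b) {
            double s = 0; for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
            return s;
        }

        // Discrete Gaussian on Z, proportional to exp(-(x-c)^2 / (2 sigma^2)),
        // via rejection sampling over a truncated window.
        static long sample_z(double c, double sigma, std::mt19937& rng) {
            std::uniform_int_distribution<long> U((long)std::floor(c - 10 * sigma) - 1,
                                                  (long)std::ceil(c + 10 * sigma) + 1);
            std::uniform_real_distribution<double> V(0.0, 1.0);
            for (;;) {
                long x = U(rng);
                if (V(rng) < std::exp(-(x - c) * (x - c) / (2 * sigma * sigma)))
                    return x;
            }
        }

        // One Klein sample: a lattice point near t (s controls the width).
        Vec klein(const Mat& b, Vec t, double s, std::mt19937& rng) {
            int n = b.size(), m = b[0].size();
            Mat bs = b; Vec c(n);                 // Gram-Schmidt vectors and norms
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < i; ++j) {
                    double mu = dot(b[i], bs[j]) / c[j];
                    for (int k = 0; k < m; ++k) bs[i][k] -= mu * bs[j][k];
                }
                c[i] = dot(bs[i], bs[i]);
            }
            Vec v(m, 0.0);
            for (int i = n - 1; i >= 0; --i) {    // walk the levels top-down
                double center = dot(t, bs[i]) / c[i];            // Babai's value
                long x = sample_z(center, s / std::sqrt(c[i]), rng);
                for (int k = 0; k < m; ++k) { v[k] += x * b[i][k]; t[k] -= x * b[i][k]; }
            }
            return v;
        }

        int main() {
            std::mt19937 rng(42);
            Mat b = {{4, 1, 0}, {1, 5, 1}, {0, 1, 6}};
            Vec t = {2.5, 3.5, 1.0};
            Vec v = klein(b, t, 3.0, rng);
            std::printf("sample: (%g, %g, %g)\n", v[0], v[1], v[2]);
            return 0;
        }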

    NewHope: A Mobile Implementation of a Post-Quantum Cryptographic Key Encapsulation Mechanism

    The National Institute of Standards and Technology (NIST) anticipates the appearance of large-scale quantum computers by 2036 [34]. Because such machines would threaten widely used asymmetric algorithms, NIST launched a Post-Quantum Cryptography Standardization Project to find quantum-secure alternatives. The NewHope post-quantum cryptography (PQC) key encapsulation mechanism (KEM) is the only Round 2 candidate to simultaneously achieve small key values through the use of a security problem with sufficient confidence in its security, while mitigating any known vulnerabilities. This research contributes to the NIST project's overall goal by assessing the platform flexibility and resource requirements of the NewHope KEMs on an Android mobile device. The resource requirements analyzed are transmission size as well as scheme runtime, central processing unit (CPU), memory, and energy usage. Results from each NewHope KEM instantiation are compared amongst each other, to a baseline application, and to results from previous work. The NewHope PQC KEM was demonstrated to have sufficient flexibility for mobile implementation, competitive performance with other PQC KEMs, and competitive scheme runtime compared with current key exchange algorithms.
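
    The measurement setup reduces to timing the three KEM operations and checking that both parties derive the same secret. The sketch below shows the shape of such a harness; the kem_* functions are hypothetical stand-ins, not NewHope's real API:

        // Shape of a KEM benchmark harness (illustration; the kem_* functions
        // are hypothetical stand-ins, not the NewHope reference API).
        #include <chrono>
        #include <cstdio>
        #include <string>

        struct KeyPair { std::string pk, sk; };
        struct Encapsulation { std::string ciphertext, sharedSecret; };

        // Toy stand-ins so the harness runs; a real benchmark calls the scheme.
        KeyPair kem_keygen() { return {"pk-bytes", "sk-bytes"}; }
        Encapsulation kem_encaps(const std::string& pk) { return {"ct-bytes", "ss"}; }
        std::string kem_decaps(const std::string& ct, const std::string& sk) { return "ss"; }

        template <class F> double ms(F f) {      // wall-clock time of f in ms
            auto t0 = std::chrono::steady_clock::now();
            f();
            auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        }

        int main() {
            KeyPair kp; Encapsulation enc; std::string ss;
            double tKeygen = ms([&] { kp = kem_keygen(); });
            double tEncaps = ms([&] { enc = kem_encaps(kp.pk); });
            double tDecaps = ms([&] { ss = kem_decaps(enc.ciphertext, kp.sk); });
            std::printf("keygen %.3f ms, encaps %.3f ms, decaps %.3f ms, match: %d\n",
                        tKeygen, tEncaps, tDecaps, ss == enc.sharedSecret);
            // Transmission size = |pk| + |ciphertext|, the two wire payloads.
            std::printf("bytes on the wire: %zu\n", kp.pk.size() + enc.ciphertext.size());
            return 0;
        }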

    Random Sampling Revisited: Lattice Enumeration with Discrete Pruning

    In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the n-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT '10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and allows, for the first time, a reasonable estimate of the success probability of random sampling and its variants, as well as comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary: they have different characteristics and offer different trade-offs to speed up enumeration.
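
    The central computational task in this analysis is estimating vol(Ball ∩ Box) for many boxes. The paper develops efficient dedicated methods for this; the sketch below shows only the naive Monte Carlo baseline such methods improve on (our illustration): sample uniformly in the box and scale the hit rate by the box volume.

        // Naive Monte Carlo estimate of vol(Ball(R) ∩ Box) in n dimensions
        // (baseline illustration; the paper computes these volumes far more
        // efficiently with dedicated numeric methods and statistical inference).
        #include <cstdio>
        #include <random>
        #include <vector>

        double ball_box_volume(const std::vector<double>& lo,
                               const std::vector<double>& hi,
                               double R, int samples, std::mt19937& rng) {
            int n = lo.size();
            double boxVol = 1.0;
            for (int i = 0; i < n; ++i) boxVol *= hi[i] - lo[i];
            std::uniform_real_distribution<double> U(0.0, 1.0);
            int hits = 0;
            for (int s = 0; s < samples; ++s) {
                double norm2 = 0.0;
                for (int i = 0; i < n; ++i) {     // uniform point in the box
                    double x = lo[i] + (hi[i] - lo[i]) * U(rng);
                    norm2 += x * x;
                }
                if (norm2 <= R * R) ++hits;       // inside the ball?
            }
            return boxVol * hits / samples;       // hit rate times box volume
        }

        int main() {
            std::mt19937 rng(1);
            std::vector<double> lo = {0, 0, 0, 0}, hi = {1, 1, 1, 1};
            // Unit 4-cube in the positive orthant vs. the unit ball:
            // exact value is vol(B_4(1)) / 16 = (pi^2 / 2) / 16 ≈ 0.3084.
            std::printf("estimate: %.4f\n",
                        ball_box_volume(lo, hi, 1.0, 1000000, rng));
            return 0;
        }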

    Learning strikes again: The case of the DRS signature scheme

    Lattice signature schemes generally require particular care to prevent secret information from leaking through signature transcripts. For example, the Goldreich-Goldwasser-Halevi (GGH) signature scheme and the NTRUSign scheme were completely broken by the parallelepiped-learning attack of Nguyen and Regev (Eurocrypt 2006). Several heuristic countermeasures were also shown vulnerable to similar statistical attacks. At PKC 2008, Plantard, Susilo and Win proposed a new variant of GGH, informally arguing resistance to such attacks. Based on this variant, Plantard, Sipasseuth, Dumondelle and Susilo proposed a concrete signature scheme, called DRS, that was accepted into round 1 of the NIST post-quantum cryptography project. In this work, we propose yet another statistical attack and demonstrate a weakness of the DRS scheme: one can recover partial information about the secret key from sufficiently many signatures. One difficulty is that, due to the DRS reduction algorithm, the relation between the statistical leak and the secret appears more intricate. We work around this difficulty by training a statistical model, using a few features that we designed according to a simple heuristic analysis. While we only recover partial information on the secret key, this information is easily exploited by lattice attacks, significantly decreasing their complexity. Concretely, we claim that, provided enough signatures are available, the secret key may be recovered using BKZ-138 for the first set of DRS parameters submitted to NIST. This puts the security level of this parameter set below 80 bits (maybe even 70 bits), compared to an original claim of 128 bits.

    Joint signal detection and channel estimation in rank-deficient MIMO systems

    The evolution of the thriving 802.11 family of standards has encouraged the development of technologies for wireless local area networks (WLANs). To cope with the ever-growing need for very high data rate communications, multiple-antenna (MIMO) systems are a viable solution: they increase the transmission rate without requiring additional power or bandwidth. However, industry is still reluctant to increase the number of antennas on laptops and wireless accessories. Moreover, indoors, rank deficiency of the channel matrix can arise from the scattering nature of the propagation paths; outdoors, the same phenomenon is caused by long transmission distances. Motivated by these issues, this project studies the viability of wideband wireless transceivers able to regularize the rank deficiency of the wireless channel. It aims to develop techniques capable of separating M co-channel signals, even with a single antenna, and of estimating the channel accurately. The solutions described in this document seek to overcome the difficulties that the medium poses to wideband wireless transceivers. The outcome of this study is a transceiver algorithm suited to rank-deficient MIMO systems.
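
    The regularization idea can be seen in one line (our illustration, not the thesis algorithm): the zero-forcing detector needs the inverse of H^T H, which does not exist when the channel matrix H is rank-deficient, whereas an MMSE-style detector inverts H^T H + sigma^2 I, which is invertible whenever sigma > 0.

        // Rank-deficient 2x2 real channel: zero-forcing fails (H^T H singular),
        // but MMSE still produces an estimate by inverting H^T H + s2 * I
        // (illustration of the regularization idea, not the thesis algorithm).
        #include <cstdio>

        int main() {
            // Rank-1 channel: the second row is a multiple of the first.
            double H[2][2] = {{1, 2}, {2, 4}};
            double x[2] = {1, -1};                      // transmitted symbols
            double y[2] = {H[0][0]*x[0] + H[0][1]*x[1], // received (noiseless here)
                           H[1][0]*x[0] + H[1][1]*x[1]};
            double s2 = 0.1;                            // noise variance (regularizer)

            // A = H^T H + s2 * I  (2x2, formed explicitly)
            double A[2][2] = {
                {H[0][0]*H[0][0] + H[1][0]*H[1][0] + s2,
                 H[0][0]*H[0][1] + H[1][0]*H[1][1]},
                {H[0][1]*H[0][0] + H[1][1]*H[1][0],
                 H[0][1]*H[0][1] + H[1][1]*H[1][1] + s2}};
            double det = A[0][0]*A[1][1] - A[0][1]*A[1][0]; // > 0 whenever s2 > 0
            double b[2] = {H[0][0]*y[0] + H[1][0]*y[1],     // b = H^T y
                           H[0][1]*y[0] + H[1][1]*y[1]};
            double xhat[2] = {( A[1][1]*b[0] - A[0][1]*b[1]) / det, // xhat = A^-1 b
                              (-A[1][0]*b[0] + A[0][0]*b[1]) / det};

            // With s2 = 0 the determinant of H^T H is exactly 0 here, so ZF is
            // undefined; the MMSE estimate collapses onto the channel's row space,
            // since a rank-1 channel cannot separate the two streams on its own.
            std::printf("MMSE estimate: (%.3f, %.3f)\n", xhat[0], xhat[1]);
            return 0;
        }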