
    Faster tuple lattice sieving using spherical locality-sensitive filters

    To overcome the large memory requirement of classical lattice sieving algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS 2016] studied tuple lattice sieving, where tuples instead of pairs of lattice vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017] recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in dimension d in time 2^(0.3717d + o(d)), using a technique similar to locality-sensitive hashing for finding nearest neighbors. In this work, we generalize the spherical locality-sensitive filters of Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near neighbor searching on dense data sets, and we apply these techniques to tuple lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP in time 2^(0.3588d + o(d)). For practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this shows that a triple sieve uses less space and less time than the current best near-linear space double sieve. Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
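
    The core idea of tuple sieving, combining several list vectors into a shorter one, can be conveyed by a toy brute-force triple reduction. The C++ sketch below is illustrative only, with small hard-coded integer vectors; the actual algorithms replace this cubic search with the locality-sensitive filters discussed in the abstract.

        // Minimal sketch of the tuple-combination step behind tuple sieving:
        // brute-force search for triples from a list whose signed sum is shorter
        // than the longest vector in the triple.  Illustrative only; real sieves
        // avoid this cubic search with locality-sensitive filters.
        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        using Vec = std::array<int64_t, 4>;   // toy dimension 4

        int64_t norm2(const Vec& v) {
            int64_t s = 0;
            for (int64_t x : v) s += x * x;
            return s;
        }

        int main() {
            std::vector<Vec> list = {{{7, 1, -3, 2}}, {{-5, 4, 2, -1}}, {{-3, -4, 1, -2}},
                                     {{6, -2, 0, 3}}, {{1, 1, 4, -4}}};
            for (size_t i = 0; i < list.size(); ++i)
                for (size_t j = i + 1; j < list.size(); ++j)
                    for (size_t k = j + 1; k < list.size(); ++k)
                        for (int sj : {-1, 1})
                            for (int sk : {-1, 1}) {
                                Vec sum;
                                for (int t = 0; t < 4; ++t)
                                    sum[t] = list[i][t] + sj * list[j][t] + sk * list[k][t];
                                int64_t longest = std::max({norm2(list[i]), norm2(list[j]),
                                                            norm2(list[k])});
                                if (norm2(sum) > 0 && norm2(sum) < longest)
                                    std::cout << "reducing triple (" << i << "," << j << ","
                                              << k << "): squared norm " << norm2(sum)
                                              << " < " << longest << "\n";
                            }
        }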

    Large-Scale Simulation of Shor's Quantum Factoring Algorithm

    Shor's factoring algorithm is one of the most anticipated applications of quantum computing. However, the limited capabilities of today's quantum computers only permit a study of Shor's algorithm for very small numbers. Here we show how large GPU-based supercomputers can be used to assess the performance of Shor's algorithm for numbers that are out of reach for current and near-term quantum hardware. First, we study Shor's original factoring algorithm. While theoretical bounds suggest success probabilities of only 3-4%, we find average success probabilities above 50%, due to a high frequency of "lucky" cases, defined as successful factorizations despite unmet sufficient conditions. Second, we investigate a powerful post-processing procedure, by which the success probability can be brought arbitrarily close to one, with only a single run of Shor's quantum algorithm. Finally, we study the effectiveness of this post-processing procedure in the presence of typical errors in quantum processing hardware. We find that the quantum factoring algorithm exhibits a particular form of universality and resilience against the different types of errors. The largest semiprime that we have factored by executing Shor's algorithm on a GPU-based supercomputer, without exploiting prior knowledge of the solution, is 549755813701 = 712321 * 771781. We put forward the challenge of factoring, without oversimplification, a non-trivial semiprime larger than this number on any quantum computing device. Comment: differs from the published version in formatting and style; open source code available at https://jugit.fz-juelich.de/qip/shorgp
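
    The post-processing mentioned above builds on the purely classical tail of Shor's algorithm: recovering the period from a measurement outcome via continued fractions and turning it into factors. A minimal sketch of that classical step follows; it is not the paper's code, and it uses a fabricated "ideal" measurement outcome for the toy case N = 15.

        // Classical post-processing of Shor's algorithm, sketched for a toy case:
        // the continued-fraction expansion of y / 2^q yields a candidate period r
        // of a mod N, and gcd(a^(r/2) +/- 1, N) may then reveal a factor.
        // Toy sizes only; powmod has no overflow protection.
        #include <cstdint>
        #include <iostream>
        #include <numeric>

        using u64 = unsigned long long;

        u64 powmod(u64 a, u64 e, u64 n) {
            u64 r = 1 % n;
            while (e) {
                if (e & 1) r = r * a % n;
                a = a * a % n;
                e >>= 1;
            }
            return r;
        }

        // Denominator of the last continued-fraction convergent of y/Q that does
        // not exceed N: the candidate period.
        u64 cf_denominator(u64 y, u64 Q, u64 N) {
            u64 num = y, den = Q;
            u64 qm2 = 1, qm1 = 0, best = 1;    // q_{-2} = 1, q_{-1} = 0
            while (den != 0) {
                u64 a = num / den;             // next partial quotient
                u64 qk = a * qm1 + qm2;        // next convergent's denominator
                if (qk > N) break;
                if (qk > 0) best = qk;
                qm2 = qm1;
                qm1 = qk;
                u64 rem = num % den;
                num = den;
                den = rem;
            }
            return best;
        }

        int main() {
            u64 N = 15, a = 7, Q = 1ULL << 8;  // toy instance, 8-bit phase register
            u64 y = 192;                       // stand-in for a measurement: 192/256 = 3/4
            u64 r = cf_denominator(y, Q, N);   // candidate period, here r = 4
            if (r % 2 == 0 && powmod(a, r, N) == 1) {
                u64 x = powmod(a, r / 2, N);
                std::cout << "factors: " << std::gcd(x - 1, N) << " * "
                          << std::gcd(x + 1, N) << " = " << N << "\n";  // 3 * 5 = 15
            }
        }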

    Shortest vector from lattice sieving: A few dimensions for free

    Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension n are sieve algorithms, which have heuristic complexity estimates ranging from (4/3)^(n+o(n)) down to (3/2)^(n/2+o(n)) when Locality Sensitive Hashing techniques are used. Sieve algorithms are however outperformed by pruned enumeration algorithms in practice by several orders of magnitude, despite the larger super-exponential asymptotic complexity 2^(Θ(n log n)) of the latter. In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than n - d solve SVP in dimension n, where d = Θ(n / log n). Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it over a simple sieve algorithm with (4/3)^(n+o(n)) complexity, and it outperforms the best sieve algorithms from the literature by a factor of 10 in dimensions 70-80. It performs less than an order of magnitude slower than pruned enumeration in the same range. By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve techniques to outperform pruned enumeration in practice in the near future.
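
    The sub-exponential nature of the claimed saving is easy to see numerically: sieving in dimension n - d instead of n saves roughly a factor (4/3)^d, with d on the order of n / log n. The C++ sketch below only illustrates that growth; the constants and o(n) terms, which matter a great deal in practice, are ignored, so the printed numbers are not the paper's measured speedups.

        // Back-of-the-envelope illustration (not from the paper): the rough
        // speedup from sieving in dimension n - d instead of n, assuming a
        // (4/3)^n sieve and d proportional to n / log n, with all constants
        // and o(n) terms ignored.
        #include <cmath>
        #include <iostream>

        int main() {
            for (int n : {60, 70, 80, 90}) {
                double d = n / std::log2(n);             // d = Theta(n / log n)
                double saving = std::pow(4.0 / 3.0, d);  // ratio of the two sieve costs
                std::cout << "n = " << n << ": d ~ " << d
                          << ", heuristic speedup ~ " << saving << "x\n";
            }
        }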

    Lattice Sieving With G6K

    Recent advances in quantum computing threaten the cryptography we use today. This has led to a need for new cryptographic algorithms that are safe against quantum computers. The American standardization organization NIST has now chosen four quantum-safe algorithms in its process of establishing new cryptographic standards. Three of the four algorithms are based on the hardness of finding a shortest vector in a lattice. The biggest threat to such schemes is lattice reduction, and one of the best tools for lattice reduction is the G6K framework. In this thesis, we study sieving algorithms and lattice reduction strategies implemented in G6K. After an introduction to cryptography, we cover the necessary preliminary lattice theory, important concepts, and related problems. We then look at lattice reduction, studying different approaches with a main focus on lattice sieving, explore the G6K framework, and finally perform experiments using G6K. The results often depend on the type of lattice we are working on. Our experiments show that it is still possible to improve G6K for solving the shortest vector problem for some lattice types.

    On Generating Prime Numbers Efficiently

    Prime numbers can be considered the building blocks of the natural numbers, with innumerable applications in number theory and cryptography. There exist multiple different sieving algorithms for the generation of prime numbers. In this thesis, an elementary modular result is utilized to construct an analytically useful generator function and its inverse function. These functions are used to derive a prime sieving algorithm with (log)log-linear time complexity, which is then further optimized to linear time complexity. The constructed algorithms and their operation are studied, and the linear implementations in JS, Python and C++ are compared to other prime sieves.
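
    For reference, the textbook way to reach linear time is to cross every composite off exactly once, by its smallest prime factor. The sketch below shows that standard linear sieve in C++; it is a generic routine, not the generator-function construction developed in the thesis.

        // Standard linear-time prime sieve: every composite is marked exactly
        // once, by its smallest prime factor.  Generic textbook routine, shown
        // only to illustrate what "linear time complexity" means here.
        #include <cstdint>
        #include <iostream>
        #include <vector>

        std::vector<int> linear_sieve(int n) {
            std::vector<int> spf(n + 1, 0);   // smallest prime factor, 0 = unmarked
            std::vector<int> primes;
            for (int i = 2; i <= n; ++i) {
                if (spf[i] == 0) {            // i has no smaller factor: it is prime
                    spf[i] = i;
                    primes.push_back(i);
                }
                for (int p : primes) {
                    if (p > spf[i] || (int64_t)p * i > n) break;
                    spf[p * i] = p;           // p * i is crossed off exactly once
                }
            }
            return primes;
        }

        int main() {
            for (int p : linear_sieve(50)) std::cout << p << ' ';
            std::cout << '\n';                // 2 3 5 7 11 ... 47
        }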

    Accelerating lattice reduction with FPGAs

    We describe an FPGA accelerator for the Kannan–Fincke–Pohst enumeration algorithm (KFP) solving the Shortest Lattice Vector Problem (SVP). This is the first FPGA implementation of KFP specifically targeting cryptographically relevant dimensions. In order to optimize this implementation, we theoretically and experimentally study several facets of KFP, including its efficient parallelization and its underlying arithmetic. Our FPGA accelerator can be used both for solving stand-alone instances of SVP (within a hybrid CPU–FPGA compound) and for the myriads of smaller-dimensional SVP instances arising in a BKZ-type algorithm. For devices of comparable costs, our FPGA implementation is faster than a multi-core CPU implementation by a factor around 2.12.
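
    At heart, KFP is a depth-first search over integer coefficient vectors with a norm bound. The toy C++ sketch below conveys only that tree-search shape, enumerating plain coefficient combinations up to a fixed bound; the real algorithm works in the Gram–Schmidt orthogonalized basis and prunes with projected norms, which is what the FPGA design parallelizes.

        // Toy enumeration for SVP: depth-first search over bounded integer
        // coefficient vectors, keeping the shortest nonzero combination of the
        // basis vectors.  Conveys only the tree-search shape of Kannan-Fincke-
        // Pohst; the real algorithm prunes with Gram-Schmidt projected norms.
        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <limits>
        #include <vector>

        constexpr int DIM = 3;
        using Vec = std::array<int64_t, DIM>;

        int64_t norm2(const Vec& v) {
            int64_t s = 0;
            for (int64_t x : v) s += x * x;
            return s;
        }

        void search(const std::vector<Vec>& basis, size_t level, Vec partial,
                    int bound, Vec& best, int64_t& best_norm) {
            if (level == basis.size()) {
                int64_t n = norm2(partial);
                if (n > 0 && n < best_norm) { best = partial; best_norm = n; }
                return;
            }
            for (int c = -bound; c <= bound; ++c) {      // branch on one coefficient
                Vec next = partial;
                for (int i = 0; i < DIM; ++i) next[i] += c * basis[level][i];
                search(basis, level + 1, next, bound, best, best_norm);
            }
        }

        int main() {
            std::vector<Vec> basis = {{{4, 1, 0}}, {{1, 5, 1}}, {{0, 2, 6}}};
            Vec best{};
            int64_t best_norm = std::numeric_limits<int64_t>::max();
            search(basis, 0, Vec{}, 3, best, best_norm);
            std::cout << "shortest vector found: (" << best[0] << ", " << best[1]
                      << ", " << best[2] << "), squared norm " << best_norm << "\n";
        }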

    Progressive lattice sieving

    Most algorithms for hard lattice problems are based on the principle of rank reduction: to solve a problem in a d-dimensional lattice, one first solves one or more problem instances in a sublattice of rank d - 1, and then uses this information to find a solution to the original problem. Existing lattice sieving methods, however, tackle lattice problems such as the shortest vector problem (SVP) directly, and work with the full-rank lattice from the start. Lattice sieving further seems to benefit less from starting with reduced bases than other methods, and finding an approximate solution takes almost as long as finding an exact solution. These properties currently set sieving apart from other methods. In this work we consider a progressive approach to lattice sieving, where we gradually introduce new basis vectors only when the sieve has stabilized on the previous basis vectors. This leads to improved (heuristic) guarantees on finding approximate shortest vectors, a bigger practical impact of the quality of the basis on the run-time, better memory management, a smoother and more predictable behavior of the algorithm, and significantly faster convergence: compared to traditional approaches, we save a factor of 20 to 40 in the time complexity for SVP.
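
    Structurally, the progressive approach wraps an ordinary sieve in a loop that feeds it one basis vector at a time, moving on only once the current list has stabilized. In the C++ sketch below the inner "sieve" is just a naive pairwise reduction over hard-coded integer vectors, standing in for a real GaussSieve; only the outer progressive loop reflects the idea described above.

        // Skeleton of progressive sieving: introduce basis vectors one at a
        // time and run the (here: naive pairwise) reduction until it stabilizes
        // before adding the next vector.  The inner loop is a stand-in for a
        // real sieve such as the GaussSieve.
        #include <algorithm>
        #include <array>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        constexpr int DIM = 4;
        using Vec = std::array<int64_t, DIM>;

        int64_t norm2(const Vec& v) {
            int64_t s = 0;
            for (int64_t x : v) s += x * x;
            return s;
        }

        // One pass of pairwise reduction; returns true if any vector got shorter.
        bool reduce_pass(std::vector<Vec>& list) {
            bool changed = false;
            for (size_t i = 0; i < list.size(); ++i)
                for (size_t j = 0; j < list.size(); ++j) {
                    if (i == j) continue;
                    for (int s : {-1, 1}) {
                        Vec cand;
                        for (int k = 0; k < DIM; ++k)
                            cand[k] = list[i][k] + s * list[j][k];
                        if (norm2(cand) > 0 && norm2(cand) < norm2(list[i])) {
                            list[i] = cand;
                            changed = true;
                        }
                    }
                }
            return changed;
        }

        int main() {
            std::vector<Vec> basis = {{{9, 1, 0, 0}}, {{2, 8, 1, 0}},
                                      {{0, 3, 7, 1}}, {{1, 0, 2, 9}}};
            std::vector<Vec> list;
            for (const Vec& b : basis) {        // progressive: grow the sublattice
                list.push_back(b);
                while (reduce_pass(list)) {}    // sieve until stable, then move on
                int64_t best = norm2(list[0]);
                for (const Vec& v : list) best = std::min(best, norm2(v));
                std::cout << "rank " << list.size()
                          << " stabilized, shortest squared norm so far: " << best << "\n";
            }
        }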

    An approach to task-based parallel programming for undergraduate students

    This paper presents the description of a compulsory parallel programming course in the bachelor degree in Informatics Engineering at the Barcelona School of Informatics, Universitat Politècnica de Catalunya UPC-BarcelonaTech. The main focus of the course is on the shared-memory programming paradigm, which facilitates the presentation of fundamental aspects and notions of parallel computing. Unlike the "traditional" loop-based approach, which is the focus of parallel programming courses in other universities, this course presents the parallel programming concepts using a task-based approach. Tasking allows students to explore a broader set of parallel decomposition strategies, including linear, iterative and recursive strategies, and their implementation using the current version of OpenMP (OpenMP 4.5), which offers mechanisms (pragmas and intrinsic functions) to easily map these strategies into parallel programs. Simple models to understand the benefits of a task decomposition and the trade-offs introduced by different kinds of overheads are included in the course, together with the use of tools that allow an easy exploration of different task decomposition strategies and their potential parallelism (Tareador) and instrumentation and analysis of task parallel executions on real machines (Extrae and Paraver).
    This work has been supported by the grant SEV-2015-0493 of the Severo Ochoa Program, awarded by the Spanish Government, by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P) and by Generalitat de Catalunya (contracts 2014-MOOC-00057 and 2014-SGR-1051). We also thank the anonymous reviewers and editor for their comments during the review process, other professors that have been involved in the implementation of the course, and Paul Carpenter at BSC for his corrections and suggestions to improve the text. Postprint (published version)
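
    As a flavour of the task-based style described above, a recursive computation can be decomposed into OpenMP tasks, with a cut-off that limits task-creation overhead, one of the trade-offs the course models. This is a generic C++/OpenMP example, not taken from the course materials.

        // Generic OpenMP tasking example: a recursive computation decomposed
        // into tasks, with a cut-off below which the recursion runs serially to
        // keep task-creation overhead in check.  Compile with: g++ -fopenmp
        #include <cstdio>

        long fib(int n, int cutoff) {
            if (n < 2) return n;
            if (n < cutoff)                          // too small: not worth a task
                return fib(n - 1, cutoff) + fib(n - 2, cutoff);
            long x, y;
            #pragma omp task shared(x)               // child task for fib(n-1)
            x = fib(n - 1, cutoff);
            #pragma omp task shared(y)               // child task for fib(n-2)
            y = fib(n - 2, cutoff);
            #pragma omp taskwait                     // wait for both children
            return x + y;
        }

        int main() {
            long result = 0;
            #pragma omp parallel
            #pragma omp single                       // one thread seeds the task tree
            result = fib(30, 20);
            std::printf("fib(30) = %ld\n", result);
        }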

    Topics in Lattice Sieving

