
    On the Analysis of Public-Key Cryptologic Algorithms

    The RSA cryptosystem, introduced in 1977 by Ron Rivest, Adi Shamir and Len Adleman, is the most commonly deployed public-key cryptosystem. Elliptic curve cryptography (ECC), introduced in the mid-1980s by Neal Koblitz and Victor Miller, is becoming an increasingly popular alternative to RSA, offering competitive performance due to the use of smaller key sizes. Most recently, hyperelliptic curve cryptography (HECC) has been demonstrated to have comparable, and in some cases better, performance than ECC. The security of RSA relies on the integer factorization problem, whereas the security of (H)ECC is based on the (hyper)elliptic curve discrete logarithm problem ((H)ECDLP). In this thesis, the practical performance of the best methods to solve these problems is analyzed, and a method to generate secure ephemeral ECC parameters is presented. The best publicly known algorithm to solve the integer factorization problem is the number field sieve (NFS). Its most time-consuming step is the relation collection step. We investigate the use of graphics processing units (GPUs) as accelerators for this step. In this context, methods to efficiently implement modular arithmetic and several factoring algorithms on GPUs are presented, and their performance is analyzed in practice. In conclusion, it is shown that integrating state-of-the-art NFS software packages with our GPU software can lead to a speed-up of 50%. In the case of elliptic and hyperelliptic curves for cryptographic use, the best published method to solve the (H)ECDLP is the Pollard rho algorithm. This method can be made faster using equivalence classes induced by curve automorphisms such as the negation map. We present a practical analysis of their use to speed up Pollard rho for elliptic curves and genus-2 hyperelliptic curves defined over prime fields. As a case study, four curves at the 128-bit theoretical security level are analyzed in our software framework for Pollard rho to estimate their practical security level.
In addition, we present a novel many-core architecture to solve the ECDLP using the Pollard rho algorithm with the negation map on FPGAs. This architecture is used to estimate the cost of solving the Certicom ECCp-131 challenge with a cluster of FPGAs. Our design achieves a speed-up factor of about 4 compared to the state of the art. Finally, we present an efficient method to generate unique, secure and unpredictable ephemeral ECC parameters to be shared by a pair of authenticated users for a single communication. It provides an alternative to the customary use of fixed ECC parameters obtained from publicly available standards designed by untrusted third parties. The effectiveness of our method is demonstrated with a portable implementation for regular PCs and Android smartphones. On a Samsung Galaxy S4 smartphone, our implementation generates unique 128-bit secure ECC parameters in 50 milliseconds on average.
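The Pollard rho method analyzed in this thesis is generic: it applies to any cyclic group in which discrete logarithms are hard. As a minimal illustration of the idea only (not the thesis's elliptic curve, negation map or FPGA machinery), the sketch below runs Pollard rho with Floyd cycle detection in a toy prime-order multiplicative subgroup; the parameters p, q and g are invented for this example.

```python
# Pollard rho for discrete logarithms in a prime-order subgroup.
# Illustrative sketch only: the thesis targets (hyper)elliptic curve
# groups; here a multiplicative group mod p is used for simplicity.
import random

def pollard_rho_dlog(g, h, p, q, seed=1):
    """Find d with g^d = h (mod p), where g has prime order q."""
    rng = random.Random(seed)

    def step(x, a, b):
        # Pseudo-random walk: x = g^a * h^b, branch chosen by x mod 3.
        s = x % 3
        if s == 0:
            return (x * x) % p, (2 * a) % q, (2 * b) % q
        if s == 1:
            return (x * g) % p, (a + 1) % q, b
        return (x * h) % p, a, (b + 1) % q

    while True:
        a, b = rng.randrange(q), rng.randrange(q)
        x = pow(g, a, p) * pow(h, b, p) % p
        X, A, B = x, a, b
        # Floyd cycle detection: tortoise moves 1 step, hare moves 2.
        while True:
            x, a, b = step(x, a, b)
            X, A, B = step(*step(X, A, B))
            if x == X:
                break
        if (B - b) % q != 0:
            # Collision: a + b*d == A + B*d (mod q), solve for d.
            return (a - A) * pow(B - b, -1, q) % q
        # Degenerate collision; retry from a new random starting point.

p, q, g = 1019, 509, 4          # toy parameters: g has prime order q mod p
h = pow(g, 123, p)              # instance with known answer d = 123
d = pollard_rho_dlog(g, h, p, q)
```

The expected running time is about sqrt(pi * q / 2) group operations; equivalence classes such as the negation map shrink the effective search space and hence the constant in this estimate.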

    Optimization of Supersingular Isogeny Cryptography for Deeply Embedded Systems

    Public-key cryptography in use today can be broken by a quantum computer with sufficient resources. Microsoft Research has published an open-source library of quantum-secure supersingular isogeny (SI) algorithms, including Diffie-Hellman key agreement and key encapsulation, in portable C and optimized x86 and x64 implementations. For our research, we modified this library to target a deeply embedded processor with instruction set extensions and a finite-field coprocessor originally designed to accelerate traditional elliptic curve cryptography (ECC). Using the instruction set extensions, we observed a 6.3-7.5x improvement over a portable C implementation, and a further 6.0-6.1x improvement with the addition of the coprocessor. Widening the coprocessor datapath increased performance by a further 2.6-2.9x. Our results show that current traditional ECC implementations can be easily refactored to use supersingular elliptic curve arithmetic and achieve post-quantum security.

    Cofactorization on Graphics Processing Units

    We show how the cofactorization step, a compute-intensive part of the relation collection phase of the number field sieve (NFS), can be farmed out to a graphics processing unit. Our implementation on a GTX 580 GPU, which is integrated with a state-of-the-art NFS implementation, can serve as a cryptanalytic co-processor for several Intel i7-3770K quad-core CPUs simultaneously. This allows those processors to focus on the memory-intensive sieving and results in more useful NFS relations found in less time.
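Cofactorization attempts to fully factor the moderately sized cofactors that survive sieving, typically with methods such as Pollard rho, Pollard p-1 and ECM. As a hedged illustration of the simplest of these (not the paper's GPU implementation), here is Pollard's rho factoring method with Floyd cycle detection on a small composite:

```python
# Pollard's rho factoring method: sketch only, on a toy composite.
# A real cofactorization pipeline runs batches of such attempts on
# many cofactors in parallel.
import math

def rho_factor(n, c=1, x0=2):
    """Try to split n using the iteration f(x) = x^2 + c (mod n)."""
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)  # collision mod a factor reveals it
    return d if d != n else None     # None: failure; retry with new c or x0

n = 8051                             # toy composite: 83 * 97
p = rho_factor(n)
```

The iterates collide modulo an unknown prime factor of n long before they collide modulo n itself, which is why the gcd exposes a nontrivial divisor; each attempt costs roughly n**(1/4) multiplications for the smallest factor.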

    Summary of research in applied mathematics, numerical analysis, and computer sciences

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

    LUSA: the HPC library for lattice-based cryptanalysis

    This paper introduces LUSA - the Lattice Unified Set of Algorithms library - a C++ library that comprises many high-performance, parallel implementations of lattice algorithms, with particular focus on lattice-based cryptanalysis. Currently, LUSA offers algorithms for lattice reduction and the SVP. LUSA was designed to 1) be simple to install and use, 2) have no external dependencies, 3) be built specifically for lattice-based cryptanalysis, covering the majority of the most relevant algorithms in the field, and 4) offer efficient, parallel and scalable implementations of those algorithms. LUSA exploits parallelism mainly at the thread level, being based on OpenMP. However, the code is also written to be efficient at the cache and operation level, taking advantage of carefully sorted data structures and data-level parallelism. This paper shows that LUSA delivers on these promises by being simple to use while consistently outperforming its counterparts, such as NTL, plll and fplll, and by offering scalable, parallel implementations of the most relevant algorithms to date, which are currently not available in other libraries.
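Lattice reduction of the kind LUSA implements can be illustrated with the classic LLL algorithm. The sketch below is a textbook LLL in Python with exact rational arithmetic; it is not LUSA's C++/OpenMP code and is meant only to show what "reducing" a basis means: repeatedly size-reduce each vector against its predecessors and swap adjacent vectors whenever the Lovász condition fails.

```python
# Textbook LLL lattice reduction (delta = 3/4), using exact rationals.
# Recomputing the full Gram-Schmidt data after every change is simple
# but slow; production libraries update it incrementally instead.
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bs[j])]
            bs.append(v)
        return bs, mu

    bs, mu = gram_schmidt()
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b[k]
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bs, mu = gram_schmidt()
        # Lovász condition on the Gram-Schmidt vectors
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]   # swap and step back
            bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

# Reducing a skewed 2D basis recovers the short vectors (1,0) and (0,2).
r = lll([[1, 2], [3, 4]])
```

Floating-point variants of this loop, plus BKZ-style block reduction and parallel SVP enumeration or sieving, are the kinds of algorithms a cryptanalysis library like LUSA provides at scale.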

    Optimization Capabilities for Crushing Plants

    Responsible production and minimal consumption of resources are becoming competitive factors in industry. The aggregates and minerals processing industries consist of multiple heavily mechanized industrial processes handling large volumes of materials, and they are energy-intensive. One such process is a crushing plant operation, consisting of rock size reduction (comminution) and particle size separation (classification) processes. The objective of crushing plant operation in the aggregates industry is to supply specific size fractions of rock material for infrastructure development, while the objective in minerals processing is to maximize ore throughput below a target size fraction for the subsequent process. The operation of a crushing plant is complex and subject to variability during operation, which drives the development of optimization functionality. Process knowledge and understanding are needed to make proactive decisions that enable operations to maintain and elevate performance levels. To examine the complex relationships and interdependencies of the physical processes of crushing plants, a simulation platform can be used at the design stage. Process simulation for crushing plants can be classified as either steady-state simulation or dynamic simulation. Steady-state simulation models are based on instantaneous mass balancing, while dynamic simulation models can capture process change over time due to non-ideal operating conditions. Both simulation types can replicate process performance at different fidelities for industrial applications but are limited in application for everyday operation. Most companies operating crushing plants are equipped with digital data-collection systems capturing continuous production data such as mass flow and power draw. This production data is still not utilized to its full potential in the daily decision-making process.
There are opportunities to integrate optimization functions with the simulation platform and digital data platforms to create decision-making functionality for everyday operation in a crushing plant. This thesis presents a multi-layered modular framework for the development of optimization capabilities in a crushing plant, aimed at achieving process optimization and process improvements. The optimization capabilities for crushing plants comprise a system solution with a two-fold application: 1) utilizing the simulation platform for identification and exploration of operational settings, based on the stakeholder's needs, to generate knowledge about the process operation; 2) assuring the reliability of the equipment model and production data to create validated process simulations that can be utilized for process optimization and performance improvements. During the iterative development work, multiple optimization methods, such as multi-objective optimization (MOO) and multi-disciplinary optimization (MDO), are applied for process optimization. An adaptation of the ISO 22400 standard for the aggregates production process is performed and applied in dynamic simulations of crushing plants. A detailed optimization method for calibration and validation of process simulation and production data, especially mass flow data, is presented. Standard optimization problem formulations for each of the applications are demonstrated, which is essential for the replicability of the application. The proposed framework sets out the challenge of future development of a large-scale integrated digital solution for realizing the potential of production data, simulation, and optimization. In conclusion, optimization capabilities are essential for the modernization of the decision-making process in crushing plant operations.