
    A comparison of different finite fields for elliptic curve cryptosystems

    We examine the relative efficiency of four methods for finite field representation in the context of elliptic curve cryptography (ECC). We conclude that a class of fields known as optimized extension fields (OEFs) gives better performance, even when used with affine coordinates, than the type of fields recommended in the emerging ECC standards. This performance advantage is only marginal, however, so there is probably no need to change the current standards to allow OEFs in standards-compliant implementations.
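
    As a rough illustration of what makes OEFs attractive, the sketch below implements toy GF(p^2) arithmetic where the subfield prime has the pseudo-Mersenne form p = 2^n - c, so reduction modulo p needs only shifts and small multiplies, and the extension is defined by an irreducible binomial x^2 - omega. All parameters here are illustrative choices, not the ones benchmarked in the paper.

```python
# Toy sketch of Optimized Extension Field (OEF) arithmetic in GF(p^2).
# Illustrative parameters only: p = 2^8 - 17 = 239 (pseudo-Mersenne prime),
# extension defined by x^2 - OMEGA with OMEGA a quadratic non-residue.

N, C = 8, 17
P = (1 << N) - C                      # subfield prime p = 2^n - c

# For degree 2, x^2 - w is irreducible iff w is a quadratic non-residue.
OMEGA = next(w for w in range(2, P) if pow(w, (P - 1) // 2, P) != 1)

def red_p(a):
    """Reduce modulo p = 2^n - c without division:
    a = hi*2^n + lo  implies  a ≡ hi*c + lo (mod p)."""
    while a >> N:
        a = (a >> N) * C + (a & ((1 << N) - 1))
    return a - P if a >= P else a

def mul(a, b):
    """Product of a = a0 + a1*x and b = b0 + b1*x, reducing x^2 to OMEGA."""
    t0 = a[0] * b[0]
    t1 = a[0] * b[1] + a[1] * b[0]
    t2 = a[1] * b[1]
    return (red_p(t0 + red_p(t2) * OMEGA), red_p(t1))

print("omega =", OMEGA, " product =", mul((5, 200), (123, 45)))
```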

    Aspects of hardware methodologies for the NTRU public-key cryptosystem

    Cryptographic algorithms that take into account requirements for varying levels of security and reduced power consumption in embedded devices are now receiving additional attention. The NTRUEncrypt algorithm has been shown to provide certain advantages when designing low-power and resource-constrained systems, while still providing security levels comparable to higher-complexity algorithms. The research presented in this thesis starts with an examination of the general NTRUEncrypt system, followed by a more practical examination with respect to the IEEE 1363.1 draft standard. In contrast to previous research, the focus shifts away from specific optimizations and toward a study of many of the recommended practices and suggested optimizations, with particular emphasis on polynomial arithmetic and parameter selection. Various methods are examined for storing, inverting, and multiplying the polynomials used in the system. Recommendations for algorithm and parameter selection are made regarding implementation in software and hardware with respect to the resources available. Although the underlying mathematical principles have not been significantly questioned, stable recommended practices are still being developed for the NTRUEncrypt system. As a further complication, recommended optimizations have come from various researchers and have been split between hardware and software implementations. In this thesis, a generic VHDL model is presented, based on the IEEE 1363.1 draft standard, which is designed for adaptation to software or hardware implementation while providing flexibility for changes in recommended practices.
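
    For readers unfamiliar with the polynomial arithmetic at issue, the sketch below shows the cyclic convolution ("star") multiplication in the NTRU ring Z_q[x]/(x^N - 1), the operation that dominates NTRUEncrypt's cost and whose storage and computation strategies the thesis studies. The parameters are toy values for illustration; real parameter sets come from the IEEE 1363.1 draft standard.

```python
# Toy sketch of NTRU "star" multiplication in Z_q[x]/(x^N - 1):
# exponents wrap modulo N, coefficients are reduced modulo q.
# N and Q below are illustrative, far smaller than real parameter sets.

N, Q = 11, 32

def star(a, b):
    """Cyclic convolution of two length-N coefficient lists, mod Q."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai == 0:
            continue              # NTRU keys are sparse ternary polynomials
        for j, bj in enumerate(b):
            k = (i + j) % N       # x^N wraps around to 1
            c[k] = (c[k] + ai * bj) % Q
    return c

f = [1, -1, 0, 0, 1, 0, -1, 0, 0, 1, 0]   # sparse ternary polynomial
g = list(range(N))                         # arbitrary dense polynomial
print(star(f, g))
```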

    High-Performance VLSI Architectures for Lattice-Based Cryptography

    Lattice-based cryptography is a family of cryptographic primitives built upon hard problems on point lattices. Cryptosystems relying on lattice-based cryptography have attracted considerable attention in the last decade because they offer post-quantum security and remarkably versatile constructions. In particular, homomorphic encryption (HE) and post-quantum cryptography (PQC) are the two main applications of lattice-based cryptography. Meanwhile, efficient hardware implementations of these advanced cryptographic schemes are in demand to achieve high performance. This dissertation investigates novel, high-performance very large-scale integration (VLSI) architectures for lattice-based cryptography, covering both HE and PQC schemes. The dissertation first presents different architectures for number-theoretic transform (NTT)-based polynomial multiplication, a crucial part of the fundamental arithmetic of lattice-based HE and PQC schemes. Then a high-speed modular integer multiplier is proposed, tailored to lattice-based cryptography. In addition, a novel modular polynomial multiplier is presented that exploits the fast finite impulse response (FIR) filter architecture to reduce the computational complexity of schoolbook modular polynomial multiplication for lattice-based PQC schemes. Afterward, an NTT and Chinese remainder theorem (CRT)-based high-speed modular polynomial multiplier is presented for HE schemes whose moduli are large integers.
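
    To make the NTT-based multiplication concrete, here is a minimal software sketch of the transform / pointwise-multiply / inverse-transform flow that such VLSI architectures accelerate. The parameters (q = 17, n = 8, omega = 9) are toy values; real HE and PQC schemes use much larger moduli and typically negacyclic variants.

```python
# Toy sketch of NTT-based polynomial multiplication in Z_q[x]/(x^n - 1).
Q, W = 17, 9          # q ≡ 1 (mod n); 9 is a primitive 8th root of unity mod 17

def ntt(a, w):
    """Recursive radix-2 Cooley-Tukey NTT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], w * w % Q)
    odd = ntt(a[1::2], w * w % Q)
    out, t = [0] * n, 1
    for k in range(n // 2):
        out[k] = (even[k] + t * odd[k]) % Q
        out[k + n // 2] = (even[k] - t * odd[k]) % Q
        t = t * w % Q
    return out

def cyclic_mul(a, b):
    """a * b in Z_q[x]/(x^n - 1): O(n log n) instead of schoolbook O(n^2)."""
    fc = [x * y % Q for x, y in zip(ntt(a, W), ntt(b, W))]
    inv_w, inv_n = pow(W, Q - 2, Q), pow(len(a), Q - 2, Q)
    return [x * inv_n % Q for x in ntt(fc, inv_w)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(cyclic_mul(a, b))   # [5, 16, 0, 1, 11, 11, 0, 0], the product mod 17
```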

    Bank-interleaved cache or memory indexing does not require Euclidean division

    Concurrent access to bank-interleaved memory structures has been studied for decades, particularly in the context of vector supercomputers. It is still a common belief that using a number of banks different from 2^n requires inserting complex hardware, including a non-trivial divider, on the access path to memory. In 1993, two independent studies [1], [2] showed that, by leveraging a very simple arithmetic result, the Chinese Remainder Theorem, this Euclidean division is not needed when the number of banks is prime or simply odd. In the mid-1990s, interest in vector supercomputers faded and the research topic disappeared. Interest in bank-interleaved caches has recently reappeared in the GPU context [3]. In this short paper, we extend the result from [1] and show that, regardless of the number of banks, bank-interleaved cache or memory indexing does not require Euclidean division.
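
    The classical observation can be demonstrated in a few lines. If the number of banks B is odd and each bank holds R = 2^k lines, then gcd(B, R) = 1, and by the Chinese Remainder Theorem the map addr -> (addr mod B, addr mod R) is a bijection on {0, ..., B*R - 1}: the bank index is a small modulo and the in-bank line is just the low k bits, with no Euclidean division anywhere. The sketch below (with illustrative sizes) checks this bijectivity; it renders the prior result the paper extends, not the paper's new construction for arbitrary bank counts.

```python
# CRT-based interleaving check: B odd banks, R = 2^k lines per bank.
# addr -> (addr % B, low k bits) is one-to-one over B*R addresses,
# so no division addr // B is needed on the access path.

B, K = 7, 4                    # illustrative: 7 banks, 16 lines per bank
R = 1 << K

def index(addr):
    bank = addr % B            # small modulo (cheap in hardware)
    line = addr & (R - 1)      # just the low bits: no division
    return bank, line

seen = {index(a) for a in range(B * R)}
assert len(seen) == B * R      # bijective because gcd(B, R) = 1
print("bijective for", B, "banks x", R, "lines")
```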

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel computer algebra algorithms. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call 'repeated computation with a possibility of premature termination'. We also introduce a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms, for which our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language, and it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms.

    Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components: one accounts for the quality of the parallelisation, which we call the 'parallel penalty'; the other is the sequential execution time. For the estimation, we predict both components separately using statistical methods. This yields very confident estimations while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis, and we have also used existing estimation methods.

    We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm, and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen's algorithm, we designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform, we not only used the new divide and conquer skeletons but also developed a map-and-transpose skeleton, which enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.

    This thesis further presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm, and workpool, with a premature termination property. We use this to implement the so-called 'parallel repeated computation', a special form of a speculative parallel loop. We have implemented two probabilistic primality tests, the Rabin–Miller test and the Jacobi sum test, and parallelised both with our approach. We analysed the task distribution and identified fitting configurations for the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel; subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We also estimated the performance of the tests for further input sizes and numbers of processing elements.
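
    The thesis implements its skeletons in Eden, a parallel Haskell dialect; as a language-neutral illustration of the idea, the following sequential Python sketch separates the generic divide and conquer pattern from the problem-specific parts and instantiates it with Karatsuba multiplication. In Eden the recursive calls would be handed to remote processes; here they are plain recursion, and all names are our own, not the thesis's API.

```python
# Sequential sketch of a divide-and-conquer skeleton, instantiated with
# Karatsuba integer multiplication.  In an Eden skeleton the recursive
# calls would run as parallel processes; here they are ordinary recursion.

def dc_skeleton(trivial, solve, divide, combine, x):
    """Generic divide and conquer: the four function parameters carry
    all problem-specific knowledge."""
    if trivial(x):
        return solve(x)
    results = [dc_skeleton(trivial, solve, divide, combine, s)
               for s in divide(x)]
    return combine(x, results)

def karatsuba(u, v):
    """Multiply non-negative integers with three recursive sub-products."""
    def split_at(p):
        return max(p[0].bit_length(), p[1].bit_length()) // 2
    def trivial(p):
        return p[0] < 2**16 or p[1] < 2**16
    def solve(p):
        return p[0] * p[1]
    def divide(p):
        m = split_at(p)
        uh, ul = p[0] >> m, p[0] & ((1 << m) - 1)
        vh, vl = p[1] >> m, p[1] & ((1 << m) - 1)
        return [(uh, vh), (ul, vl), (uh + ul, vh + vl)]   # 3 products, not 4
    def combine(p, res):
        m, (high, low, mid) = split_at(p), res
        return (high << (2 * m)) + ((mid - high - low) << m) + low
    return dc_skeleton(trivial, solve, divide, combine, (u, v))

x, y = 2**100 + 12345, 2**90 + 67890
assert karatsuba(x, y) == x * y
```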
    The parallelisation of the Jacobi sum test and our generic parallelisation scheme for repeated computation are our original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handle common factors between the numerator or denominator of a fraction and the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we parallelised determinant computation using Gauß elimination. As always, we performed a task distribution analysis and an estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
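
    As an illustration of the integer core of such a data parallel arithmetic (the thesis's rational extension and its novel treatment of common factors with the modulus is its own contribution and is not reproduced here), the sketch below holds a value as residues modulo several coprime primes, performs arithmetic componentwise, each residue channel independent and hence parallelisable, and reconstructs the result once via the Chinese Remainder Theorem.

```python
# Multiple-residue (RNS) arithmetic sketch: values live as residues modulo
# pairwise coprime primes; add/mul act channel by channel (data parallel),
# and the Chinese Remainder Theorem recovers the integer result.
# The thesis's extension to rationals is NOT reproduced here.

MODULI = (10007, 10009, 10037, 10039)      # pairwise coprime primes

def to_residues(x):
    return tuple(x % m for m in MODULI)

def add(a, b):
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, MODULI))

def from_residues(r):
    """CRT reconstruction; valid while the result stays below prod(MODULI)."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x = (x + ri * Mi * pow(Mi, m - 2, m)) % M   # Fermat inverse: m prime
    return x

a, b = 123456, 789012
assert from_residues(mul(to_residues(a), to_residues(b))) == a * b
print("ok:", a * b)
```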