    An Analysis of Primality Testing and Its Use in Cryptographic Applications

    Computationally efficient search for large primes

    To keep pace with the speed of communication and to meet the demand for ever larger prime numbers, primality testing and prime number generation algorithms require continuous advancement. Finding the most efficient algorithm calls for a survey of the available methods and an analysis of their performance. The critical criteria in analyzing prime number generation are the number of probes, the number of generated primes, and the average time required to produce one prime. Hence, the purpose of this thesis is to identify the best-performing algorithm. Surveying the methods, establishing comparison criteria, and comparing the approaches are the required steps toward that goal. In the first step the methods were surveyed and classified using the approach described in Menezes [66]: chapter 2 sorts, describes, compares, and summarizes primality testing methods, while chapter 3 does the same for prime number generation methods. In the next step, computer programs implementing the selected algorithms were written using a uniform technique. The programs were installed on a Unix operating system running on a Sun 5.8 server to perform the computer experiments. The experimental results for the selected algorithms provided the parameters required to compare their performance, and the results were tabulated to indicate the best-performing algorithm. The survey indicated that deterministic and randomized methods are the main approaches to prime number generation. Randomly generated primes find application in cryptographic key generation, while a need for deterministically generated provable primes has emerged in encryption, decryption, and other cryptographic areas. The performance analysis indicated that primes generated through randomized techniques required a smaller number of probes, because such methods eliminate non-primes in an initial step by pre-testing randomly generated candidates for small divisors; a smaller number of probes increases an algorithm's efficiency. Further analysis indicated that the ratio of randomly generated primes to the expected number of primes in a given interval is smaller than the corresponding ratio for deterministically generated primes. In this comparison the Miller-Rabin and Gordon algorithms, which generate primes randomly, were compared against the SFA and Sequences Containing Primes algorithms; the Sequences Containing Primes algorithm is abbreviated in this thesis as 6kseq. In the interval [99000, 100000] the Miller-Rabin method generated 57 out of 87 expected primes, while the SFA algorithm generated 83 out of 87. The expected number of primes was computed using the approximation n/ln(n) presented by Menezes [66]. The average time to produce one prime in the [99000, 100000] interval was 0.056 s for the Miller-Rabin test, 0.0001 s for SFA, and 0.0003 s for 6kseq. Gordon's algorithm required 100578 probes in the interval [1, 100000] and generated 32 out of 8686 expected primes.
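
    As a rough, non-authoritative illustration of the kind of interval comparison described above, the sketch below counts the primes in [99000, 100000] with a plain sieve of Eratosthenes, re-counts them with a Miller-Rabin test, and evaluates the n/ln(n) estimate. It stands in for, but is not, the thesis's SFA, 6kseq, or random-candidate generation implementations.

    import math
    import random

    def miller_rabin(n, rounds=20):
        """Probabilistic primality test (strong pseudoprime test to random bases)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def sieve_count(lo, hi):
        """Count primes in [lo, hi] with a plain sieve of Eratosthenes."""
        sieve = bytearray([1]) * (hi + 1)
        sieve[0:2] = b"\x00\x00"
        for i in range(2, int(hi ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(range(i * i, hi + 1, i)))
        return sum(sieve[lo : hi + 1])

    lo, hi = 99000, 100000
    expected = hi / math.log(hi) - lo / math.log(lo)   # n/ln(n) estimate for the interval
    mr_count = sum(miller_rabin(n) for n in range(lo, hi + 1))
    print(f"expected ~ {expected:.0f}, sieve = {sieve_count(lo, hi)}, Miller-Rabin = {mr_count}")
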
    The Parametric Representation of Composite Twins and Generation of Prime and Quasi Prime Numbers algorithm, invented by Doctor Verkhovsky [108], verifies and generates primes and quasi primes using special mathematical constructs. This algorithm showed the best performance in the interval [1, 1000], generating and verifying 3585 variants of provable primes or quasi primes and consuming an average of 0.0022315 s per prime or quasi prime. Because its method verifies both primes and quasi primes, it cannot be compared directly with the other primality testing or prime number generation algorithms. The ((a!)^2)*((-1)^b) Function in Generating Primes algorithm [105], also developed by Doctor Verkhovsky, was compared against the extended Fermat algorithm: in the range [1, 1000] the [105] algorithm consumed an average of 0.00001 s per prime and produced 167 primes, while the extended Fermat algorithm also produced 167 primes but consumed an average of 0.00599 s per prime. Thus, the computer experiments and the comparison of methods showed that the SFA algorithm is a deterministic algorithm that produces provable primes. The survey of methods and the analysis of the selected approaches indicated that the SFA sieve algorithm, which generates primes sequentially, is computationally efficient and performed best with respect to computational speed, simplicity of method, and the number of primes generated in the specified intervals.
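
    The "extended Fermat algorithm" used as the baseline above is not specified in this excerpt; for orientation only, a basic Fermat probable-prime test (the idea such a variant presumably builds on) is sketched below. Neither the ((a!)^2)*((-1)^b) construction [105] nor the thesis's extended Fermat implementation is reproduced here.

    import random

    def fermat_probable_prime(n, rounds=10):
        """Return True if n passes a^(n-1) == 1 (mod n) for several random bases a."""
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False        # a is a Fermat witness, so n is composite
        return True                 # probably prime (Carmichael numbers can fool this test)

    # Count the Fermat probable primes found in [1, 1000].
    print(sum(fermat_probable_prime(n) for n in range(1, 1001)))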

    Grained integers and applications to cryptography

    To meet the requirements of the modern communication society, cryptographic techniques are of central importance. In modern cryptography, we try to build cryptographic primitives whose security can be reduced to solving a particular number-theoretic problem for which no fast algorithmic method is currently known. Thus, any advance in the understanding of the nature of such problems indirectly gives insight into the analysis of some of the most practical cryptographic techniques. In this work we analyze exactly this aspect much more deeply: how can we use some of the purely theoretical results in number theory to answer very practical questions on the security of widely used cryptographic algorithms, and how can we use such results in concrete implementations? While trying to answer these kinds of security-related questions, we always think two-fold: from a cryptographic, security-ensuring perspective and from a cryptanalytic one. After outlining -- with a special focus on the historical development of these results -- the necessary analytic and algorithmic foundations of number theory, we first delve into the question of how point addition on certain elliptic curves can be done efficiently. The resulting formulas find application in the cryptanalysis of cryptosystems that are insecure if factoring integers can be done efficiently. The rest of the thesis is devoted to the study of integers all of whose prime factors are neither too small nor too large. We show with the help of two applications how one can use the properties of such integers to answer very practical questions in the design and the analysis of cryptographic primitives: the optimization of a hardware realization of the cofactorization step of the General Number Field Sieve and the analysis of different standardized key-generation algorithms.
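
    The thesis derives efficient addition formulas for particular curve shapes; as background only, the sketch below shows the textbook affine point addition on a short Weierstrass curve y^2 = x^3 + ax + b over a prime field. The curve parameters and the base point are placeholders chosen for the example, not values from the thesis.

    def ec_add(P, Q, a, p):
        """Add points P and Q (None = point at infinity) on y^2 = x^3 + ax + b over F_p."""
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                             # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p        # tangent slope (doubling)
        else:
            lam = (y2 - y1) * pow((x2 - x1) % p, -1, p) % p         # chord slope (addition)
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    # Placeholder curve y^2 = x^3 + 2x + 3 over F_97, with P = (3, 6) on the curve.
    p, a, b = 97, 2, 3
    P = (3, 6)
    assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0
    print(ec_add(P, P, a, p))   # doubling P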

    The Factorization of the Ninth Fermat Number

    Error Detection in Number-Theoretic and Algebraic Algorithms

    CPUs are unreliable: at any point in a computation, a bit may be altered with some (small) probability. This probability may seem negligible, but for large calculations (i.e., months of CPU time), the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness and measures the robustness of several number-theoretic and algebraic algorithms. Consider an algorithm A that implements a function f, such that f has range O and algorithm A has range O', where O⊆O'. That is, the algorithm may produce results that are not in the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories: 1. correct and feasible -- the algorithm computes the correct result; 2. incorrect and feasible -- the algorithm computes an incorrect result and this output is in O; 3. incorrect and infeasible -- the algorithm computes an incorrect result and the output is in O'\O. Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for computing primality (i.e., the Lucas-Lehmer and Pepin tests), group order, and quadratic residues. Moreover, we show that typically there will be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
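
    One of the algorithms whose robustness is measured is the Lucas-Lehmer test; a minimal sketch of the standard test for Mersenne numbers M_p = 2^p - 1 is given below, together with a single artificially injected bit flip to show how a transient error can silently change the verdict. The fault-injection point is an arbitrary illustrative choice, not the error model used in the thesis.

    def lucas_lehmer(p, flip_bit=None):
        """Return True iff M_p = 2^p - 1 is prime, for an odd prime exponent p."""
        m = (1 << p) - 1
        s = 4
        for i in range(p - 2):
            if flip_bit is not None and i == flip_bit[0]:
                s ^= 1 << flip_bit[1]        # simulate a transient single-bit error
            s = (s * s - 2) % m
        return s == 0

    print(lucas_lehmer(13))                     # M_13 = 8191 is prime
    print(lucas_lehmer(13, flip_bit=(5, 3)))    # same input with one injected fault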

    Book Review: Inevitable randomness in discrete mathematics

    Accelerating the CM method

    Given a prime q and a negative discriminant D, the CM method constructs an elliptic curve E/F_q by obtaining a root of the Hilbert class polynomial H_D(X) modulo q. We consider an approach based on a decomposition of the ring class field defined by H_D, which we adapt to a CRT setting. This yields two algorithms, each of which obtains a root of H_D mod q without necessarily computing any of its coefficients. Heuristically, our approach uses asymptotically less time and space than the standard CM method for almost all D. Under the GRH, and reasonable assumptions about the size of log q relative to |D|, we achieve a space complexity of O((m+n) log q) bits, where mn = h(D), which may be as small as O(|D|^(1/4) log q). The practical efficiency of the algorithms is demonstrated using |D| > 10^16 and q ~ 2^256, and also |D| > 10^15 and q ~ 2^33220. These examples are both an order of magnitude larger than the best previous results obtained with the CM method. (To appear in the LMS Journal of Computation and Mathematics.)
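
    As background for the standard CM method the paper improves on, the toy sketch below handles only the easiest case: a class-number-one discriminant, where H_D(X) is linear so its root j mod q is immediate, and the curve is then built from j via the usual k = j/(1728 - j) construction. The example prime is a placeholder, the sketch does not check the CM precondition that 4q = t^2 + |D|v^2 has a solution, and the paper's actual contribution (finding a root of H_D mod q via a ring class field decomposition in a CRT setting) is not reproduced here.

    # j-invariants (roots of H_D) for a few discriminants with class number h(D) = 1.
    J_INVARIANT = {-3: 0, -4: 1728, -7: -3375, -8: 8000, -11: -32768}

    def cm_curve(D, q):
        """Return (a, b) so that E: y^2 = x^3 + ax + b over F_q has j-invariant equal to the root of H_D mod q."""
        j = J_INVARIANT[D] % q
        if j == 0:
            return (0, 1)                           # j = 0: y^2 = x^3 + 1
        if j == 1728 % q:
            return (1, 0)                           # j = 1728: y^2 = x^3 + x
        k = j * pow((1728 - j) % q, -1, q) % q      # standard j -> curve construction
        return (3 * k % q, 2 * k % q)

    print(cm_curve(-7, 1000003))    # placeholder prime q; splitting conditions on q are not checked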