
    Pseudopowers and primality proving

    It has been known since the 1930s that so-called pseudosquares yield a very powerful machinery for the primality testing of large integers N. In fact, assuming reasonable heuristics (which have been confirmed for numbers up to 2^80), this gives a deterministic primality test running in time O((lg N)^(3+o(1))), which many believe to be best possible. In the 1980s, D. H. Lehmer posed a question tantamount to whether this could be extended to pseudo r-th powers. Very recently, this was accomplished for r = 3; in fact, the results obtained indicate that r = 3 might lead to an even more powerful algorithm than r = 2. This naturally raises the question of whether, and how, anything can be achieved for r > 3. The extension from r = 2 to r = 3 relied on properties of the arithmetic of the Eisenstein ring of integers Z[\zeta_3], including the Law of Cubic Reciprocity. In this paper we present a generalization of our result to any odd prime r. The generalization is obtained by studying the properties of Gaussian and Jacobi sums in cyclotomic rings of integers, the tools from which the r-th power Eisenstein Reciprocity Law is derived, rather than from the law itself. While r = 3 seems to lead to a more efficient algorithm than r = 2, we show that extending to any r > 3 does not appear to lead to any further improvements.
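
    A minimal Python sketch of the pseudosquare test that this abstract builds on (the paper's r-th power generalisation is not reproduced here). The table of least pseudosquares L_p follows OEIS A002189, and the conditions follow one published form of the Lukes-Patterson-Williams criterion; both should be checked against the paper before any serious use.

        # Least pseudosquares L_p (OEIS A002189): the smallest non-square
        # n = 1 (mod 8) that is a quadratic residue modulo every odd prime <= p.
        PSEUDOSQUARES = {
            3: 73, 5: 241, 7: 1009, 11: 2641, 13: 8089, 17: 18001,
            19: 53881, 23: 87481, 29: 117049, 31: 515761, 37: 1083289,
        }
        SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

        def _is_proper_power(n):
            # True when n = m**k for some integers m > 1, k >= 2.
            for k in range(2, n.bit_length() + 1):
                m = round(n ** (1.0 / k))
                if any(c > 1 and c ** k == n for c in (m - 1, m, m + 1)):
                    return True
            return False

        def pseudosquare_prime_test(N):
            # Deterministic primality test, valid only for N below the table bound.
            if N < 2:
                return False
            p = next((q for q in SMALL_PRIMES[1:] if N < PSEUDOSQUARES[q]), None)
            if p is None:
                raise ValueError("N exceeds the tabulated pseudosquare bounds")
            bases = [q for q in SMALL_PRIMES if q <= p]
            for q in bases:                    # small prime factors and tiny N
                if N % q == 0:
                    return N == q
            if _is_proper_power(N):            # theorem proves "prime or prime power"
                return False
            e = (N - 1) // 2
            minus_one = False
            for q in bases:
                r = pow(q, e, N)
                if r == N - 1:
                    minus_one = True
                elif r != 1:
                    return False               # Euler's criterion fails: composite
            if N % 8 == 5:                     # extra sign conditions of the theorem
                return pow(2, e, N) == N - 1
            if N % 8 == 1:
                return minus_one
            return True

        # pseudosquare_prime_test(999983) -> True (999983 is prime)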

    On cyclotomic primality tests

    In 1980, L. Adleman, C. Pomerance, and R. Rumely invented the first cyclotomic primality test, and shortly after, in 1981, a simplified and more efficient version was presented by H. W. Lenstra for the Bourbaki Seminar. Later, in 2008, René Schoof presented an updated version of Lenstra's primality test. This thesis presents a detailed description of the cyclotomic primality test as described by Schoof, along with suggestions for implementation. The cornerstone of the test is a prime congruence relation similar to Fermat's little theorem that involves Gauss or Jacobi sums calculated over cyclotomic fields. The algorithm runs in very nearly polynomial time. This primality test is currently one of the most computationally efficient tests and is used by default for primality proving by the open source mathematics systems Sage and PARI/GP. It can quickly test numbers with thousands of decimal digits.
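
    The Fermat-like congruence at the heart of such tests can be illustrated in a few lines of Python. The sketch below works in the ring (Z/NZ)[x]/(x^p - 1), which contains the cyclotomic quotient the actual test uses: if N is prime, then a(zeta)^N = a(zeta^N) (mod N) for every integer-coefficient element a. The real Adleman-Pomerance-Rumely/Lenstra/Schoof test applies this kind of congruence to Gauss and Jacobi sums, which is what turns it into a proof of primality; the sketch checks only the necessary condition, and the function names are illustrative.

        def polmul(a, b, p, N):
            # Multiply in (Z/N)[x]/(x^p - 1); elements are coefficient
            # lists of length p for the powers zeta^0 .. zeta^(p-1).
            c = [0] * p
            for i, ai in enumerate(a):
                if ai:
                    for j, bj in enumerate(b):
                        c[(i + j) % p] = (c[(i + j) % p] + ai * bj) % N
            return c

        def polpow(a, e, p, N):
            # Square-and-multiply exponentiation in the same ring.
            result = [1] + [0] * (p - 1)
            while e:
                if e & 1:
                    result = polmul(result, a, p, N)
                a = polmul(a, a, p, N)
                e >>= 1
            return result

        def frobenius_check(a, p, N):
            # Necessary condition for primality: a(zeta)^N == a(zeta^N) (mod N).
            lhs = polpow(a, N, p, N)
            rhs = [0] * p
            for i, ai in enumerate(a):         # substitute zeta -> zeta^N
                rhs[(i * N) % p] = (rhs[(i * N) % p] + ai) % N
            return lhs == rhs

        # frobenius_check([1, 2, 0, 1, 0], 5, 101) -> True (101 is prime);
        # composites generally fail this check for most choices of a.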

    Computationally efficient search for large primes

    To satisfy the speed of communication and to meet the demand for ever larger prime numbers, primality testing and prime number generating algorithms require continuous advancement. To find the most efficient algorithm, a need for a survey of methods arises; concurrently, an urge for the analysis of the algorithms' performances emanates. The critical criteria in the analysis of prime number generation are the number of probes, the number of generated primes, and the average time required to produce one prime. Hence, the purpose of this thesis is to indicate the best performing algorithm. The survey of methods, the establishment of comparison criteria, and the comparison of approaches are the required steps to find the best performing algorithm. In the first step of this research, the methods were surveyed and classified using the approach described in Menezes [66]: chapter 2 sorts, describes, compares, and summarizes primality testing methods, while chapter 3 does the same for prime number generating methods. In the next step, applying a uniform technique, computer programs were written for the selected algorithms. The programs were installed on the Unix operating system, running on a Sun 5.8 server, to perform the computer experiments. The results of the computer experiments on the selected algorithms provided the parameters required to compare the algorithms' performances, and were tabulated to indicate the best performing algorithm.
    The survey of methods indicated that the deterministic and the randomized approaches are the two main approaches to prime number generation. Random number generation found application in cryptographic key generation; contemporaneously, a need for deterministically generated provable primes emerged in code encryption and decryption and in other cryptographic areas. The analysis of the algorithms' performances indicated that the primes generated through randomized techniques required a smaller number of probes, because the method eliminates non-primes in the initial step by pre-testing randomly generated candidates for small divisibility factors; the analysis indicated that a smaller number of probes increases an algorithm's efficiency. Further analysis indicated that the ratio of randomly generated primes to the expected number of primes in a specific interval is smaller than for deterministically generated primes. In this comparison, the Miller-Rabin and Gordon algorithms, which generate primes randomly, were compared against the SFA algorithm and the Sequences Containing Primes algorithm, abbreviated in this thesis as 6kseq. In the interval [99000, 100000] the Miller-Rabin method generated 57 out of 87 expected primes, while the SFA algorithm generated 83 out of 87; the expected number of primes was computed using the approximation n/ln(n) presented by Menezes [66]. The average time consumed to originate one prime in the [99000, 100000] interval was 0.056 [s] for the Miller-Rabin test, 0.0001 [s] for SFA, and 0.0003 [s] for 6kseq. The Gordon algorithm in the interval [1, 100000] required 100578 probes and generated 32 out of 8686 expected primes.
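
    For reference, the Miller-Rabin test that serves as the randomized baseline in this comparison fits in a few lines of Python; this is a standard textbook formulation, not the thesis's implementation.

        import random

        def miller_rabin(n, rounds=20):
            # Standard strong-pseudoprime test: a composite n passes a
            # single round with probability at most 1/4.
            if n < 2:
                return False
            for p in (2, 3, 5, 7, 11, 13):
                if n % p == 0:
                    return n == p
            d, s = n - 1, 0
            while d % 2 == 0:                  # write n - 1 = 2^s * d, d odd
                d //= 2
                s += 1
            for _ in range(rounds):
                a = random.randrange(2, n - 1)
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False               # a witnesses compositeness
            return True
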
    The algorithm Parametric Representation of Composite Twins and Generation of Prime and Quasi Prime Numbers, invented by Doctor Verkhovsky [108], verifies and generates primes and quasi primes using special mathematical constructs. This algorithm indicated the best performance in the interval [1, 1000], generating and verifying 3585 variants of provable primes or quasi primes at an average time per prime or quasi prime of 0.0022315 [s]. Because the method verifies both primes and quasi primes, which is unique among the surveyed approaches, this algorithm cannot be compared directly with the other primality testing or prime number generating algorithms. The ((a!)^2)*((-1)^b) Function in Generating Primes algorithm [105], also developed by Doctor Verkhovsky, was compared against the extended Fermat algorithm: in the range [1, 1000] the algorithm of [105] consumed an average of 0.00001 [s] per prime and originated 167 primes, while the extended Fermat algorithm also produced 167 primes but consumed an average of 0.00599 [s] per prime. Thus, the computer experiments and the comparison of methods showed that the SFA algorithm is a deterministic algorithm that originates provable primes. The survey of methods and the analysis of the selected approaches indicated that the SFA sieve algorithm, which sequentially generates primes, is computationally efficient, showing the best performance with respect to computational speed, simplicity of method, and the number of primes generated in the specified intervals.
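
    The 6kseq idea, that every prime greater than 3 has the form 6k +- 1, can be sketched as follows; this is a guess at the structure of the Sequences Containing Primes method, since the abstract does not give the exact algorithm. The closing comment also shows why n/ln(n) undercounts: the interval [1, 100000] actually contains 9592 primes, versus the 8686 "expected" above.

        from math import isqrt, log

        def primes_6k(limit):
            # Emit 2 and 3, then test only candidates of the form 6k +- 1
            # by trial division (such candidates are never divisible by 2 or 3).
            for p in (2, 3):
                if p <= limit:
                    yield p
            k = 1
            while 6 * k - 1 <= limit:
                for n in (6 * k - 1, 6 * k + 1):
                    if n <= limit and all(n % d for d in range(5, isqrt(n) + 1, 2)):
                        yield n
                k += 1

        # len(list(primes_6k(100000))) -> 9592
        # round(100000 / log(100000)) -> 8686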

    Algorithms in algebraic number theory

    In this paper we discuss the basic problems of algorithmic algebraic number theory. The emphasis is on aspects that are of interest from a purely mathematical point of view, and practical issues are largely disregarded. We describe what has been done and, more importantly, what remains to be done in the area. We hope to show that the study of algorithms not only increases our understanding of algebraic number fields but also stimulates our curiosity about them. The discussion concentrates on three topics: the determination of Galois groups, the determination of the ring of integers of an algebraic number field, and the computation of the group of units and the class group of that ring of integers.
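
    As a toy illustration of the second topic (not an algorithm from the paper): for quadratic fields the ring of integers is known classically, and the answer can be stated in a few lines of Python.

        def quadratic_ring_of_integers(d):
            # For K = Q(sqrt(d)) with d squarefree, d != 0, 1:
            # O_K = Z[(1 + sqrt(d))/2] if d = 1 (mod 4), else Z[sqrt(d)].
            if d % 4 == 1:
                return f"Z[(1 + sqrt({d}))/2], discriminant {d}"
            return f"Z[sqrt({d})], discriminant {4 * d}"

        # quadratic_ring_of_integers(-3) -> 'Z[(1 + sqrt(-3))/2], discriminant -3'
        # quadratic_ring_of_integers(2)  -> 'Z[sqrt(2)], discriminant 8'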

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    This thesis presents design and implementation approaches for parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for a special kind of parallel loop that we call ‘repeated computation with a possibility of premature termination’. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language, and it allows us to refrain from using two different languages (one for the implementation and one for the interface) for our implementation of computer algebra algorithms. Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components: one of them accounts for the quality of the parallelisation, and we call it the ‘parallel penalty’; the other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations while using drastically fewer measurement points than other methods. We have applied both our evaluation and our estimation approaches to the parallel programs presented in this thesis, and we have also used existing estimation methods.
    We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen’s matrix multiplication algorithm, and the fast Fourier transform; the latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm. Specifically for our implementation of Strassen’s algorithm we have designed and implemented a divide and conquer skeleton based on actors. For the parallel fast Fourier transform we not only used new divide and conquer skeletons but also developed a map-and-transpose skeleton that enables a good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs.
    This thesis also presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm, and workpool, with a premature termination property. We use this to implement the so-called ‘parallel repeated computation’, a special form of a speculative parallel loop. We have implemented two probabilistic primality tests, the Rabin–Miller test and the Jacobi sum test, and parallelised both with our approach. We have shown formally that the Jacobi sum test can be implemented in parallel; subsequently we parallelised it, analysed the task distribution and load balancing issues, stated fitting configurations, and produced an optimisation, which enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements.
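
    The shape of such a divide and conquer skeleton, and how an algorithm plugs into it, can be sketched in Python (the thesis's skeletons are Eden/Haskell functions that evaluate the sub-problems in parallel; the names below are illustrative, not the thesis's API). Karatsuba multiplication then only has to supply the four problem-specific ingredients:

        def dc_skeleton(trivial, solve, divide, combine, problem):
            # Generic divide and conquer; Eden would evaluate the list of
            # sub-results in parallel, here it is sequential.
            if trivial(problem):
                return solve(problem)
            subs = [dc_skeleton(trivial, solve, divide, combine, q)
                    for q in divide(problem)]
            return combine(problem, subs)

        def karatsuba(x, y):
            # Three-multiplication Karatsuba recursion on non-negative ints.
            def split_point(p):
                return max(p[0].bit_length(), p[1].bit_length()) // 2
            def trivial(p):
                return min(p) < 2 ** 32
            def solve(p):
                return p[0] * p[1]
            def divide(p):
                a, b = p
                m = split_point(p)
                a1, a0 = a >> m, a & ((1 << m) - 1)
                b1, b0 = b >> m, b & ((1 << m) - 1)
                return [(a1, b1), (a0, b0), (a1 + a0, b1 + b0)]
            def combine(p, subs):
                m = split_point(p)
                hi, lo, mid = subs
                return (hi << (2 * m)) + ((mid - hi - lo) << m) + lo
            return dc_skeleton(trivial, solve, divide, combine, (x, y))

        # karatsuba(3**100, 7**90) == 3**100 * 7**90 -> True
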
    The parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are our original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handle the common factors of the numerator or denominator of a fraction with the modulus in a novel manner; this is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gauß elimination. As always, we have performed a task distribution analysis and an estimation of the parallel execution time of our implementation; a similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables the parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
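
    The integer core of such a multiple-residue arithmetic is easy to sketch in Python: a value is held as its residues modulo pairwise coprime moduli, the componentwise operations are independent (hence data parallel), and the result is recovered via the Chinese Remainder Theorem. The rational case with its common-factor handling, the thesis's actual contribution, is deliberately not attempted here.

        from functools import reduce

        MODULI = [10007, 10009, 10037, 10039]   # pairwise coprime (all prime)

        def to_residues(x):
            return [x % m for m in MODULI]

        def add(a, b):
            # Componentwise; each residue channel can run on its own processor.
            return [(x + y) % m for x, y, m in zip(a, b, MODULI)]

        def mul(a, b):
            return [(x * y) % m for x, y, m in zip(a, b, MODULI)]

        def from_residues(r):
            # Chinese Remainder reconstruction, unique modulo prod(MODULI).
            M = reduce(lambda acc, m: acc * m, MODULI, 1)
            x = 0
            for ri, mi in zip(r, MODULI):
                Mi = M // mi
                x = (x + ri * Mi * pow(Mi, -1, mi)) % M
            return x

        # from_residues(mul(to_residues(123456), to_residues(7890)))
        #   == 123456 * 7890 -> True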