
    Microcomputer Algorithms for Prime Number Testing

    This paper gives a survey of different methods of prime number testing. Emphasis has been given to algorithms based upon Fermat's theorem: if p is an odd prime number, then p divides a^p - a. All of the computer programs described in this paper have been written for use on microcomputers, and so the feasibility of using microcomputers is also discussed. Finally, numbers of various forms have been considered for primality, with special attention given to Mersenne and Fermat numbers. It is hoped that some of the information contained in this paper may provide worthwhile enrichment ideas for mathematics educators.
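
    Fermat's theorem as stated here translates directly into a probabilistic test. Below is a minimal Python sketch of such a Fermat test (not reproduced from the paper, whose programs targeted microcomputers); composites can still pass for some bases, so a passing result only means "probably prime".

```python
import random

def fermat_test(n: int, rounds: int = 10) -> bool:
    """Return False if n is certainly composite, True if n is probably prime."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        # Checking a^(n-1) ≡ 1 (mod n) is equivalent to n dividing
        # a^n - a whenever gcd(a, n) = 1.
        if pow(a, n - 1, n) != 1:
            return False
    return True
```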

    Computationally efficient search for large primes

    To satisfy the speed of communication and to meet the demand for continuously larger prime numbers, primality testing and prime number generating algorithms require continuous advancement. To find the most efficient algorithm, a need for a survey of methods arises; concurrently, an urge for the analysis of the algorithms' performances emanates. The critical criteria in the analysis of prime number generation are the number of probes, the number of generated primes, and the average time required to produce one prime. Hence, the purpose of this thesis is to indicate the best performing algorithm. The survey of methods, the establishment of comparison criteria, and the comparison of approaches are the required steps to find the best performing algorithm. In the first step of this research the methods were surveyed and classified using the approach described in Menezes [66]. While chapter 2 sorted, described, compared, and summarized primality testing methods, chapter 3 sorted, described, compared, and summarized prime number generating methods. In the next step, applying a uniform technique, computer programs were written for the selected algorithms. The programs were installed on the Unix operating system, running on the Sun 5.8 server, to perform the computer experiments. The computer experiments' results pertaining to the selected algorithms provided the parameters required to compare the algorithms' performances. The results from the computer experiments were tabulated to compare the parameters and to indicate the best performing algorithm. The survey of methods indicated that the deterministic and randomized approaches are the main approaches in prime number generation. Random number generation found application in cryptographic key generation. Contemporaneously, a need for deterministically generated provable primes emerged in code encryption, decryption, and other cryptographic areas. The analysis of the algorithms' performances indicated that the prime numbers generated through the randomized techniques required a smaller number of probes. This is due to the method that eliminates the non-primes in the initial step, pre-testing randomly generated candidates for possible divisibility factors. The analysis indicated that a smaller number of probes increases an algorithm's efficiency. Further analysis indicated that the ratio of randomly generated primes to the expected number of primes generated in a specific interval is smaller than for deterministically generated primes. In this comparison the Miller-Rabin and Gordon algorithms, which randomly generate primes, were compared against the SFA and the Sequences Containing Primes algorithms. The name Sequences Containing Primes algorithm is abbreviated in this thesis as 6kseq. In the interval [99000,100000] the Miller-Rabin method generated 57 out of 87 expected primes; the SFA algorithm generated 83 out of 87 approximated primes. The expected number of primes was computed using the approximation n/ln(n) presented by Menezes [66]. The average time consumed in originating one prime in the [99000,100000] interval was 0.056 s for the Miller-Rabin test, 0.0001 s for SFA, and 0.0003 s for 6kseq. Gordon's algorithm in the interval [1,100000] required 100578 probes and generated 32 out of 8686 expected primes.
The algorithm Parametric Representation of Composite Twins and Generation of Prime and Quasi Prime Numbers, invented by Doctor Verkhovsky [108], verifies and generates primes and quasi primes using special mathematical constructs. This algorithm indicated the best performance in the interval [1,1000], generating and verifying 3585 variants of provable primes or quasi primes. The Parametric Representation of Composite Twins algorithm consumed an average time per prime or quasi prime of 0.0022315 s. The Parametric Representation of Composite Twins and Generation of Prime and Quasi Prime Numbers algorithm implements a unique method of testing both primes and quasi-primes. Because of the uniqueness of the method that verifies both primes and quasi-primes, this algorithm cannot be compared with the other primality testing or prime number generating algorithms. The ((a!)^2)*((-1)^b) Function in Generating Primes algorithm [105], developed by Doctor Verkhovsky, was compared against the extended Fermat algorithm. In the range [1,1000] the [105] algorithm exhausted an average of 0.00001 s per prime and originated 167 primes, while the extended Fermat algorithm also produced 167 primes but consumed an average of 0.00599 s per prime. Thus, the computer experiments and comparison of methods proved that the SFA algorithm is deterministic and originates provable primes. The survey of methods and the analysis of selected approaches indicated that the SFA sieve algorithm, which sequentially generates primes, is computationally efficient and showed better performance considering computational speed, simplicity of method, and the number of primes generated in the specified intervals.
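
    Of the algorithms compared above, the Miller-Rabin test is the one with a widely documented standard form. The sketch below is a textbook version, not the thesis's implementation; the interval scan mirrors the [99000, 100000] experiment only in spirit.

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test: writes n - 1 = 2^r * d with d odd."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True

# Count the probable primes in [99000, 100000].
print(sum(miller_rabin(n) for n in range(99000, 100001)))
```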

    Mersenne numbers

    These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures, motivated by the open-ended promise of parallelism, are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point. The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted, and to suggest additional material associated with this subject.
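
    For context, the complete Mp-testing referred to above is conventionally carried out with the Lucas-Lehmer test, which decides the primality of a Mersenne number M_p = 2^p - 1 in p - 2 modular squarings. A minimal sketch:

```python
def lucas_lehmer(p: int) -> bool:
    """True iff M_p = 2^p - 1 is prime, for odd prime p.

    Uses the recurrence s_0 = 4, s_{k+1} = s_k^2 - 2; M_p is prime
    iff s_{p-2} ≡ 0 (mod M_p).
    """
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))  # True False
```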

    A Novel Method of Encryption using Modified RSA Algorithm and Chinese Remainder Theorem

    Security can only be as strong as the weakest link, and in the world of cryptography it is now well established that the weakest link lies in the implementation of cryptographic algorithms. This project deals with RSA algorithm implementation with and without the Chinese Remainder Theorem, and also using a variable-radix number system. In practice, RSA public exponents are chosen to be small, which makes encryption and signature verification reasonably fast. Private exponents, however, should never be small, for obvious security reasons. This makes decryption slow. One way to speed things up is to split the computation, calculating modulo p and modulo q using the Chinese Remainder Theorem. For smart cards, which usually have limited computing power, this is a very important and useful technique. This project aims at implementing the RSA algorithm using the Chinese Remainder Theorem, as well as devising a modification that makes it still harder to decrypt a given encrypted message, by employing a variable-radix system to encrypt the given message in the first place.
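
    The CRT speed-up described here is standard: with n = pq, decryption can work modulo p and q separately and recombine the two halves. A toy sketch with illustrative primes (not the project's code):

```python
p, q = 61, 53
n = p * q                               # public modulus
e = 17                                  # small public exponent
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent

def decrypt_plain(c: int) -> int:
    return pow(c, d, n)                 # one full-size exponentiation

def decrypt_crt(c: int) -> int:
    # Two half-size exponentiations, then Garner's recombination.
    dp, dq = d % (p - 1), d % (q - 1)
    mp, mq = pow(c % p, dp, p), pow(c % q, dq, q)
    h = (pow(q, -1, p) * (mp - mq)) % p
    return mq + h * q

m = 42
c = pow(m, e, n)                        # encryption
assert decrypt_plain(c) == decrypt_crt(c) == m
```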

    Dynamic block encryption with self-authenticating key exchange

    One of the greatest challenges facing cryptographers is the mechanism used for key exchange. When secret data is transmitted, the chances are that an attacker will try to intercept and decrypt the message. Having done so, he/she might just gain advantage from the information obtained, or attempt to tamper with the message and thus mislead the recipient. Both cases are equally fatal and may cause great harm as a consequence. In cryptography, there are two commonly used methods of exchanging secret keys between parties. In the first method, symmetric cryptography, the key is sent in advance over some secure channel which only the intended recipient can read. The second method of key sharing is a public key exchange method, where each party has a private and a public key: the public key is shared and the private key is kept locally. In both cases, keys are exchanged between two parties. In this thesis, we propose a method whereby the risk of exchanging keys is minimised. The key is embedded in the encrypted text using a process that we call 'chirp coding', and recovered by the recipient using a process that is based on correlation. The 'chirp coding parameters' are exchanged between users by employing a USB flash memory retained by each user. If the keys are compromised they are still not usable, because an attacker can only have access to part of the key. Alternatively, the software can be configured to operate in a one-time parameter mode; in this mode, the parameters are agreed upon in advance, and there is no parameter exchange during file transmission, except, of course, the key embedded in the ciphertext. The thesis also introduces a method of encryption which utilises dynamic blocks, where the block size is different for each block. Prime numbers are used to drive two random number generators: a Linear Congruential Generator (LCG), which takes in the seed and initialises the system, and a Blum Blum Shub (BBS) generator, which is used to generate random streams to encrypt messages, images or video clips, for example. In each case, the key created is text dependent and therefore will change as each message is sent. The scheme presented in this research is composed of five basic modules. The first module is the key generation module, where the key to be generated is message dependent. The second module, the encryption module, performs data encryption. The third module, the key exchange module, embeds the key into the encrypted text. Once this is done, the message is transmitted and the recipient uses the key extraction module to retrieve the key; finally, the decryption module is executed to decrypt the message and authenticate it. In addition, the message may be compressed before encryption and decompressed by the recipient after decryption using standard compression tools.
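
    As a rough illustration of the two generators named above, here is a minimal sketch of an LCG seeding a Blum Blum Shub stream. The parameters are toy values of my choosing, not the thesis's; a real BBS modulus needs two large primes, each congruent to 3 mod 4, and a seed coprime to their product.

```python
from math import gcd

def lcg(seed: int, a: int = 1103515245, c: int = 12345, m: int = 2**31):
    """Linear congruential generator: x <- (a*x + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def bbs_bits(seed: int, p: int = 11, q: int = 23):
    """Blum Blum Shub: repeated squaring mod n = p*q, one bit per step."""
    n = p * q
    x = seed * seed % n           # standard start: x_0 = seed^2 mod n
    while True:
        x = x * x % n
        yield x & 1

# Use the LCG to initialise the system: draw a BBS seed coprime to n.
g = lcg(2024)
s = next(g)
while gcd(s, 11 * 23) != 1:
    s = next(g)
bits = bbs_bits(s)
keystream = [next(bits) for _ in range(16)]
```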

    A high-speed integrated circuit with applications to RSA Cryptography

    The rapid growth in the use of computers and networks in government, commercial and private communications systems has led to an increasing need for these systems to be secure against unauthorised access and eavesdropping. To this end, modern computer security systems employ public-key ciphers, of which probably the most well known is the RSA ciphersystem, to provide both secrecy and authentication facilities. The basic RSA cryptographic operation is a modular exponentiation where the modulus and exponent are integers typically greater than 500 bits long. Therefore, obtaining reasonable encryption rates using the RSA cipher requires that it be implemented in hardware. This thesis presents the design of a high-performance VLSI device, called the WHiSpER chip, that can perform the modular exponentiations required by the RSA cryptosystem for moduli and exponents up to 506 bits long. The design has an expected throughput in excess of 64 kbit/s, making it attractive for use both as a general RSA processor within the security function provider of a security system, and for direct use on moderate-speed public communication networks such as ISDN. The thesis investigates the low-level techniques used for implementing high-speed arithmetic hardware in general, and reviews the methods used by designers of existing modular multiplication/exponentiation circuits with respect to circuit speed and efficiency. A new modular multiplication algorithm, MMDDAMMM, based on Montgomery arithmetic, together with an efficient multiplier architecture, is proposed to remove the speed bottleneck of previous designs. Finally, the implementation of the new algorithm and architecture within the WHiSpER chip is detailed, along with a discussion of the application of the chip to ciphering and key generation.
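
    The MMDDAMMM algorithm itself is not reproduced here, but the Montgomery arithmetic it builds on is standard: Montgomery reduction (REDC) computes t * R^(-1) mod n without dividing by n, which is what makes it attractive in hardware. A software sketch:

```python
def montgomery_setup(n: int, bits: int):
    """Precompute R = 2^bits and n' with n * n' ≡ -1 (mod R); n must be odd."""
    r = 1 << bits
    n_prime = pow(-n, -1, r)
    return r, n_prime

def redc(t: int, n: int, r: int, n_prime: int) -> int:
    """Montgomery reduction: returns t * R^(-1) mod n, for 0 <= t < n*R."""
    m = ((t % r) * n_prime) % r
    u = (t + m * n) // r          # exact division: R divides t + m*n
    return u - n if u >= n else u

# Multiply a * b mod n entirely in the Montgomery domain.
n, bits = 97, 8
r, n_prime = montgomery_setup(n, bits)
a, b = 17, 22
a_m, b_m = a * r % n, b * r % n             # map into the domain
prod_m = redc(a_m * b_m, n, r, n_prime)     # = a*b*R mod n
assert redc(prod_m, n, r, n_prime) == a * b % n   # map back out
```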

    Factoring integers defined by second and third order recurrence relations

    Factoring the first two hundred and fifty Fibonacci numbers using just trial division would take an unreasonable amount of time. Instead, the problem must be attacked using modern factorization algorithms. We look not only at the Fibonacci numbers, but also at factoring integers defined by other second and third order recurrence relations. Specifically, we include the Fibonacci, Tribonacci and Lucas numbers. We have verified the known factorizations of the first 382 Fibonacci numbers and the first 185 Lucas numbers, and we have also completely factored the first 311 Tribonacci numbers.
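
    The recurrences themselves are straightforward to generate; the difficulty the thesis addresses is entirely in factoring the resulting integers, which grow exponentially. A small sketch of the sequences, with trial division shown only for a tiny index (conventions for the Tribonacci starting values vary):

```python
def fibonacci(k: int) -> int:
    a, b = 0, 1                   # F_0 = 0, F_1 = 1
    for _ in range(k):
        a, b = b, a + b
    return a

def lucas(k: int) -> int:
    a, b = 2, 1                   # L_0 = 2, L_1 = 1
    for _ in range(k):
        a, b = b, a + b
    return a

def tribonacci(k: int) -> int:
    a, b, c = 0, 0, 1             # T_0 = T_1 = 0, T_2 = 1 (one convention)
    for _ in range(k):
        a, b, c = b, c, a + b + c
    return a

def trial_factor(n: int) -> list[int]:
    """Trial division -- feasible only for small terms, as noted above."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_factor(fibonacci(19)))  # F_19 = 4181 = [37, 113]
```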

    Cryptography and number theory in the classroom -- Contribution of cryptography to mathematics teaching

    Cryptography fascinates people of all generations and is increasingly presented as an example of the relevance and application of the mathematical sciences. Indeed, many principles of modern cryptography can be described at a secondary school level. In this context, the mathematical background is often only sparingly shown. In the worst case, giving mathematics this character of a tool reduces the application of mathematical insights to the message "cryptography contains math". This paper examines the question of what else cryptography can offer to mathematics education. Using the RSA cryptosystem and related content, specific mathematical competencies are highlighted that complement standard teaching, can be taught with cryptography as an example, and extend and deepen key mathematical concepts.
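
    As a flavour of the classroom-scale treatment discussed here, every step of a toy RSA exchange can be checked by hand; the primes below are chosen for legibility, not security:

```python
p, q = 11, 13
n = p * q                    # 143, the public modulus
phi = (p - 1) * (q - 1)      # 120
e = 7                        # public exponent, coprime to phi
d = pow(e, -1, phi)          # 103, since 7 * 103 = 721 ≡ 1 (mod 120)

m = 9                        # the "message", an integer < n
c = pow(m, e, n)             # encryption: c = m^e mod n = 48
assert pow(c, d, n) == m     # decryption recovers the message
```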