
    Simulated division with approximate factoring for the multiple recursive generator with both unrestricted multiplier and non-mersenne prime modulus

    This paper focuses on devising a general and efficient way of generating random numbers for the multiple recursive generator with both unrestricted multiplier and non-Mersenne prime modulus. We propose a new algorithm that embeds the technique of approximate factoring into the simulated division method. The proposed algorithm improves on the decomposition method in terms of both suitability for various computer word sizes and efficiency characteristics, such as the number of arithmetic operations required and the computational time. Empirical simulations are conducted to compare and evaluate the computational time of this algorithm against the decomposition method on various computers.
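The key ingredient named in the abstract, approximate factoring, is a classical trick (often credited to Schrage) for computing a*x mod p without intermediate overflow whenever p = a*q + r with r < q. A minimal sketch using the well-known MINSTD constants for illustration — this is the textbook technique the paper builds on, not the paper's unrestricted-multiplier algorithm itself:

```python
# Approximate factoring (Schrage's method): compute a*x mod p without
# forming the full product a*x.  Requires p = a*q + r with r < q.
# Constants below are the classic MINSTD parameters, used only to
# illustrate the technique.

def schrage_mulmod(a, x, p):
    q, r = p // a, p % a              # p = a*q + r
    t = a * (x % q) - r * (x // q)    # both terms fit in a machine word
    return t if t >= 0 else t + p

p, a = 2**31 - 1, 16807               # MINSTD modulus and multiplier
x = 123456789
print(schrage_mulmod(a, x, p))        # equals (a * x) % p
```

In fixed-width integer arithmetic (where the trick matters, unlike Python's bignums), both `a * (x % q)` and `r * (x // q)` stay below p, which is the whole point of the decomposition.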

    EFFICIENT COMPUTER SEARCH FOR MULTIPLE RECURSIVE GENERATORS

    Pseudo-random numbers (PRNs) are the basis for almost any statistical simulation, and the quality of the simulation depends largely on the quality of the pseudo-random number generator (PRNG) used. In this study, we use some results from number theory to propose an efficient method to accelerate the computer search for super-order maximum-period multiple recursive generators (MRGs). We conduct efficient computer searches and successfully find prime moduli p and associated orders k (k = 40751, k = 50551, k = 50873) such that R(k, p) is prime. Using these values of k, together with the generalized Mersenne prime algorithm, we find and list many efficient, portable, super-order MRGs with period lengths of approximately 10^380278.1, 10^471730.6, and 10^474729.3. In other words, using the generalized Mersenne prime algorithm, we extend some known results on efficient, portable, maximum-period MRGs. In particular, the DX/DL/DS/DT large-order generators are extended to super-order generators. For r ≤ k, super-order generators in MRG(k, p) are quite close to an ideal generator. For r > k, the r-dimensional points lie on a relatively small family of equidistant parallel hyperplanes in a high-dimensional space. The goodness of these generators depends largely on the distance between these hyperplanes. For LCGs, MRGs, and other generators with lattice structures, the spectral test — a theoretical test that gives some measure of uniformity in dimensions greater than the order k of the MRG — is widely regarded as the best figure of merit. A drawback of the spectral test is its computational complexity. We use a simple and intuitive method that employs the LLL algorithm to calculate the spectral test.
Using this method, we extend the search for better DX-k-s-t generators farther than the known value of k = 25013. In particular, we search for and list better super-order DX-k-s-t generators for k = 40751, k = 50551, and k = 50873. Finally, we examine another special class of MRGs with many nonzero terms, known as the DW-k generator. The DW-k generator's iteration can be implemented efficiently and in parallel using a k-th order matrix congruential generator (MCG) sharing the same characteristic polynomial. We extend some known results by searching for super-order DW-k generators using the super-large values of k obtained in this study. Using extensive computer searches, we find and list some super-order, maximum-period DW(k; A, B, C, p = 2^31 − v) generators.
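For concreteness, one step of a DX-type MRG with few nonzero terms can be sketched as below, assuming the DX-k-2 form x_n = B(x_{n-1} + x_{n-k}) mod p reported in the MRG literature. The parameters k, B, and p here are tiny illustrative values, not the searched super-order parameters from this study:

```python
# Hedged sketch of a DX-type MRG with few nonzero terms, assuming the
# DX-k-2 recursion x_n = B*(x_{n-1} + x_{n-k}) mod p.  The values of
# k, B, and p are illustrative only (real searches use huge k and
# prime p near 2^31).
from collections import deque

def dx_k2_stream(seed, k, B, p, n):
    state = deque(seed, maxlen=k)     # holds x_{i-k}, ..., x_{i-1}
    out = []
    for _ in range(n):
        x = (B * (state[-1] + state[0])) % p
        state.append(x)               # deque drops the oldest term
        out.append(x)
    return out

print(dx_k2_stream([1, 2, 3], k=3, B=5, p=101, n=2))
```

Only two multiplications per step (really one, since B is shared) is what makes such generators efficient even at very large order k.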

    Design, Search and Implementation of Improved Large Order Multiple Recursive Generators and Matrix Congruential Generators

    Large-order, maximum-period multiple recursive generators (MRGs) with few nonzero terms (e.g., DX-k-s generators) have become popular in the area of computer simulation. They are efficient and portable, have a long period, and have the nice property of high-dimensional equidistribution. The latter two properties become more advantageous as k increases. Performance on the spectral test, a theoretical test that provides some measure of uniformity in dimensions beyond the MRG's order k, can be improved by choosing multipliers that yield a better spectral test value. We propose a new method to compute the spectral test which is simple, intuitive, and efficient for some special classes of large-order MRGs. Using this procedure, we list "better" FMRG-k and DX-k-s generators with respect to performance on the spectral test. Even so, MRGs with few nonzero terms do not perform as well on the spectral test as MRGs with many nonzero terms. However, MRGs with many nonzero terms can be inefficient or lack a feasible parallelization method, i.e., a method of producing substreams of (pseudo) random numbers that appear independent. To implement these MRGs efficiently and in parallel, we can use an equivalent recursion from another type of generator, the matrix congruential generator (MCG), a k-dimensional generalization of a first-order linear recursion in which the multipliers are embedded in a k-by-k matrix. When MRGs are used to construct MCGs and the recursion of the MCG is implemented k at a time for a k-dimensional vector sequence, the MCG mimics k copies of an MRG run in parallel with different starting seeds. Therefore, we propose a method for efficiently finding MRGs with many nonzero terms from an MRG with few nonzero terms, and then give an efficient, parallel MCG implementation of these MRGs with many nonzero terms. This method works best for moderate order k.
For large-order MRGs with many nonzero terms, we propose a special class called DW-k. This special class has a characteristic polynomial that yields many nonzero terms and corresponds to an efficient, parallel MCG implementation.
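The MRG-to-MCG equivalence described above rests on the MRG's companion matrix: iterating the k-dimensional recursion V_{i+1} = A·V_i (mod p) reproduces the scalar MRG sequence in the vector components. A minimal sketch with small illustrative coefficients (not parameters from the thesis):

```python
# Sketch of the companion-matrix view of an MRG: for
# x_n = a1*x_{n-1} + ... + ak*x_{n-k} (mod p), one matrix-vector
# multiply advances the state vector (x_{n-1}, ..., x_{n-k}) by one step.
# Coefficients and modulus are illustrative only.

def companion(coeffs):
    k = len(coeffs)
    top = [list(coeffs)]              # first row: a1, ..., ak
    shift = [[1 if j == i - 1 else 0 for j in range(k)] for i in range(1, k)]
    return top + shift                # lower rows shift the state down

def matvec_mod(A, v, p):
    return [sum(a * x for a, x in zip(row, v)) % p for row in A]

coeffs, p = [2, 0, 3], 101            # x_n = 2*x_{n-1} + 3*x_{n-3} mod 101
A = companion(coeffs)
v = [5, 4, 1]                         # (x_2, x_1, x_0)
v = matvec_mod(A, v, p)               # -> [13, 5, 4]; x_3 = 2*5 + 3*1 = 13
```

Implementing the MCG k steps at a time, as the abstract notes, makes the k vector components behave like k parallel copies of the scalar MRG with different seeds.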

    Random number generation with multiple streams for sequential and parallel computing

    We provide a review of the state of the art on the design and implementation of random number generators (RNGs) for simulation, in both sequential and parallel computing environments. We focus on the need for multiple streams and substreams of random numbers, explain how they can be constructed and managed, review software libraries that offer them, and illustrate their usefulness via examples. We also review the basic quality criteria for good random number generators and their theoretical and empirical testing.
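One standard construction behind such multiple streams is jump-ahead: the generator's cycle is cut into disjoint blocks by composing its transition map with itself via repeated squaring, so a stream can start far along the cycle in logarithmic time. A hedged sketch for a plain LCG x_{n+1} = (a·x_n + c) mod m, with small illustrative constants:

```python
# Jump-ahead for an LCG by repeated squaring of the affine map
# f(x) = (a*x + c) mod m.  Composing maps: applying (A, C) then (a, c)
# gives (a*A, a*C + c); squaring (a, c) gives (a*a, (a+1)*c).
# Constants below are tiny and illustrative.

def lcg_jump(x, a, c, m, nu):
    A, C = 1, 0                       # accumulated map = identity
    while nu:
        if nu & 1:
            A, C = (A * a) % m, (a * C + c) % m
        a, c = (a * a) % m, ((a + 1) * c) % m   # square the current map
        nu >>= 1
    return (A * x + C) % m            # = f applied nu times to x

# Jumping 3 steps matches stepping 3 times:
x = 7
for _ in range(3):
    x = (5 * x + 3) % 101
print(x, lcg_jump(7, 5, 3, 101, 3))   # both 59
```

The same idea applies to MRGs via matrix powers; libraries such as RngStreams use precomputed jump matrices to hand out streams and substreams cheaply.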

    Hardware Implementation of Barrett Reduction Exploiting Constant Multiplication

    The efficient realization of an elliptic curve cryptosystem is contingent on the efficiency of scalar multiplication. These systems can be improved by optimizing the most costly underlying finite field arithmetic operations, such as modular reduction. There are elliptic curves over prime fields for which very efficient reduction formulas are possible due to the special structure of the moduli. For prime moduli of arbitrary form, however, a general reduction method, such as Barrett's reduction algorithm, is necessary. Barrett's algorithm performs modular reduction efficiently by using multiplication rather than division, an operation which is generally expensive to realize in hardware. We note, however, that when an elliptic curve cryptosystem is defined over a fixed prime field, all multiplication steps in Barrett's scheme can be realized through constant multiplications; this allows for further optimization. In this thesis, we study the influence that constant multipliers have on four different Barrett reduction variants targeting the Virtex-7 (xc7vx485tffg1157-1). We use the FloPoCo core generator to construct constant-multiplier implementations for the different multiplication steps required in each scheme. We then create a hybrid constant-multiplier circuit based on Karatsuba multiplication which uses smaller FloPoCo-generated base multipliers. It is shown that for certain multiplication steps, the hybrid design improves the resource utilization of the constant-multiplier circuit at the cost of an increase in the critical path delay. A performance comparison of different Barrett reduction circuits using different combinations of constant-multiplier architectures is presented. Additionally, a fully pipelined implementation of each Barrett reduction variant is designed, capable of achieving operating frequencies in the range of 496-504 MHz depending on the Barrett scheme considered.
With the addition of a 256-bit pipelined Karatsuba multiplier circuit, we also present a compact, fully pipelined modular multiplier based on these Barrett architectures, capable of achieving very high throughput compared to others in the literature without the use of embedded multipliers.
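The classical Barrett reduction underlying all of these variants can be sketched in a few lines. Once the modulus n is fixed, the precomputed factor mu is the constant that the thesis's hardware turns into constant-multiplier circuits; the modulus below is illustrative:

```python
# Classical Barrett reduction: x mod n for 0 <= x < n**2, using only
# multiplications and shifts.  mu = floor(2^(2k) / n) is precomputed
# once per modulus; the quotient estimate is off by at most 2, so at
# most two corrective subtractions are needed.

def barrett_setup(n):
    k = n.bit_length()
    return k, (1 << (2 * k)) // n     # mu

def barrett_reduce(x, n, k, mu):
    q = (x * mu) >> (2 * k)           # estimate of x // n
    r = x - q * n
    while r >= n:                     # at most twice
        r -= n
    return r

n = (1 << 255) - 19                   # illustrative fixed prime modulus
k, mu = barrett_setup(n)
print(barrett_reduce(3**300, n, k, mu) == pow(3, 300, n))
```

In hardware, `x * mu` and `q * n` become multiplications by the constants mu and n, which is exactly where FloPoCo-style constant-multiplier optimization applies.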

    Data Fingerprinting -- Identifying Files and Tables with Hashing Schemes

    Master's thesis in Computer Science. INTRODUCTION: Although hash functions are nothing new, they are not limited to cryptographic purposes. One important field is data fingerprinting. Here, the purpose is to generate a digest which serves as a fingerprint (or a license plate) that uniquely identifies a file. More recently, fuzzy fingerprinting schemes — which scrap the avalanche effect in favour of detecting local changes — have hit the spotlight. The main purpose of this project is to find ways to classify text tables, and to discover where potential changes or inconsistencies have happened. METHODS: Large parts of this report can be considered applied discrete mathematics, and finite fields and combinatorics have played an important part. Rabin's fingerprinting scheme was tested extensively and compared against existing cryptographic algorithms, CRC, and FNV. Moreover, a self-designed fuzzy hashing algorithm with the preliminary name No-Frills Hash (NFHash) has been created and tested against Nilsimsa and Spamsum. NFHash is based on Mersenne primes and uses a sliding window to create a fuzzy hash. Furthermore, the usefulness of lookup tables (with partial seeds) was also explored. The fuzzy hashing algorithm has also been combined with a k-NN classifier to get an overview of its ability to classify files. In addition to NFHash, Bloom filters combined with Merkle trees have been the most important part of this report. This combination allows a user to see where a change was made, despite the fact that hash functions are one-way. Large parts of this project have dealt with the study of other open-source libraries and applications, such as Cassandra and SSDeep, as well as how Bitcoin works. Optimizations have played a crucial role as well; different approaches to a problem might lead to the same solution, but resource consumption can be very different.
RESULTS: The results have shown that the Merkle-tree-based approach can track changes to a table very quickly and efficiently, as it is conservative with CPU resources. Moreover, the self-designed algorithm NFHash also does well in terms of file classification when coupled with a k-NN classifier. CONCLUSION: Hash functions refer to a very diverse set of algorithms, not just algorithms that serve a limited purpose. Fuzzy fingerprinting schemes can still be considered to be in their infancy, but a lot has happened in the last ten years. This project has introduced two new ways to create and compare hashes of similar, yet not necessarily identical, files — or to detect if (and to what extent) a file was changed. Note that the algorithms presented here should be considered prototypes, and still need large-scale testing to iron out potential flaws.
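The Merkle-tree idea used to localize table changes can be sketched as follows: hash each row, pair hashes up to a single root, and compare two tables' trees — equal roots mean identical tables, while unequal roots let you walk down to the differing rows. SHA-256 and the sample rows below are illustrative stand-ins, not the thesis's exact scheme:

```python
# Sketch of Merkle-tree change localization over table rows.  Each row
# is hashed (SHA-256 here as a stand-in); hashes are paired level by
# level up to a root.  Equal roots => identical tables; otherwise the
# leaf level reveals which rows differ.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(rows):
    level = [h(r.encode()) for r in rows]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last hash on odd counts
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels                     # levels[-1][0] is the root

def diff_rows(a, b):
    la, lb = merkle_levels(a), merkle_levels(b)
    if la[-1][0] == lb[-1][0]:        # equal roots: nothing changed
        return []
    return [i for i, (x, y) in enumerate(zip(la[0], lb[0])) if x != y]

rows_a = ["id,name", "1,alice", "2,bob",   "3,carol"]
rows_b = ["id,name", "1,alice", "2,bobby", "3,carol"]
print(diff_rows(rows_a, rows_b))      # row index 2 changed
```

A root comparison costs one hash equality check, which is why the thesis finds this approach cheap on CPU; descending the tree (rather than scanning all leaves as this sketch does) narrows the change in logarithmically many comparisons.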