8 research outputs found

    Implementation and analysis of the generalised new Mersenne number transforms for encryption

    Get PDF
    PhD Thesis. Encryption is a vast subject covering myriad techniques to conceal and safeguard data and communications. Among the available techniques, methodologies that incorporate number theoretic transforms (NTTs) have gained recognition, specifically the new Mersenne number transform (NMNT). Recently, two new transforms have been introduced that extend the NMNT to a new generalised suite of transforms referred to as the generalised NMNT (GNMNT). These two new transforms are termed the odd NMNT (ONMNT) and the odd-squared NMNT (O2NMNT). Being based on the Mersenne numbers, the GNMNTs are extremely versatile with respect to vector lengths. The GNMNTs can also be implemented using fast algorithms, employing multiple and combinational radices over one or more dimensions. Algorithms for both the decimation-in-time (DIT) and decimation-in-frequency (DIF) methodologies using radix-2, radix-4 and split-radix are presented, including their respective complexity and performance analyses. Whilst the original NMNT has seen a significant amount of research applied to it with respect to encryption, the ONMNT and O2NMNT can utilise similar techniques that are proven to show stronger characteristics when measured using established methodologies defining diffusion. Diffusion with the GNMNTs is exhaustively assessed over a small but reasonably sized vector space, and a comparison with the Rijndael cipher, the current advanced encryption standard (AES) algorithm, is presented that confirms strong diffusion characteristics. Implementation techniques using general-purpose computing on graphics processing units (GPGPU) have been applied, and these are further assessed and discussed. Focus is drawn upon the future of cryptography, and in particular cryptology, as a consequence of the emergence and rapid progress of GPGPU and consumer-based parallel processing.
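The error-free convolution property of NTTs that this line of work builds on can be illustrated in a few lines. The following is a minimal sketch only: it uses the Fermat prime 257 and a naive O(N^2) transform for readability, whereas the thesis transforms are defined modulo Mersenne primes with a different kernel construction; the modulus, root, and length here are illustrative assumptions.

```python
# Minimal number-theoretic transform (NTT) and exact cyclic convolution.
# Illustrative parameters: Fermat prime modulus, not the thesis's
# Mersenne-prime NMNT kernels.

P = 257      # prime modulus; the transform length 8 divides P - 1
N = 8        # transform length
OMEGA = 64   # a primitive 8th root of unity mod 257 (64**8 % 257 == 1)

def ntt(x, omega):
    """Naive forward transform: X[k] = sum_n x[n] * omega**(n*k) mod P."""
    return [sum(x[n] * pow(omega, n * k, P) for n in range(N)) % P
            for k in range(N)]

def intt(X):
    """Inverse transform: use the inverse root and scale by N^-1 mod P."""
    inv_n = pow(N, -1, P)
    x = ntt(X, pow(OMEGA, P - 2, P))  # OMEGA^-1 via Fermat's little theorem
    return [(v * inv_n) % P for v in x]

def conv(a, b):
    """Exact cyclic convolution via the convolution theorem, all mod P."""
    A, B = ntt(a, OMEGA), ntt(b, OMEGA)
    return intt([(u * v) % P for u, v in zip(A, B)])
```

Because all arithmetic is modular, the convolution result is exact whenever the true coefficients stay below the modulus, which is the "error-free" property the abstract refers to.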

    Implementation and Evaluation of Algorithmic Skeletons: Parallelisation of Computer Algebra Algorithms

    Get PDF
    This thesis presents design and implementation approaches for the parallel algorithms of computer algebra. We use algorithmic skeletons as well as further approaches, such as data parallel arithmetic and actors. We have implemented skeletons for divide and conquer algorithms and for some special parallel loops that we call 'repeated computation with a possibility of premature termination'. We introduce in this thesis a rational data parallel arithmetic. We focus on parallel symbolic computation algorithms; for these algorithms our arithmetic provides a generic parallelisation approach. The implementation is carried out in Eden, a parallel functional programming language based on Haskell. This choice enables us to encode both the skeletons and the programs in the same language. Moreover, it allows us to refrain from using two different languages, one for the implementation and one for the interface, in our implementation of computer algebra algorithms. Further, this thesis presents methods for the evaluation and estimation of parallel execution times. We partition the parallel execution time into two components. One of them accounts for the quality of the parallelisation; we call it the 'parallel penalty'. The other is the sequential execution time. For the estimation, we predict both components separately, using statistical methods. This enables very confident estimations, while using drastically fewer measurement points than other methods. We have applied both our evaluation and estimation approaches to the parallel programs presented in this thesis. We have also used existing estimation methods. We developed divide and conquer skeletons for the implementation of fast parallel multiplication. We have implemented the Karatsuba algorithm, Strassen's matrix multiplication algorithm and the fast Fourier transform. The latter was used to implement polynomial convolution, which leads to a further fast multiplication algorithm.
Specifically for our implementation of Strassen's algorithm we have designed and implemented a divide and conquer skeleton based on actors. We have implemented the parallel fast Fourier transform, and not only did we use new divide and conquer skeletons, but we also developed a map-and-transpose skeleton. It enables good parallelisation of the Fourier transform. The parallelisation of Karatsuba multiplication shows very good performance. We have analysed the parallel penalty of our programs and compared it to the serial fraction, an approach known from the literature. We also performed execution time estimations of our divide and conquer programs. This thesis presents a parallel map+reduce skeleton scheme. It allows us to combine the usual parallel map skeletons, such as parMap, farm and workpool, with a premature termination property. We use this to implement the so-called 'parallel repeated computation', a special form of a speculative parallel loop. We have implemented two probabilistic primality tests: the Rabin–Miller test and the Jacobi sum test. We parallelised both with our approach. We analysed the task distribution and identified fitting configurations of the Jacobi sum test. We have shown formally that the Jacobi sum test can be implemented in parallel. Subsequently, we parallelised it, analysed the load balancing issues, and produced an optimisation. The latter enabled a good implementation, as verified using the parallel penalty. We have also estimated the performance of the tests for further input sizes and numbers of processing elements. The parallelisation of the Jacobi sum test and our generic parallelisation scheme for the repeated computation are our original contributions. The data parallel arithmetic was defined not only for integers, which is already known, but also for rationals. We handled the common factors of the numerator or denominator of the fraction with the modulus in a novel manner.
This is required to obtain a true multiple-residue arithmetic, a novel result of our research. Using these mathematical advances, we have parallelised the determinant computation using Gauß elimination. As always, we have performed task distribution analysis and estimation of the parallel execution time of our implementation. A similar computation in Maple emphasised the potential of our approach. Data parallel arithmetic enables parallelisation of entire classes of computer algebra algorithms. Summarising, this thesis presents and thoroughly evaluates new and existing design decisions for high-level parallelisations of computer algebra algorithms.
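The central idea of an algorithmic skeleton is to separate the recursion pattern (divide, conquer, combine) from the concrete algorithm that instantiates it. The thesis implements this in Eden; as a language-neutral sketch only, here is a sequential Python rendering of a divide and conquer skeleton instantiated with Karatsuba multiplication. All names are illustrative, and the Eden versions additionally manage process creation and communication, which this sketch omits.

```python
# A generic (sequential) divide and conquer skeleton: the pattern is
# fixed once, and algorithms plug in their own four ingredients.
def dc(trivial, solve, divide, combine, p):
    if trivial(p):
        return solve(p)
    return combine(p, [dc(trivial, solve, divide, combine, q)
                       for q in divide(p)])

# Karatsuba's three-subproblem split of x = xh*2^m + xl, y = yh*2^m + yl.
def split_point(p):
    x, y = p
    return max(x.bit_length(), y.bit_length()) // 2

def kara_divide(p):
    x, y = p
    m = split_point(p)
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    return [(xh, yh), (xl, yl), (xh + xl, yh + yl)]

def kara_combine(p, results):
    a, d, s = results          # xh*yh, xl*yl, (xh+xl)*(yh+yl)
    m = split_point(p)
    e = s - a - d              # the cross term, with one fewer multiply
    return (a << (2 * m)) + (e << m) + d

def karatsuba(x, y):
    return dc(lambda p: p[0] < 256 or p[1] < 256,  # small enough?
              lambda p: p[0] * p[1],               # solve directly
              kara_divide, kara_combine, (x, y))
```

In a parallel skeleton the recursive calls in `dc` would be evaluated as separate processes; only the four ingredient functions would change between Karatsuba, Strassen, and the FFT.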

    Rader–Brenner Algorithm for Computing New Mersenne Number Transform

    No full text

    Putting Chinese natural knowledge to work in an eighteenth-century Swiss canton: the case of Dr Laurent Garcin

    Get PDF
    Symposium: S048 - Putting Chinese natural knowledge to work in the long eighteenth century. This paper takes as a case study the experience of the eighteenth-century Swiss physician Laurent Garcin (1683-1752) with Chinese medical and pharmacological knowledge. A Neuchâtel bourgeois of Huguenot origin, who studied in Leiden with Hermann Boerhaave, Garcin spent nine years (1720-1729) in South and Southeast Asia as a surgeon in the service of the Dutch East India Company. Upon his return to Neuchâtel in 1739 he became primus inter pares in the small local community of physician-botanists, introducing them to the artificial sexual system of classification. He practiced medicine, incorporating treatments acquired during his travels, taught botany, collected rare plants for major botanical gardens, and contributed to the Journal Helvetique on a range of topics. He was elected a Fellow of the Royal Society of London, where two of his papers were read in translation and published in the Philosophical Transactions; one of these concerned the mangosteen (Garcinia mangostana), leading Linnaeus to name the genus Garcinia after Garcin. He was likewise consulted as an expert on the East Indies, exotic flora, and medicines, and contributed to important publications on these topics. During his time with the Dutch East India Company Garcin encountered Chinese medical practitioners whose work he evaluated favourably as being on a par with that of the Brahmin physicians, whom he particularly esteemed. Yet Garcin never went to China, basing his entire experience of Chinese medical practice on what he witnessed in the Chinese diaspora in Southeast Asia (the 'East Indies'). This case demonstrates that there were myriad routes to Europeans developing an understanding of Chinese natural knowledge; the Chinese diaspora also afforded a valuable opportunity for comparisons of its knowledge and practice with other non-European bodies of medical and natural (e.g. pharmacological) knowledge.

    Review of Particle Physics

    Get PDF
    The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 2,143 new measurements from 709 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. Particle properties and search limits are listed in Summary Tables. We give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 120 reviews are many that are new or heavily revised, including a new review on Machine Learning, and one on Spectroscopy of Light Meson Resonances. The Review is divided into two volumes. Volume 1 includes the Summary Tables and 97 review articles. Volume 2 consists of the Particle Listings and contains also 23 reviews that address specific aspects of the data presented in the Listings.

    Review of Particle Physics

    Get PDF
    The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 2,143 new measurements from 709 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. Particle properties and search limits are listed in Summary Tables. We give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 120 reviews are many that are new or heavily revised, including a new review on Machine Learning, and one on Spectroscopy of Light Meson Resonances. The Review is divided into two volumes. Volume 1 includes the Summary Tables and 97 review articles. Volume 2 consists of the Particle Listings and contains also 23 reviews that address specific aspects of the data presented in the Listings. The complete Review (both volumes) is published online on the website of the Particle Data Group (pdg.lbl.gov) and in a journal. Volume 1 is available in print as the PDG Book. A Particle Physics Booklet with the Summary Tables and essential tables, figures, and equations from selected review articles is available in print, as a web version optimized for use on phones, and as an Android app. Funding: United States Department of Energy (DOE) DE-AC02-05CH11231; Government of Japan (Ministry of Education, Culture, Sports, Science and Technology); Istituto Nazionale di Fisica Nucleare (INFN); Physical Society of Japan (JPS); European Laboratory for Particle Physics (CERN).

    Review of Particle Physics

    Get PDF
    The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 2,143 new measurements from 709 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. Particle properties and search limits are listed in Summary Tables. We give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 120 reviews are many that are new or heavily revised, including a new review on Machine Learning, and one on Spectroscopy of Light Meson Resonances. The Review is divided into two volumes. Volume 1 includes the Summary Tables and 97 review articles. Volume 2 consists of the Particle Listings and contains also 23 reviews that address specific aspects of the data presented in the Listings. The complete Review (both volumes) is published online on the website of the Particle Data Group (pdg.lbl.gov) and in a journal. Volume 1 is available in print as the PDG Book. A Particle Physics Booklet with the Summary Tables and essential tables, figures, and equations from selected review articles is available in print, as a web version optimized for use on phones, and as an Android app.

    Development of efficient algorithms for fast computation of discrete transforms

    No full text
    Transforms are widely used in diverse applications of science, engineering and technology. In particular, the field of digital signal processing has grown rapidly and achieved its performance through the development of fast algorithms for computing discrete transforms. This thesis focuses on the computation, generalisation and applications of four commonly used transforms, namely the new Mersenne number transform (NMNT), the discrete Fourier transform (DFT), the discrete Hartley transform (DHT), and the Walsh–Hadamard transform (WHT). As a result, several new algorithms are developed and two number theoretic transforms (NTTs) are introduced. The NMNT has been proved to be an important transform as it is utilised for error-free calculation of convolution and correlation with long transform lengths. In this thesis, new algorithms for the fast calculation of the NMNT based on the Rader–Brenner approach are developed, where the transform's structure is enhanced and the lowest multiplicative complexity among all existing NMNT algorithms is achieved. Two new NTTs defined modulo the Mersenne primes, named the odd NMNT (ONMNT) and the odd-squared NMNT (O2NMNT), are introduced for incorporation into the generalised NMNT (GNMNT) transforms, which are categorised by type, with detailed instructions regarding their derivations. Development of their radix-2 and split-radix algorithms, along with an example of the calculation of different types of convolutions, shows that these new transforms are suitable for a wide range of applications. In order to take advantage of the simplest structural complexity provided by the radix-2 algorithm and the reduced computational complexity offered by a higher radix algorithm, a technique suitable for combining these two algorithms has been proposed, producing new fast Fourier transform (FFT) algorithms known as radix-Z' FFTs.
In this thesis, a general decomposition method for developing these algorithms is introduced based on the decimation-in-time approach, applicable to one- and multidimensional FFTs. The DHT has been proposed as an efficient alternative to the DFT for real data applications, because it is a real-to-real transform and possesses properties similar to those of the DFT. Based on the relationship between the DHT and the complex-valued DFT, a unified 'FFT to FHT transition approach' is presented, providing a translation of FFT algorithms into their fast Hartley transform (FHT) counterparts. Using this approach, many new FHT algorithms in the one- and multidimensional cases are obtained. Finally, the combination of the WHT with the DFT has been proved to be a good candidate for improving the performance of orthogonal frequency division multiplexing (OFDM) systems. Therefore, part of this thesis deals with the Walsh–Hadamard–Fourier transform (WFT), which combines these transforms into a single orthogonal transform. Development of fast WFT (FWFT) algorithms has shown a significant reduction in the number of arithmetic operations and computer run times. EThOS - Electronic Theses Online Service.
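The relationship between the DHT and the DFT that an 'FFT to FHT transition approach' can exploit is the standard identity H[k] = Re F[k] - Im F[k] for a real input sequence, where the DHT kernel is cas(t) = cos(t) + sin(t). A naive Python sketch (direct O(N^2) sums, purely illustrative and unrelated to the thesis's actual fast algorithms) makes the identity concrete:

```python
import cmath
import math

def dft(x):
    # Naive DFT: F[k] = sum_n x[n] * exp(-2*pi*i*n*k/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * n * k / N) for n in range(N))
            for k in range(N)]

def dht(x):
    # Naive DHT with the cas kernel: H[k] = sum_n x[n] * cas(2*pi*n*k/N)
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N)
                        + math.sin(2 * math.pi * n * k / N))
                for n in range(N))
            for k in range(N)]

def dht_via_dft(x):
    # For real x: Re F[k] = sum x[n]*cos(...), Im F[k] = -sum x[n]*sin(...),
    # so H[k] = Re F[k] - Im F[k].
    return [F.real - F.imag for F in dft(x)]
```

Because every FFT decomposition ultimately computes these same sums, the identity is what allows an FFT algorithm to be translated term by term into an FHT counterpart operating on real data only.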