
    Frequency Domain Finite Field Arithmetic for Elliptic Curve Cryptography

    Efficient implementation of the number theoretic transform (NTT), also known as the discrete Fourier transform (DFT) over a finite field, has been studied actively for decades and has found many applications in digital signal processing. In 1971 Schönhage and Strassen proposed an NTT-based asymptotically fast multiplication method with asymptotic complexity O(m log m log log m) for multiplying m-bit integers or (m-1)st degree polynomials. Schönhage and Strassen's algorithm remained the asymptotically fastest multiplication algorithm until Fürer improved upon it in 2007. Unfortunately, both algorithms bear significant overhead due to the conversions between the time and frequency domains, which makes them impractical for small operands, e.g. fewer than 1000 bits in length, as used in many applications. With this work we investigate for the first time the practical application of the NTT to finite field multiplication, with an emphasis on elliptic curve cryptography (ECC). We present efficient parameters for practical application of NTT-based finite field multiplication to ECC, which requires key and operand sizes as short as 160 bits. With this work, for the first time, the use of NTT-based finite field arithmetic is proposed for ECC and shown to be efficient. We introduce an efficient algorithm, named DFT modular multiplication, for computing Montgomery products of polynomials in the frequency domain, which facilitates efficient multiplication in GF(p^m). Our algorithm performs the entire modular multiplication, including modular reduction, in the frequency domain, and thus eliminates costly back-and-forth conversions between the frequency and time domains. We show that, especially on computationally constrained platforms, multiplication of finite field elements may be achieved more efficiently in the frequency domain than in the time domain for operand sizes relevant to ECC. This work presents the first hardware implementation of a frequency domain multiplier suitable for ECC and the first hardware implementation of ECC in the frequency domain. We introduce a novel area/time efficient ECC processor architecture which performs all finite field arithmetic operations in the frequency domain, utilizing DFT modular multiplication over a class of Optimal Extension Fields (OEF). The proposed architecture achieves extension field modular multiplication in the frequency domain with only a linear number of base field GF(p) multiplications, in addition to a quadratic number of simpler operations such as addition and bitwise rotation. With its low area and high speed, the proposed architecture is well suited for ECC in small device environments such as smart cards and wireless sensor network nodes. Finally, we propose an adaptation of the Itoh-Tsujii algorithm to the frequency domain which can achieve efficient inversion in a class of OEFs relevant to ECC. This is the first time a frequency domain finite field inversion algorithm is proposed for ECC, and we believe our algorithm will be well suited for efficient constrained hardware implementations of ECC in affine coordinates.
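
    As a rough illustration of the arithmetic this thesis builds on (not of the thesis's own DFT modular multiplication algorithm, which avoids domain conversions altogether), the Python sketch below multiplies two small polynomials over GF(p) with a naive number theoretic transform. The prime p = 257, the transform length d = 8 and the polynomials are toy values chosen for readability.

        # Naive NTT-based polynomial multiplication over GF(p) (illustrative only;
        # p = 257 and d = 8 are toy parameters, and d must divide p - 1).
        p, d = 257, 8

        def find_root(p, d):
            # element of multiplicative order exactly d; d is a power of two here,
            # so it suffices to check g^d = 1 and g^(d/2) != 1
            for g in range(2, p):
                if pow(g, d, p) == 1 and pow(g, d // 2, p) != 1:
                    return g
            raise ValueError("no d-th root of unity found")

        def ntt(a, w, p):
            d = len(a)
            return [sum(a[i] * pow(w, i * j, p) for i in range(d)) % p for j in range(d)]

        def intt(A, w, p):
            d = len(A)
            w_inv, d_inv = pow(w, -1, p), pow(d, -1, p)   # modular inverses (Python 3.8+)
            return [d_inv * sum(A[j] * pow(w_inv, i * j, p) for j in range(d)) % p
                    for i in range(d)]

        # multiply (1 + 2x + 3x^2)(4 + 5x): zero-pad to length d so the cyclic
        # convolution equals the ordinary product
        f = [1, 2, 3, 0, 0, 0, 0, 0]
        g = [4, 5, 0, 0, 0, 0, 0, 0]
        w = find_root(p, d)
        F, G = ntt(f, w, p), ntt(g, w, p)
        print(intt([(x * y) % p for x, y in zip(F, G)], w, p))   # [4, 13, 22, 15, 0, 0, 0, 0]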

    The Entropy Conundrum: A Solution Proposal

    In 2004, physicist Mark Newman, along with biologist Michael Lachmann and computer scientist Cristopher Moore, showed that if electromagnetic radiation is used as a transmission medium, the most information-efficient format for a given 1-D signal is indistinguishable from blackbody radiation. Since many natural processes maximize the Gibbs-Boltzmann entropy, they should give rise to spectra indistinguishable from optimally efficient transmission. In 2008, computer scientist C.S. Calude and physicist K. Svozil proved that "Quantum Randomness" is not Turing computable. In 2013, academic scientist R.A. Fiorini confirmed Newman, Lachmann and Moore's result, creating an analogous example for a 2-D signal (image) as an application of CICT to pattern recognition and image analysis. Paradoxically, if you do not know the code used for a message, you cannot tell the difference between an information-rich message and a random jumble of letters. This is the entropy conundrum to solve. Even the most sophisticated instrumentation system is completely unable to reliably discriminate so-called "random noise" from any combinatorially optimized encoded message, which CICT calls "deterministic noise". The fundamental concept of entropy crosses many scientific and research areas but, unfortunately, even across so many different disciplines, scientists have not yet worked out a definitive solution to the fundamental problem of the logical relationship between human experience and knowledge extraction. Therefore, both the classic concept of entropy and that of system random noise should be revisited deeply at the theoretical and operational levels. A convenient CICT solution proposal is presented.

    Dynamic block encryption with self-authenticating key exchange

    One of the greatest challenges facing cryptographers is the mechanism used for key exchange. When secret data is transmitted, the chances are that there may be an attacker who will try to intercept and decrypt the message. Having done so, he/she might just gain advantage over the information obtained, or attempt to tamper with the message, thus misleading the recipient. Both cases are equally fatal and may cause great harm as a consequence. In cryptography, there are two commonly used methods of exchanging secret keys between parties. In the first method, symmetric cryptography, the key is sent in advance over some secure channel which only the intended recipient can read. The second method of key sharing is a public key exchange method, where each party has a private and a public key; the public key is shared and the private key is kept locally. In both cases, keys are exchanged between two parties. In this thesis, we propose a method whereby the risk of exchanging keys is minimised. The key is embedded in the encrypted text using a process that we call `chirp coding', and recovered by the recipient using a process that is based on correlation. The `chirp coding parameters' are exchanged between users by employing a USB flash memory retained by each user. If the keys are compromised they are still not usable, because an attacker can only have access to part of the key. Alternatively, the software can be configured to operate in a one-time parameter mode; in this mode, the parameters are agreed upon in advance and there is no parameter exchange during file transmission, except, of course, the key embedded in the ciphertext. The thesis also introduces a method of encryption which utilises dynamic blocks, where the block size is different for each block. Prime numbers are used to drive two random number generators: a Linear Congruential Generator (LCG), which takes in the seed and initialises the system, and a Blum Blum Shub (BBS) generator, which is used to generate random streams to encrypt messages, images or video clips, for example. In each case, the key created is text dependent and therefore changes as each message is sent. The scheme presented in this research is composed of five basic modules. The first module is the key generation module, where the key to be generated is message dependent. The second module, the encryption module, performs data encryption. The third module, the key exchange module, embeds the key into the encrypted text. Once this is done, the message is transmitted and the recipient uses the key extraction module to retrieve the key; finally, the decryption module is executed to decrypt the message and authenticate it. In addition, the message may be compressed before encryption and decompressed by the recipient after decryption using standard compression tools.
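
    The abstract names a Blum Blum Shub (BBS) generator as the source of the encrypting keystream. The Python sketch below shows only that basic BBS mechanism (square modulo M = p*q and output the least significant bit) driving a simple XOR cipher; the primes, seed and XOR step are assumptions for illustration, and the thesis's message-dependent keys, LCG initialisation, chirp-coded key embedding and dynamic block sizes are not reproduced here.

        # Toy Blum Blum Shub (BBS) keystream driving a symmetric XOR cipher.
        def bbs_bits(p, q, seed, n_bits):
            # x_{i+1} = x_i^2 mod M, with M = p*q and p, q primes congruent to 3 mod 4;
            # output the least significant bit of each state
            assert p % 4 == 3 and q % 4 == 3
            m, x = p * q, seed % (p * q)
            for _ in range(n_bits):
                x = (x * x) % m
                yield x & 1

        def xor_with_keystream(data, p, q, seed):
            # encrypt or decrypt (the operation is symmetric) by XORing each byte
            # with 8 keystream bits
            bits = bbs_bits(p, q, seed, 8 * len(data))
            out = bytearray()
            for byte in data:
                k = 0
                for _ in range(8):
                    k = (k << 1) | next(bits)
                out.append(byte ^ k)
            return bytes(out)

        msg = b"dynamic block"
        ct = xor_with_keystream(msg, p=10007, q=10039, seed=2024)      # toy parameters
        assert xor_with_keystream(ct, p=10007, q=10039, seed=2024) == msg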

    Efficiently Processing Complex Queries in Sensor Networks


    Asymmetric Image Encryption based on Cipher Matrices

    In most cryptological methods, the encrypted data or ciphertext maintains the same statistics as the plaintext, whereas the matrix encryption method does not keep the statistics of individual ciphertext characters; it maintains only the statistics of blocks of characters of size m, where m is the size of the key matrix. One of the important features of a cipher matrix in the Residue Number System (RNS) is that it is highly difficult and time consuming to obtain its inverse by standard inversion algorithms. A matrix in RNS does not have all the eigenvalues defined in the complex field. The eigen factors of a matrix are defined as the irreducible factors of its characteristic equation (eigenfunction). All the above properties are also valid for cipher matrices in a Galois Field. The public key is generated by using two types of matrices: one of these is a self-invertible matrix or an orthonormal matrix in a Galois field, whereas the other is a diagonally dominant matrix. Matrix inversion is very difficult and time consuming when the size of the matrix and the modulo number are large. The computational overhead in the generalized Hill cipher can be reduced substantially by using self-invertible matrices, which also use less space than general invertible matrices. To overcome this problem, the modulus p is made very large, so that there are at least p^(n/2) possible matrices, making it extremely difficult for an intruder to find the key matrix. In this thesis several methods of generating self-invertible matrices are proposed. Orthogonal transforms are used in signal processing; modular orthogonal transforms such as the Walsh, Hadamard, Discrete Cosine, Discrete Sine and Discrete Fourier transforms have been used for image encryption, and orthogonal matrices can be used as asymmetric keys for encryption. In this work various methods of generating orthogonal matrices are proposed. Matrices having primitive polynomials as eigen factors are used, resulting in robust encryption. A novel operation called exponentiation, together with its inverse, is defined in this thesis, and all properties of this new operation are analyzed in Z_p. This operation is used for the encryption of images; the original image can be obtained by applying the same exponentiation operation. Chaotic sequence and chaotic signal generation are widely used in communication. A two-stage image encryption scheme using chaotic sequences is proposed in this work: the first stage of encryption uses a chaotic sequence generated in GF(p), and the second stage is carried out by one of the encryption methods discussed in the previous chapters. Standard images have been used for encryption during simulation. Keywords: Encryption, Decryption, Cipher matrix, Public key, Private key, Residue number system, Eigen function, Self-invertible matrix, Orthogonal, Galois Field, Exponentiation, Chaotic sequence
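
    One widely cited way to build a self-invertible matrix (A such that A*A = I mod p) is to choose an arbitrary sub-block A11 and assemble A = [[A11, I-A11], [I+A11, -A11]]. The Python sketch below uses toy values and this single construction only; the thesis proposes several generation methods, and this example is not claimed to reproduce any particular one. Because A is its own inverse, decrypting a Hill-style block reuses the same matrix with no inversion step.

        # Build a self-invertible matrix A (A*A = I mod p) from an arbitrary sub-block
        # A11, then use it as a Hill-style block cipher key (toy modulus and data).
        p = 251

        def self_invertible(A11, p):
            # A = [[A11, I - A11], [I + A11, -A11]] satisfies A*A = I (mod p)
            n = len(A11)
            I = [[int(i == j) for j in range(n)] for i in range(n)]
            A12 = [[(I[i][j] - A11[i][j]) % p for j in range(n)] for i in range(n)]
            A21 = [[(I[i][j] + A11[i][j]) % p for j in range(n)] for i in range(n)]
            A22 = [[(-A11[i][j]) % p for j in range(n)] for i in range(n)]
            return [A11[i] + A12[i] for i in range(n)] + [A21[i] + A22[i] for i in range(n)]

        A = self_invertible([[3, 7], [2, 9]], p)          # 4x4 self-invertible key matrix
        block  = [12, 34, 56, 78]                         # one plaintext block
        cipher = [sum(A[i][j] * block[j]  for j in range(4)) % p for i in range(4)]
        plain  = [sum(A[i][j] * cipher[j] for j in range(4)) % p for i in range(4)]
        assert plain == block    # decryption reuses the same matrix, no inversion needed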

    Nonlinear approximation with redundant multi-component dictionaries

    The problem of efficiently representing and approximating digital data is an open challenge, and it is of paramount importance for many applications. This dissertation focuses on the approximation of natural signals as an organized combination of mutually connected elements, preserving and at the same time benefiting from their inherent structure. This is done by decomposing a signal onto a multi-component, redundant collection of functions (a dictionary), built as the union of several subdictionaries, each of which is designed to capture a specific behavior of the signal. In this way, instead of representing signals as a superposition of sinusoids or wavelets, many alternatives are available. In addition, since the dictionaries we are interested in are overcomplete, the decomposition is non-unique. This gives us the possibility of adaptation: choosing, among many possible representations, the one which best fits our purposes. On the other hand, it also requires more complex approximation techniques, whose theoretical decomposition capacity and computational load have to be carefully studied. In general, we aim at representing a signal with few and meaningful components. If we are able to represent a piece of information by using only a few elements, it means that such elements capture its main characteristics, allowing us to compact the energy carried by the signal into the smallest number of terms. In this framework, the work also proposes analysis methods that take into account the a priori information available when decomposing a structured signal. Indeed, a natural signal is not only an array of numbers, but the expression of a physical event about which we usually have deep knowledge. Therefore, we claim that it is worth exploiting its structure, since this can be advantageous not only in helping the analysis process, but also in making the representation of the information more accessible and meaningful. The study of an adaptive image representation inspired and gave birth to this work, and we often refer to images and visual information throughout the dissertation. However, the proposed approximation setting extends to many different kinds of structured data, and examples are given involving videos and electrocardiogram signals. An important part of this work is constituted by practical applications: first of all we provide very interesting results for image and video compression; then we also face the problem of signal denoising; and, finally, promising achievements in the field of source separation are presented.
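
    A standard greedy method for decomposing a signal over a redundant dictionary is matching pursuit. The Python sketch below is an illustration only, not the dissertation's structured algorithms: it runs plain matching pursuit over a toy two-component dictionary made of unit-norm cosine atoms and spikes, so that a "smooth plus impulse" signal is captured by one atom from each subdictionary.

        # Plain matching pursuit over a redundant two-component dictionary
        # (unit-norm cosine atoms + spikes); toy sizes and signal.
        import numpy as np

        N = 64
        n = np.arange(N)
        cosines = [np.cos(np.pi * (n + 0.5) * k / N) for k in range(N)]
        dictionary = [a / np.linalg.norm(a) for a in cosines] + [np.eye(N)[i] for i in range(N)]

        def matching_pursuit(signal, dictionary, n_terms):
            residual = signal.astype(float)
            approx = np.zeros(len(signal))
            for _ in range(n_terms):
                # greedily pick the atom most correlated with the current residual
                best = max(dictionary, key=lambda atom: abs(np.dot(residual, atom)))
                coef = np.dot(residual, best)
                approx = approx + coef * best
                residual = residual - coef * best
            return approx, residual

        x = np.cos(np.pi * (n + 0.5) * 3 / N)     # smooth component (one cosine atom) ...
        x[20] += 2.0                              # ... plus an isolated impulse
        approx, residual = matching_pursuit(x, dictionary, n_terms=2)
        print(np.linalg.norm(residual))           # much smaller than np.linalg.norm(x)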

    Wave Propagation in Materials for Modern Applications

    In recent decades there has been growing interest in micro- and nanotechnology. Advances in nanotechnology give rise to new applications and new types of materials with unique electromagnetic and mechanical properties. This book is devoted to the modern methods in electrodynamics and acoustics which have been developed to describe wave propagation in these modern materials and nanodevices. The book consists of original works of leading scientists in the field of wave propagation who have produced new theoretical and experimental methods in this research field and obtained new and important results. The first part of the book consists of chapters with general mathematical methods and approaches to the problem of wave propagation. Special attention is given to advanced numerical methods fruitfully applied in the field of wave propagation. The second part of the book is devoted to the problems of wave propagation in newly developed metamaterials, micro- and nanostructures and porous media. In this part the interested reader will find important and fundamental results on electromagnetic wave propagation in media with negative refraction index and on electromagnetic imaging in devices based on such materials. The third part of the book is devoted to the problems of wave propagation in elastic and piezoelectric media. In the fourth part, works on the problems of wave propagation in plasma are collected. The fifth, sixth and seventh parts are devoted to the problems of wave propagation in media with chemical reactions, in nonlinear media and in dispersive media, respectively. Finally, in the eighth part of the book some experimental methods in wave propagation are considered. It should be emphasized that this book is not a textbook; it is important that the results combined in it are taken “from the desks of researchers”. Therefore, I am sure that in this book interested and actively working readers (scientists, engineers and students) will find many interesting results and new ideas.

    Center for Aeronautics and Space Information Sciences

    This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Sciences (CASIS) program. The topics covered are computer architecture, networking, and neural nets.

    Bayes meets Bach: applications of Bayesian statistics to audio restoration

    Memoryless nonlinear distortion can be present in audio signals all the way from recording to reproduction: poor-quality or amateurishly operated equipment, physically degraded media and low-quality reproducing devices are some examples where nonlinearities can naturally appear. Another quite common defect in old recordings is the long pulse, caused in general by the reproduction of discs with deep scratches or of severely degraded magnetic tapes. Such defects are characterized by an initial discontinuity in the waveform, followed by a low-frequency transient of long duration. In both cases audible artifacts can be created, causing an unpleasant experience for the listener. It is therefore important to develop techniques that mitigate such defects, having at hand only the degraded signal, in order to recover the original signal. In this thesis, techniques to deal with both problems are presented: the restoration of nonlinearly degraded recordings is tackled in a Bayesian context, considering both autoregressive models and sparsity in the DCT domain for the original signal, as well as through a deterministic solution also based on sparsity; for the suppression of long pulses, a parametric approach is revisited with the addition of an efficient initialization procedure, and a nonparametric modeling via Gaussian processes is also presented.
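
    As a toy illustration of the "sparsity in the DCT domain" ingredient mentioned above (not of the thesis's Bayesian estimators or Gaussian-process pulse models), the Python sketch below denoises a signal that is exactly sparse in an orthonormal DCT-II basis by soft-thresholding its DCT coefficients; the signal, noise level and threshold are assumed values for the example.

        # Soft-thresholding in an orthonormal DCT-II basis (toy sparsity-based denoising).
        import numpy as np

        def dct_matrix(N):
            # rows of C are the orthonormal DCT-II basis vectors, so C @ C.T = I
            k = np.arange(N)[:, None]
            n = np.arange(N)[None, :]
            C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
            C[0] *= 1.0 / np.sqrt(N)
            C[1:] *= np.sqrt(2.0 / N)
            return C

        N = 256
        C = dct_matrix(N)
        rng = np.random.default_rng(0)
        clean = 2.0 * C[10] + 1.5 * C[40]                 # exactly sparse in this basis
        degraded = clean + 0.1 * rng.standard_normal(N)   # additive noise only

        coeffs = C @ degraded                                              # analysis
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - 0.3, 0.0)   # soft threshold
        restored = C.T @ shrunk                                            # synthesis
        print(np.linalg.norm(degraded - clean), np.linalg.norm(restored - clean))
        # the second error is considerably smaller than the first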