10 research outputs found

    VLSI architectures for high speed Fourier transform processing


    The Unification and Decomposition of Processing Structures Using Lattice Theoretic Methods

    The purpose of this dissertation is to demonstrate that lattice theoretic methods can be used to decompose and unify computational structures over a variety of processing systems. The unification arguments provide a better understanding of the intricacies of processing system decomposition. Since abstract algebraic techniques are used, the decomposition process is systematized, which makes it conducive to the use of computers as tools for decomposition. A general algorithm using the lattice theoretic method is developed to examine the structures, and therefore the decomposition properties, of integer and polynomial rings. Two fundamental representations, the Sino-correspondence and the weighted radix representation, are derived for integer and polynomial structures and are shown to be a natural result of the decomposition process. They are used in developing systematic methods for decomposing discrete Fourier transforms and discrete linear systems. That is, fast Fourier transforms and partial fraction expansions of linear systems are a result of the natural representation derived using the lattice theoretic method. The discrete Fourier transform is derived from a lattice theoretic base, demonstrating its independence of the continuous form and of the field over which it is computed. The same properties are demonstrated for error control codes based on polynomials. Partial fraction expansions are shown to be independent of the concept of a derivative for repeated roots and of the field used to implement them.
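    As a concrete illustration of how a weighted radix representation of the index set yields a fast Fourier transform, the Python/numpy sketch below writes each index as n = 2m or n = 2m + 1 and recurses on the two halves. This is the standard radix-2 Cooley-Tukey split, shown only as an example of the kind of decomposition the abstract refers to, not the dissertation's lattice theoretic derivation; the function names are illustrative.

```python
import numpy as np

def dft_direct(x):
    """Reference O(N^2) discrete Fourier transform."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(n, n) / len(x)) @ x

def fft_radix_split(x):
    """Radix-2 FFT obtained by writing each index in the weighted radix
    form n = 2*m or n = 2*m + 1 and splitting the sum accordingly."""
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_radix_split(x[0::2])   # sub-transform over even-indexed samples
    odd = fft_radix_split(x[1::2])    # sub-transform over odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(fft_radix_split(x), dft_direct(x))
```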

    Real time realization concepts of large adaptive filters


    Effects of Fixed Point FFT Implementation of Wireless LAN

    With the rapid growth of digital wireless communication in recent years, the need for high speed mobile data transmission has increased. New modulation techniques are being implemented to keep up with the demand for more communication capacity. Processing power has increased to a point where orthogonal frequency division multiplexing (OFDM) has become feasible and economical. Since many wireless communication systems being developed use OFDM, it is a worthwhile research topic. Examples of applications using OFDM include digital subscriber line (DSL), Digital Audio Broadcasting (DAB), high definition television (HDTV) broadcasting, and IEEE 802.11 (the wireless networking standard). OFDM is a strong candidate and has been suggested or standardized in high speed communication systems. This thesis analyzes the factors that affect OFDM performance. The performance of OFDM was assessed by computer simulations performed using Matlab. It was simulated under additive white Gaussian noise (AWGN) channel conditions for different modulation schemes, such as binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), 16-quadrature amplitude modulation (16-QAM), and 64-quadrature amplitude modulation (64-QAM), which are used in wireless LAN for achieving high data rates. One key component in OFDM based systems is the inverse fast Fourier transform/fast Fourier transform (IFFT/FFT) computation, which performs the modulation/demodulation efficiently. This block consumes large resources in terms of computational power. This thesis analyzes the effect of different IFFT/FFT implementations on the performance of an OFDM communication system. Here a 64-point IFFT/FFT is used. The FFT is a complex function whose computational accuracy, hardware size and processing speed depend on the type of arithmetic format used to implement it. Due to the non-linearity of the FFT, its computational accuracy is not easy to calculate theoretically. The simulations carried out here measure the effects of a fixed point FFT on the performance of OFDM. A comparison has been made between the bit error rate of OFDM using a fixed point IFFT/FFT and a floating point IFFT/FFT. Simulation tests were made for different integer part lengths and fractional part lengths, limiting the input word length to 16 bits, to find the combination of integer and fractional part lengths which achieves the best bit error rate (BER) performance with respect to floating point performance. Extensive computer simulations show that fixed point computation provides results very close to floating point if the delay parameter is suitably selected.
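    A minimal Python/numpy sketch of the kind of comparison described above is given below (the thesis itself used Matlab). It quantizes the twiddle factors and every butterfly output of a 64-point radix-2 FFT to a signed fixed-point format and measures the deviation from the floating point result. The 8-bit integer / 8-bit fractional split within a 16-bit word is just one example combination, and re-quantizing at every stage is an assumption about the modeled datapath rather than the thesis's exact model.

```python
import numpy as np

def quantize(x, int_bits, frac_bits):
    """Round to a signed fixed-point grid with the given integer and
    fractional part lengths; values outside the range saturate."""
    scale = 2.0 ** frac_bits
    lo, hi = -2.0 ** (int_bits - 1), 2.0 ** (int_bits - 1) - 1.0 / scale
    return np.clip(np.round(x * scale) / scale, lo, hi)

def qc(z, ib, fb):
    """Quantize real and imaginary parts of a complex array."""
    return quantize(z.real, ib, fb) + 1j * quantize(z.imag, ib, fb)

def fft_fixed_point(x, int_bits, frac_bits):
    """Iterative radix-2 FFT that re-quantizes the twiddle factors and
    every butterfly output, approximating a fixed-point datapath."""
    N = len(x)
    stages = int(np.log2(N))
    rev = [int(format(i, f'0{stages}b')[::-1], 2) for i in range(N)]  # bit-reversed order
    X = qc(x[rev], int_bits, frac_bits)
    for s in range(1, stages + 1):
        m = 2 ** s
        w = qc(np.exp(-2j * np.pi * np.arange(m // 2) / m), int_bits, frac_bits)
        for k in range(0, N, m):
            top = X[k:k + m // 2].copy()
            bot = X[k + m // 2:k + m] * w
            X[k:k + m // 2] = qc(top + bot, int_bits, frac_bits)
            X[k + m // 2:k + m] = qc(top - bot, int_bits, frac_bits)
    return X

# One 64-carrier QPSK-like block: compare fixed-point and floating-point FFTs.
rng = np.random.default_rng(1)
sym = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
err = np.abs(fft_fixed_point(sym, 8, 8) - np.fft.fft(sym))
print("max |fixed - float| deviation:", err.max())
```

    Sweeping the integer/fractional split under a fixed 16-bit word length and feeding the outputs into a BER simulation would reproduce the trade-off study the abstract describes.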

    A new recursive high-resolution parametric method for power spectral density estimation

    Thesis (M.Eng.Sc.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 199

    Theory and realization of novel algorithms for random sampling in digital signal processing

    Random sampling is a technique which overcomes the alias problem in regular sampling. The randomization, however, destroys the symmetry property of the transform kernel of the discrete Fourier transform. Hence, when transforming a randomly sampled sequence to its frequency spectrum, the fast Fourier transform cannot be applied and the computational complexity is N^2. The objectives of this research project are: (1) To devise sampling methods for random sampling such that computation may be reduced while the anti-alias property of random sampling is maintained: Two methods of inserting limited regularities into the randomized sampling grids are proposed. They are parallel additive random sampling and hybrid additive random sampling, both of which can save at least 75% of the multiplications required. The algorithms also lend themselves to implementation on a multiprocessor system, which will further enhance the speed of the evaluation. (2) To study the auto-correlation sequence of a randomly sampled sequence as an alternative means to confirm its anti-alias property: The anti-alias property of the two proposed methods can be confirmed by using convolution in the frequency domain. However, the same conclusion is also reached by analysing in the spatial domain the auto-correlation of such sample sequences. A technique to evaluate the auto-correlation sequence of a randomly sampled sequence with a regular step size is proposed. The technique may also serve as an algorithm to convert a randomly sampled sequence to a regularly spaced sequence having a desired Nyquist frequency. (3) To provide a rapid spectral estimation using a coarse kernel: The approximate method proposed by Mason in 1980, which trades accuracy for speed of computation, is introduced to make random sampling more attractive. (4) To suggest possible applications for random and pseudo-random sampling: To fully exploit its advantages, random sampling has been adopted in measurement instruments where computing a spectrum is either minimal or not required. Such applications in instrumentation are easily found in the literature. In this thesis, two applications in digital signal processing are introduced. (5) To suggest an inverse transformation for random sampling so as to complete a two-way process and to broaden its scope of application. Apart from the above, a case study of realizing the prime factor algorithm with regular sampling in a transputer network is given in Chapter 2, and a rough estimation of the signal-to-noise ratio for a spectrum obtained from random sampling is found in Chapter 3. Although random sampling is alias-free, problems in computational complexity and noise prevent it from being adopted widely in engineering applications. In the conclusions, the criteria for adopting random sampling are put forward and the directions for its development are discussed.
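    A minimal numerical illustration of the anti-alias property is sketched below in Python/numpy. Additive random sampling is modeled simply as a nominal interval plus independent uniform jitter, and all parameters (128 Hz mean rate, 90 Hz test tone, 512 samples) are illustrative rather than taken from the thesis; the direct O(N^2) transform reflects the cost penalty noted above.

```python
import numpy as np

# Illustrative parameters: mean sampling rate 128 Hz, so regular sampling has
# a 64 Hz Nyquist limit and a 90 Hz tone would alias to 128 - 90 = 38 Hz.
f_sig, mean_dt, N = 90.0, 1.0 / 128.0, 512
rng = np.random.default_rng(0)

# Simple model of additive random sampling: nominal step plus uniform jitter.
t_rand = np.cumsum(mean_dt + rng.uniform(-0.4, 0.4, N) * mean_dt)
t_reg = np.arange(1, N + 1) * mean_dt

f_axis = np.arange(0.0, 128.0, 0.5)

def spectrum(t):
    """Direct O(N^2) transform of sin(2*pi*f_sig*t) sampled at times t;
    the FFT cannot be used because the sampling grid is irregular."""
    x = np.sin(2 * np.pi * f_sig * t)
    return np.abs(np.exp(-2j * np.pi * np.outer(f_axis, t)) @ x) / len(x)

S_rand, S_reg = spectrum(t_rand), spectrum(t_reg)
at = lambda S, f: S[np.argmin(np.abs(f_axis - f))]

print("at 90 Hz (true tone): random %.3f  regular %.3f" % (at(S_rand, 90), at(S_reg, 90)))
print("at 38 Hz (alias):     random %.3f  regular %.3f" % (at(S_rand, 38), at(S_reg, 38)))
# Regular sampling shows a full-strength alias at 38 Hz; with random
# sampling the 38 Hz component collapses toward the noise floor.
```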

    Discrete Harmonic Analysis. Representations, Number Theory, Expanders and the Fourier Transform

    This self-contained book introduces readers to discrete harmonic analysis with an emphasis on the Discrete Fourier Transform and the Fast Fourier Transform on finite groups and finite fields, as well as their noncommutative versions. It also features applications to number theory, graph theory, and representation theory of finite groups. Beginning with elementary material on algebra and number theory, the book then delves into advanced topics from the frontiers of current research, including spectral analysis of the DFT, spectral graph theory and expanders, representation theory of finite groups and multiplicity-free triples, Tao's uncertainty principle for cyclic groups, harmonic analysis on GL(2,Fq), and applications of the Heisenberg group to DFT and FFT. With numerous examples, figures, and over 160 exercises to aid understanding, this book will be a valuable reference for graduate students and researchers in mathematics, engineering, and computer science.

    Models and analysis of vocal emissions for biomedical applications

    This book of Proceedings collects the papers presented at the 3rd International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, MAVEBA 2003, held 10-12 December 2003 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contacts between specialists active in research and industrial developments in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    Malfunction of process instruments and its detection using a process control computer

    From an initial concern with investigating ways in which the process control computer could learn, the project was narrowed down to instrument malfunction detection. Preliminary surveys in industry were made, and from these general ideas of the modes of failure of some instruments were obtained. A wider survey of instruments in different environmental conditions followed. Failure information and reliability data on about 9,500 instruments, representing a total of about 4,500 instrument years’ operating time, were obtained. [Continues.]