14 research outputs found
Discrete Wavelet Transforms
The discrete wavelet transform (DWT) algorithms have a firm position in the processing of signals in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is increasingly used to tackle more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes the progress in hardware implementations of the DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low-complexity implementation of CQF wavelets and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended to be a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
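To make the lifting construction mentioned above concrete, the following is a minimal sketch (illustrative, not taken from the book) of a single-level Haar transform built from the split/predict/update lifting steps:

```python
def haar_lifting_forward(x):
    """One level of the Haar DWT via lifting: split, predict, update."""
    # split: separate even- and odd-indexed samples
    even, odd = x[0::2], x[1::2]
    # predict: detail coefficients = odd samples minus their even-sample prediction
    d = [o - e for e, o in zip(even, odd)]
    # update: smooth coefficients = even samples plus half the detail (preserves the mean)
    s = [e + di / 2 for e, di in zip(even, d)]
    return s, d

def haar_lifting_inverse(s, d):
    """Invert by running the lifting steps backwards with opposite signs."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Because the inverse simply reverses each lifting step with the sign flipped, perfect reconstruction is guaranteed by construction, which is one reason lifting is attractive for the hardware implementations surveyed in Part I.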
Seismic Correction in the Wavelet Domain
This thesis summarises novel approaches and methods in the wavelet domain
employed and published in the literature by the author for the correction and
processing of time-series data from recorded seismic events, obtained from strong
motion accelerographs. Historically, the research first developed methods to de-convolve the
instrument response from legacy analogue strong-motion instruments, of which there
are a large number. This made available better estimates of the acceleration
ground motion ahead of the more problematic part of the research: obtaining
ground velocities and displacements. The characteristics of legacy analogue strong-motion
instruments are unfortunately in most cases not available, making it difficult
to de-couple the instrument response. Essentially this is a system identification
problem, presented and summarised herein with solutions that are robust to this
lack of instrument data. This was followed by the more fundamental and problematic
part of the research: recovering the velocity and displacement from the
recorded data. In all cases the instruments are tri-axial, i.e. they record translation only. This is a
limiting factor and leads to distortions manifest by dc shifts in the recorded data as a
consequence of the instrument pitching, rolling and yawing during seismic events.
These distortions are embedded in the translation acceleration time–series, their
contributions having been recorded by the same tri-axial sensors. In the literature this
is termed ‘baseline error’ and it effectively prevents meaningful integration to
velocity and displacement. Sophisticated methods do exist, which recover estimates of
velocity and displacement, but these require a good measure of expertise and do not
recover all the possible information from the recorded data. A novel, automated
wavelet transform method developed by the author and published in the earthquake
engineering literature is presented. This surmounts the problem of obtaining the
velocity and displacement and in addition recovers a low-frequency pulse called
the ‘fling’, the associated displacement ‘fling-step’ and the form of the baseline error, all
inferred in the literature but hitherto never recovered. Once the acceleration fling
pulse is recovered meaningful integration becomes a reality. However, the necessity
of developing novel algorithms in order to recover important information emphasises
the weakness of modern digital instruments in that they are all tri-axial rather than sext-axial
instruments.
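The severity of the baseline error described above is easy to demonstrate numerically: a constant offset in the recorded acceleration grows linearly in the integrated velocity and quadratically in the displacement. A minimal illustration (the offset and duration are hypothetical values, not figures from the thesis):

```python
import numpy as np

def integrate_with_baseline(offset=0.001, dt=0.01, duration=20.0):
    """Integrate a record containing only a small DC baseline error.

    offset: spurious constant acceleration in m/s^2 (hypothetical value)
    Returns the final (spurious) velocity and displacement.
    """
    t = np.arange(0.0, duration, dt)
    acc = np.full_like(t, offset)   # pure baseline error, no true ground motion
    vel = np.cumsum(acc) * dt       # velocity grows linearly: offset * t
    disp = np.cumsum(vel) * dt      # displacement grows quadratically: offset * t^2 / 2
    return vel[-1], disp[-1]
```

Here a 1 mm/s² offset, negligible in the acceleration trace itself, accumulates to about 0.02 m/s of spurious velocity and 0.2 m of spurious displacement after only 20 s, which is why meaningful integration is impossible until the baseline error is identified and removed.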
Channel estimation techniques for filter bank multicarrier based transceivers for next generation of wireless networks
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the degree of Master of Science in Engineering (Electrical and Information Engineering), August 2017.
The fourth generation (4G) of wireless communication systems is designed based on the principles of cyclic prefix orthogonal frequency division multiplexing (CP-OFDM), where the cyclic prefix (CP) is used to combat inter-symbol interference (ISI) and inter-carrier interference (ICI) in order to achieve higher data rates in comparison to the previous generations of wireless networks. Various filter bank multicarrier (FBMC) systems have been considered as potential waveforms for the fast-emerging next generation (xG) of wireless networks (especially the fifth generation (5G) networks). Some examples of the considered waveforms are orthogonal frequency division multiplexing with offset quadrature amplitude modulation based filter bank (OFDM/OQAM), universal filtered multicarrier (UFMC), bi-orthogonal frequency division multiplexing (BFDM) and generalized frequency division multiplexing (GFDM). In perfect reconstruction (PR) or near perfect reconstruction (NPR) filter bank designs, these FBMC waveforms adopt well-designed prototype filters (used for designing the synthesis and analysis filter banks) so as to either replace or minimize the CP usage of 4G networks, providing higher spectral efficiency for an overall increase in data rates. Accurate design of the FIR low-pass prototype filter in NPR filter banks results in minimal signal distortion, making the analysis filter bank a time-reversed version of the corresponding synthesis filter bank.
However, in non-perfect reconstruction (Non-PR) the analysis filter bank is not directly a time-reversed version of the corresponding synthesis filter bank as the prototype filter impulse response for this system is formulated (in this dissertation) by the introduction of randomly generated errors. Hence, aliasing and amplitude distortions are more prominent for Non-PR.
Channel estimation (CE) is used to predict the behaviour of the frequency selective channel and is usually adopted to ensure excellent reconstruction of the transmitted symbols. CE techniques can be broadly classified as pilot based, semi-blind and blind schemes. In this dissertation, two linear pilot based CE techniques, namely least square (LS) and linear minimum mean square error (LMMSE), and three adaptive channel estimation schemes, namely least mean square (LMS), normalized least mean square (NLMS) and recursive least square (RLS), are presented, analyzed and documented. These are implemented while exploiting the near-orthogonality properties of offset quadrature amplitude modulation (OQAM) to mitigate the effects of interference for two filter bank waveforms (i.e. OFDM/OQAM and GFDM/OQAM) for the next generation of wireless networks, assuming conditions of both NPR and Non-PR in slow and fast frequency selective Rayleigh fading channels. Results obtained from computer simulations showed that the channel estimation schemes performed better in an NPR filter bank system than in Non-PR filter banks. The lower performance of the Non-PR system is due to the amplitude distortion and aliasing introduced by the random errors used to design its prototype filters. It can be concluded that the RLS, NLMS, LMS, LMMSE and LS channel estimation schemes offered the best normalized mean square error (NMSE) and bit error rate (BER) performances, in decreasing order, for both waveforms under both NPR and Non-PR filter banks.
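As a rough illustration of two of the estimator families compared above, the sketch below shows a least-squares (LS) pilot estimate and a per-subcarrier LMS update. It is a simplified scalar frequency-domain model, not the dissertation's FBMC implementation; the pilot structure, step size and iteration count are illustrative assumptions:

```python
import numpy as np

def ls_channel_estimate(y_pilot, x_pilot):
    """LS estimate at pilot subcarriers: H_hat = Y / X (one division per subcarrier)."""
    return y_pilot / x_pilot

def lms_channel_estimate(y, x, mu=0.05, n_iter=200):
    """Per-subcarrier LMS: iteratively adapt H_hat so that H_hat * x tracks y.

    mu (step size) and n_iter are illustrative choices; with unit-modulus
    pilots the estimate converges geometrically at rate (1 - mu).
    """
    h = np.zeros_like(y)
    for _ in range(n_iter):
        e = y - h * x                 # a-priori estimation error
        h = h + mu * np.conj(x) * e   # stochastic-gradient update
    return h
```

In the noiseless case the LS estimate is exact at the pilot positions; the adaptive schemes (LMS, NLMS, RLS) instead trade convergence speed and complexity for tracking ability in the fast-fading channels considered in the dissertation.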
Keywords: Channel estimation, Filter bank, OFDM/OQAM, GFDM/OQAM, NPR, Non-PR, 5G, Frequency selective channel.
The Telecommunications and Data Acquisition Report
This quarterly publication provides archival reports on developments in programs in space communications, radio navigation, radio science, and ground-based radio and radar astronomy. It reports on activities of the Deep Space Network (DSN) in planning, supporting research and technology, implementation, and operations. Also included are standardization activities at the Jet Propulsion Laboratory for space data and information systems.
Perceptual models in speech quality assessment and coding
The ever-increasing demand for good communications/toll-quality
speech has created renewed interest in the
perceptual impact of rate compression. Two general areas are
investigated in this work, namely speech quality assessment
and speech coding.
In the field of speech quality assessment, a model is
developed which simulates the processing stages of the
peripheral auditory system. At the output of the model a
"running" auditory spectrum is obtained. This represents
the auditory (spectral) equivalent of any acoustic sound such
as speech. Auditory spectra from coded speech segments serve
as inputs to a second model. This model simulates the
information centre in the brain which performs the speech
quality assessment. [Continues.]
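Peripheral auditory models of this kind typically warp the acoustic spectrum onto a critical-band scale before further processing. As an illustration (a standard approximation, not necessarily the exact mapping used in this thesis), Zwicker's Hertz-to-Bark conversion is:

```python
import math

def hz_to_bark(f_hz):
    """Zwicker's approximation of the Bark critical-band scale.

    Maps an acoustic frequency in Hz onto the roughly 24 critical
    bands of the peripheral auditory system; 1 kHz falls near 8.5 Bark.
    """
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)
```

Energies grouped into such bands, tracked frame by frame, give a "running" auditory spectrum of the kind the abstract describes.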
Efficient compression of motion compensated residuals
Wavelet Theory
The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor’s personal interest is in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.
Speech coding at medium bit rates using analysis by synthesis techniques
Recent Advances in Signal Processing
Signal processing is a critical element of most new technological inventions and challenges in a variety of applications across science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.