A Novel Approach for Adaptive Signal Processing
Adaptive linear predictors have been used extensively in practice in a wide variety of forms. In the main, their theoretical development is based upon the assumption of stationarity of the signals involved, particularly with respect to the second-order statistics. On this basis, the well-known normal equations can be formulated. If high-order statistical stationarity is assumed, then the equivalent normal equations involve high-order signal moments. In either case, the cross moments (second or higher order) are needed. This renders the adaptive prediction procedure non-blind. A novel procedure for blind adaptive prediction has been proposed, and substantial implementation work has been carried out in our contributions over the past year. The approach is based upon a suitable interpretation of blind equalization methods that satisfy the constant modulus property, and it deviates significantly from standard prediction methods. These blind adaptive algorithms are derived by formulating Lagrange equivalents from mechanisms of constrained optimization. In this report, other new update algorithms are derived from the fundamental concepts of advanced system identification to carry out the proposed blind adaptive prediction. The results of the work can be extended to a number of control-related problems, such as disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. The applications implemented are in speech processing, such as coding and synthesis. Simulations are included to verify the novel modelling method.
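The constant-modulus property the report builds on can be illustrated with the textbook constant modulus algorithm (CMA), which adapts a filter to minimise E[(|y|² − R)²]. This is a generic equalizer sketch, not the report's blind prediction algorithm; the signal, channel and step size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK samples (constant modulus |x| = 1) through a mildly dispersive channel
x = np.exp(1j * (rng.integers(0, 4, 5000) * 2 + 1) * np.pi / 4)
r = np.convolve(x, [1.0, 0.25 + 0.1j])[: len(x)]

# CMA: minimise E[(|y|^2 - R)^2]; for unit-modulus QPSK, R = E|x|^4 / E|x|^2 = 1
n_taps, mu = 5, 1e-3
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                      # centre-spike initialisation

for k in range(n_taps, len(r)):
    u = r[k - n_taps:k][::-1]             # most recent sample first
    y = w @ u                             # filter output
    e = y * (np.abs(y) ** 2 - 1.0)        # gradient term of the CM cost
    w -= mu * e * np.conj(u)              # stochastic gradient step

# after adaptation, |y| should cluster near the constant modulus 1
y_tail = np.array([w @ r[k - n_taps:k][::-1] for k in range(len(r) - 500, len(r))])
print(np.mean(np.abs(y_tail)), np.std(np.abs(y_tail)))
```

The point of the update is that it never uses a reference signal, only the modulus constraint, which is what makes the adaptation blind.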
Channel compensation for speaker recognition systems
This thesis addresses the problem of how best to remedy different types of channel distortion in speech that is to be used in automatic speaker recognition and verification systems. In automatic speaker recognition, a person's voice is analysed by a machine and the person's identity is determined by comparing speech features to a known set of speech features. In automatic speaker verification, a person claims an identity and the machine determines whether that claimed identity is correct or whether the person is an impostor. Channel distortion occurs whenever information is sent electronically through any type of channel, whether a basic wired telephone channel or a wireless channel. The types of distortion that can corrupt the information include time-variant or time-invariant filtering and the addition of 'thermal noise'; both can cause varying degrees of error in the information being received and analysed. The experiments presented in this thesis investigate the effects of channel distortion on average speaker recognition rates and test the effectiveness of various channel compensation algorithms designed to mitigate those effects. The speaker recognition system was represented by a basic recognition algorithm consisting of speech analysis, extraction of feature vectors in the form of Mel-cepstral coefficients, and a classification stage based on the minimum distance rule.
Two types of channel distortion were investigated:
• Convolutional (lowpass filtering) effects
• Addition of white Gaussian noise

Three methods of channel compensation were tested:
• Cepstral Mean Subtraction (CMS)
• RelAtive SpecTrAl (RASTA) processing
• Constant Modulus Algorithm (CMA)

The results showed that, for both CMS and RASTA processing, filtering at low cutoff frequencies (3 or 4 kHz) produced improvements in the average speaker recognition rates compared to speech with no compensation. The levels of improvement due to RASTA processing were higher than those achieved with the CMS method. Neither the CMS nor the RASTA method was able to improve the accuracy of the speaker recognition system for cutoff frequencies of 5 kHz, 6 kHz or 7 kHz. In the case of noisy speech, all methods analysed were able to compensate at high SNRs of 40 dB and 30 dB, and only RASTA processing was able to compensate and improve the average recognition rate for speech corrupted with a high level of noise (SNRs of 20 dB and 10 dB).
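Of the three compensation methods compared above, CMS is the simplest to illustrate: a time-invariant convolutional channel becomes an additive constant in the cepstral domain (convolution turns into addition after the log), so subtracting the per-utterance mean cepstrum removes it. A minimal sketch on synthetic frames, standing in for the Mel-cepstral features the real system extracted from speech:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "cepstral" feature matrix: (n_frames, n_coefficients)
frames = rng.normal(size=(200, 13))

# a fixed convolutional channel adds a constant offset to every frame's cepstrum
channel_bias = rng.normal(size=13)
distorted = frames + channel_bias

# CMS: subtract the mean cepstrum of the utterance from every frame
compensated = distorted - distorted.mean(axis=0)

# the channel offset cancels exactly; what remains is the mean-removed clean cepstra
residual = np.abs(compensated - (frames - frames.mean(axis=0))).max()
print(residual)
```

Because the subtraction also removes the long-term average of the clean speech, CMS trades a small amount of speaker-dependent information for channel invariance.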
Digital signal processing for coherent optical fibre communications
In this thesis investigations were performed into digital signal processing (DSP)
algorithms for coherent optical fibre transmission systems, which provide improved
performance with respect to conventional systems and algorithms. Firstly, an
overview of coherent detection and coherent transmission systems is given.
Experimental investigations were then performed into the performance of digital
backpropagation for mitigating fibre nonlinearities in a dual-polarization quadrature
phase shift keying (DP-QPSK) system over 7780 km and a dual-polarization 16-
level quadrature amplitude modulation (DP-QAM16) system over 1600 km. It is
noted that significant improvements in performance may be achieved for a nonlinear
step-size greater than one span. An approximately exponential relationship was
found between the improvement in Q-factor and the number of required
complex multipliers.
DSP algorithms for polarization-switched quadrature phase shift keying (PS-QPSK)
are then investigated. A novel two-part equalisation algorithm is proposed which
provides singularity-free convergence and blind equalisation of PS-QPSK. This
algorithm is characterised and its application to wavelength division multiplexed
(WDM) transmission systems is discussed.
The thesis concludes with an experimental comparison between a PS-QPSK
transmission system and a conventional DP-QPSK system. For a 42.9 Gb/s WDM
system, the use of PS-QPSK enabled an increase of reach of more than 30%. The
resultant reach of 13,640 km was, at the time of publication, the longest transmission
distance reported for 40 Gb/s transmission over an uncompensated link with standard
fibre and optical amplification.
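The digital backpropagation studied above can be sketched for a single polarisation: distort a waveform with a split-step fibre model, then run the same steps in reverse order with the signs of dispersion and nonlinearity flipped, so the receiver-side DSP inverts the channel model. The fibre parameters below are illustrative, not those of the reported experiments:

```python
import numpy as np

def forward_step(a, disp, gamma, dz):
    """One split-step: dispersion in the frequency domain, then nonlinear phase."""
    a = np.fft.ifft(np.fft.fft(a) * disp)
    return a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)

def backward_step(a, disp, gamma, dz):
    """DBP step: undo the nonlinear phase, then undo the dispersion (reverse order)."""
    a = a * np.exp(-1j * gamma * np.abs(a) ** 2 * dz)
    return np.fft.ifft(np.fft.fft(a) * np.conj(disp))

rng = np.random.default_rng(2)
n, dt = 1024, 25e-12
tx = np.exp(1j * rng.choice(np.pi / 4 * np.array([1, 3, 5, 7]), n))  # QPSK-like samples

beta2, gamma, dz, steps = -2.1e-26, 1.3e-3, 1.0e3, 80  # illustrative SSMF-like numbers
w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
disp = np.exp(0.5j * beta2 * w ** 2 * dz)              # per-step dispersion operator

rx = tx.copy()
for _ in range(steps):
    rx = forward_step(rx, disp, gamma, dz)             # distorted "received" waveform

eq = rx.copy()
for _ in range(steps):
    eq = backward_step(eq, disp, gamma, dz)            # backpropagate the inverse model

print(np.max(np.abs(eq - tx)))
```

In this noiseless, deterministic model the inversion is exact; in practice noise and model mismatch limit the gain, and the step size (as noted above) sets the complexity/performance trade-off.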
High-speed optical fibre transmission using advanced modulation formats
The rapid growth in interactive bandwidth-hungry services demands ever higher
capacity at various stages of the optical network, leading to a potential capacity exhaust,
termed the capacity crunch. The main aim of the research work described in this thesis
was to help solve the potential capacity crunch by exploring techniques to increase the
data rate, spectral efficiency and reach of optical fibre systems. The focus was on the
use of advanced signal modulation formats, including optical time-division multiplexing
(OTDM), quadrature phase shift keying (QPSK), and 16-state quadrature amplitude
modulation (QAM16). QPSK and QAM16 modulation formats were studied in
combination with coherent detection and digital signal processing (DSP) for the
compensation of transmission impairments. In addition, return-to-zero (RZ) pulses
and nonlinearity compensation (NLC) through DSP were explored to increase the
tolerance of coherently detected signals towards fibre nonlinearity.
Initially, to maximise the bit-rate, research was focused on the study of OTDM
transmission at 80Gbit/s, with the aim of optimising the phase difference between the
adjacent OTDM channels. A new technique to achieve bit-wise phase control using a
phase-stabilised fibre interferometer was proposed. Faced with a limited fibre capacity,
the need to maximise the spectral efficiency became paramount, and thus the need to
use phase, amplitude and polarisation domains for signal transmission. In combination
with coherent detection the research focused on the performance of optical fibre systems
using QPSK and QAM16 modulation formats, including their generation, transmission
and detection in single-channel and WDM regimes. This included the study of the
impact of pulse shapes, and the mitigation of linear and nonlinear transmission
impairments with receiver-based DSP at bit-rates ranging from 42.7 to 224Gbit/s. The
technique developed for bit-wise phase control in OTDM was successfully used to
demonstrate a new method for QAM16 signal generation. The longest transmission
distances (up to 10160km in 112Gbit/s QPSK, 4240km in 112Gbit/s QAM16, and
2000km in 224Gbit/s QAM16) were achieved with the use of NLC and RZ pulses. The
efficiency of these two techniques is explored through a comprehensive set of
single-channel and WDM transmission experiments. The results can be used in the
design of future optical transmission systems.
Adaptive antenna array beamforming using a concatenation of recursive least square and least mean square algorithms
In recent years, adaptive or smart antennas have become a key component of various wireless applications, such as radar, sonar and cellular mobile communications, including worldwide interoperability for microwave access (WiMAX). They lead to an increase in the detection range of radar and sonar systems, and in the capacity of mobile radio communication systems. These antennas are used as spatial filters for receiving the desired signals coming from a specific direction or directions, while minimizing the reception of unwanted signals emanating from other directions. Because of its simplicity and robustness, the LMS algorithm has become one of the most popular adaptive signal processing techniques adopted in many applications, including antenna array beamforming. Over the last three decades, several improvements have been proposed to speed up the convergence of the LMS algorithm. These include the normalized LMS (NLMS) algorithm, the variable-length LMS algorithm, transform-domain algorithms, and more recently the constrained-stability LMS (CSLMS) algorithm and the modified robust variable step size LMS (MRVSS) algorithm. Yet another approach to speeding up the convergence of the LMS algorithm, without sacrificing too much of its error-floor performance, is the use of a variable step size LMS (VSSLMS) algorithm. All the published VSSLMS algorithms make use of an initially large adaptation step size to speed up the convergence. Upon approaching the steady state, smaller step sizes are then introduced to decrease the level of adjustment, hence maintaining a lower error floor. This convergence improvement increases the complexity from 2N in the case of the LMS algorithm to 9N in the case of the MRVSS algorithm, where N is the number of array elements. An alternative to the LMS algorithm is the RLS algorithm.
Although the RLS algorithm requires higher complexity than the LMS algorithm, it can achieve faster convergence and thus better performance. Improvements have also been made to the RLS algorithm family to enhance its tracking ability as well as its stability. Examples are the adaptive forgetting factor RLS (AFF-RLS) algorithm, the variable forgetting factor RLS (VFFRLS) algorithm and the extended recursive least squares (EX-KRLS) algorithm. The multiplication complexities of the VFFRLS, AFF-RLS and EX-KRLS algorithms are 2.5N² + 3N + 20, 9N² + 7N, and 15N³ + 7N² + 2N + 4 respectively, while the RLS algorithm requires 2.5N² + 3N. All the above well-known algorithms require an accurate reference signal for their proper operation. In some cases, several additional operating parameters must be specified; for example, MRVSS needs twelve predefined parameters. As a result, its performance depends strongly on the input signal. In this study, two adaptive beamforming algorithms have been proposed: the recursive least square - least mean square (RLMS) algorithm and the least mean square - least mean square (LLMS) algorithm. These algorithms have been proposed to meet future beamforming requirements, such as a very high convergence rate, robustness to noise and flexible modes of operation. The RLMS algorithm makes use of two individual algorithm stages, based on the RLS and LMS algorithms, connected in tandem via an array image vector. The LLMS algorithm is a simpler version of the RLMS algorithm.
It makes use of two LMS algorithm stages instead of the RLS - LMS combination used in the RLMS algorithm. Unlike other adaptive beamforming algorithms, in both of these algorithms the error signal of the second algorithm stage is fed back and combined with the error signal of the first algorithm stage to form an overall error signal that is used to update the tap weights of the first algorithm stage. Upon convergence, usually after a few iterations, the proposed algorithms can be switched to the self-referencing mode, in which the overall algorithm outputs are used in place of their reference signals. In moving-target applications, the array image vector, F, should also be updated to the new position. This scenario is also studied for both proposed algorithms. A simple and effective method for calculating the required array image vector is also proposed. Moreover, since the RLMS and LLMS algorithms employ the array image vector in their operation, they can be used to generate fixed beams by pre-setting the values of the array image vector to the specified direction. The convergence of the RLMS and LLMS algorithms is analyzed for two different operation modes, namely with an external reference or self-referencing. Array image vector calculations, the ranges of step size values for stable operation, fixed beam generation, and fixed-point arithmetic have also been studied in this thesis. All of these analyses have been confirmed by computer simulations for different signal conditions. Computer simulation results show that both proposed algorithms are superior in convergence performance to algorithms such as the CSLMS, MRVSS, LMS, VFFRLS and RLS algorithms, and are quite insensitive to variations in input SNR and in the actual step size values used. Furthermore, the RLMS and LLMS algorithms remain stable even when their reference signals are corrupted by additive white Gaussian noise (AWGN). In addition, they are robust when operating in the presence of Rayleigh fading.
Finally, the fidelity of the signal at the output of beamformers based on the proposed algorithms is demonstrated by means of the resultant error vector magnitude (EVM) values and scatter plots. It is also shown that an implementation of an eight-element uniform linear array using the proposed algorithms with a wordlength of nine bits is sufficient to achieve performance close to that provided by full precision.
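The reference-driven LMS stage that both RLMS and LLMS build on can be sketched as a conventional LMS beamformer for a uniform linear array. The array geometry, signals and step size below are invented for illustration, and the cascaded error feedback specific to RLMS/LLMS is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
n_el, n_snap, mu = 8, 3000, 0.005        # elements, snapshots, LMS step size

def steering(theta_deg, n):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    return np.exp(1j * np.pi * np.sin(np.radians(theta_deg)) * np.arange(n))

a_des = steering(0.0, n_el)              # desired signal from broadside
a_int = steering(40.0, n_el)             # interferer at 40 degrees

d = np.sign(rng.standard_normal(n_snap))               # BPSK reference signal
i_sig = 2.0 * rng.standard_normal(n_snap)              # strong interferer
noise = 0.05 * (rng.standard_normal((n_el, n_snap))
                + 1j * rng.standard_normal((n_el, n_snap)))
x = np.outer(a_des, d) + np.outer(a_int, i_sig) + noise  # array snapshots

w = np.zeros(n_el, dtype=complex)
for k in range(n_snap):
    y = np.conj(w) @ x[:, k]             # beamformer output y = w^H x
    e = d[k] - y                         # error against the reference
    w += mu * np.conj(e) * x[:, k]       # LMS weight update

g_des = abs(np.conj(w) @ a_des)          # array gain towards the desired user
g_int = abs(np.conj(w) @ a_int)          # array gain towards the interferer
print(g_des, g_int)
```

After convergence the weights pass the broadside signal at roughly unit gain while placing a null towards the interferer, which is the spatial-filtering behaviour described above.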
Glottal-synchronous speech processing
Glottal-synchronous speech processing is a field of speech science where the pseudoperiodicity
of voiced speech is exploited. Traditionally, speech processing involves segmenting
and processing short speech frames of predefined length; this may fail to exploit the inherent
periodic structure of voiced speech which glottal-synchronous speech frames have
the potential to harness. Glottal-synchronous frames are often derived from the glottal
closure instants (GCIs) and glottal opening instants (GOIs).
The SIGMA algorithm was developed for the detection of GCIs and GOIs from
the Electroglottograph signal with a measured accuracy of up to 99.59%. For GCI and
GOI detection from speech signals, the YAGA algorithm provides a measured accuracy
of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to
reverberation than single-channel algorithms.
The GCIs are applied to real-world applications including speech dereverberation,
where SNR is improved by up to 5 dB, and to prosodic manipulation where the importance
of voicing detection in glottal-synchronous algorithms is demonstrated by subjective
testing. The GCIs are further exploited in a new area of data-driven speech modelling,
providing new insights into speech production and a set of tools to aid deployment into
real-world applications. The technique is shown to be applicable in areas of speech coding,
identification and artificial bandwidth extension of telephone speech.
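The basic idea of glottal-synchronous framing can be sketched as follows, assuming GCIs are already available (here they are synthesised; in practice they would come from a detector such as SIGMA or YAGA):

```python
import numpy as np

def gci_frames(speech, gcis, n_periods=2):
    """Cut pitch-synchronous frames spanning n_periods consecutive glottal cycles.

    `gcis` holds sample indices of glottal closure instants; each frame runs from
    one GCI to the GCI n_periods later, so the frame length tracks the local pitch
    instead of being fixed in advance.
    """
    return [speech[gcis[k]:gcis[k + n_periods]]
            for k in range(len(gcis) - n_periods)]

# synthetic voicing with a slowly rising pitch (shrinking glottal period)
fs = 16000
periods = np.linspace(120, 90, 30).astype(int)    # samples per glottal cycle
gcis = np.concatenate(([0], np.cumsum(periods)))  # synthetic GCI positions
speech = np.sin(2 * np.pi * 5 * np.arange(gcis[-1]) / fs)  # placeholder waveform

frames = gci_frames(speech, gcis, n_periods=2)
print(len(frames), len(frames[0]), len(frames[-1]))
```

The contrast with fixed-length analysis is visible in the output: successive frames shrink along with the pitch period, which is what lets glottal-synchronous processing exploit the periodic structure of voiced speech.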
Strategies for Devising Automatic Signal Recognition Algorithms in a Shared Radio Environment
In an increasingly congested and complex radio environment, interference is to be expected, which poses problems for Automatic Signal Recognition (ASR) systems.
This thesis explores strategies for improving ASR performance in the presence of interference. The thesis breaks the overall research question down into a number of subquestions and explores each of these in turn. A Phase-symmetric Cross Recurrence Plot is developed and used to show how a radio signal can be manipulated to separate information about the modulation from the information being carried. The Logarithmic Cyclic frequency Domain Profile is introduced to illustrate how a logarithmic representation can be used for analysing mixtures of signals with very different cyclic frequencies. After defining a canonical ASR system architecture, the concepts of an Ideal Feature and Interference Selectivity are introduced and applied to typical features used in ASR processing. Finally it is shown how these algorithmic developments can be combined in a Bayesian chain implementation that can accommodate a wide variety of feature extraction algorithms.
It is concluded that future ASR systems will require features that can handle a wide range of signal types with much higher levels of interference selectivity if they are to achieve acceptable performance in shared spectrum bands. Intelligent segmentation is shown to be a requirement for future ASR systems unless features can be developed that have near-ideal performance.
Mitigation of nonlinear impairments for advanced optical modulation formats
Optical fibre networks form the backbone of the global communication infrastructure but are currently experiencing an unprecedented level of stress due to ever more bandwidth-hungry applications. In an effort to address this and avoid a so-called capacity crunch, research groups around the world have focused their attention on more spectrally-efficient modulation formats, to increase available capacity at a competitive cost. However, the drive towards higher-order modulation formats leads to greater transmission impairments, reducing the maximum distance over which increased capacity can be provided. The thesis describes the research work carried out to investigate the achievable transmission distances when using higher-order modulation formats together with digital backpropagation (DBP). DBP is a digital signal processing (DSP) algorithm capable of compensating for deterministic nonlinear impairments by inverting the fibre channel. Single-channel and wavelength-division-multiplexed (WDM) transmission has been investigated in experiment and simulation for a variety of polarisation-division-multiplexed (PDM) modulation formats: binary phase shift keying (PDM-BPSK), quadrature phase shift keying (PDM-QPSK), 8-phase shift keying (PDM-8PSK), 8-quadrature amplitude modulation (PDM-8QAM), 16-quadrature amplitude modulation (PDM-16QAM) and polarisation-switched QPSK (PS-QPSK). Record transmission distances were achieved in WDM transmission experiments with PDM-BPSK, PS-QPSK and PDM-QPSK at 42.9Gbit/s, as well as for PDM-8PSK and PDM-8QAM at 112Gbit/s, over the most common fibre type, standard single-mode fibre (SSMF), with the most common amplification solution, erbium-doped fibre amplifiers (EDFA). For the first time, nonlinear compensation has been compared experimentally for different modulation formats using a fixed-complexity DBP algorithm. Its use led to increased benefit for more spectrally efficient modulation formats.
Computer simulations were used to explore the upper bounds of achievable performance improvement with DBP, using an algorithm with unconstrained complexity. Furthermore, DBP was investigated for varying symbol rates and channel spacings to examine trade-offs with respect to the digital receiver bandwidth. It was shown that, even though DBP is computationally expensive, it can achieve significant improvements in transmission reach and BER performance. The results presented in this thesis can be applied to the design of future optical transmission systems.
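The Q-factor used as a performance metric in work of this kind is conventionally obtained from the measured BER by inverting the Gaussian error model BER = ½·erfc(Q/√2). A small helper illustrating that standard conversion (not code from the thesis):

```python
import math
from statistics import NormalDist

def q_from_ber(ber):
    """Linear Q-factor and Q in dB from a measured BER, assuming Gaussian
    noise statistics: BER = 0.5 * erfc(Q / sqrt(2)), i.e. Q = -Phi^{-1}(BER),
    where Phi^{-1} is the standard normal quantile function."""
    q = -NormalDist().inv_cdf(ber)       # inverse of the Gaussian tail probability
    return q, 20 * math.log10(q)         # dB convention used for Q-factors

for ber in (1e-2, 1e-3, 1e-9):
    q, q_db = q_from_ber(ber)
    print(f"BER {ber:g}: Q = {q:.2f} ({q_db:.1f} dB)")
```

This mapping is why a modest dB-scale Q-factor gain from DBP corresponds to an order-of-magnitude reduction in BER, and hence to a substantial reach increase.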