Wavelet methods in speech recognition
In this thesis, novel wavelet techniques are developed to improve parametrization of
speech signals prior to classification. It is shown that non-linear operations carried out
in the wavelet domain improve the performance of a speech classifier and consistently
outperform classical Fourier methods. This is because of the localised nature of the
wavelet, which captures correspondingly well-localised time-frequency features
within the speech signal. Furthermore, by taking advantage of the approximation
ability of wavelets, efficient representation of the non-stationarity inherent in speech
can be achieved in a relatively small number of expansion coefficients. This is an
attractive option when faced with the so-called 'Curse of Dimensionality' problem of
multivariate classifiers such as Linear Discriminant Analysis (LDA) or Artificial
Neural Networks (ANNs). Conventional time-frequency analysis methods such as the
Discrete Fourier Transform either miss irregular signal structures and transients because of
spectral smearing, or require a large number of coefficients to represent such
characteristics. Wavelet theory offers an alternative insight into the
representation of these types of signal.
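The localisation argument above can be illustrated with a minimal, self-contained sketch (my own illustration, not code from the thesis): a single-level Haar wavelet transform applied to a signal containing one sharp transient. The function name `haar_step` and the toy signal are my inventions for demonstration only.

```python
# Minimal sketch: one level of the Haar discrete wavelet transform.
# A localised transient maps to a small number of large coefficients,
# whereas a Fourier basis would smear its energy across all frequencies.
import math

def haar_step(signal):
    """One Haar DWT level: scaled pairwise sums (approximation) and
    pairwise differences (detail), each half the input length."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# A flat signal with a single transient at sample 9.
x = [0.0] * 16
x[9] = 1.0
a, d = haar_step(x)

# The transient is captured by exactly one detail coefficient (index 4,
# covering samples 8-9); every other detail coefficient stays zero.
large = [i for i, c in enumerate(d) if abs(c) > 1e-12]
print(large)  # -> [4]
```

Because the Haar basis functions are compactly supported, only the coefficients whose support overlaps the transient respond to it, which is the sparsity property the thesis exploits for classifier input.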
As an extension to the standard wavelet transform, adaptive libraries of wavelet and
cosine packets are introduced which increase the flexibility of the transform. This
approach is observed to be yet more suitable for the highly variable nature of speech
signals in that it results in a time-frequency sampled grid that is well adapted to
irregularities and transients. These adaptive libraries yield a corresponding
reduction in the misclassification rate of the recognition system, although
necessarily at the expense of added computing time.
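The adaptive-library idea can be sketched as a best-basis search in the style of Coifman and Wickerhauser over a Haar wavelet-packet tree; this is a standard formulation, not necessarily the exact criterion used in the thesis, and the function names are my own.

```python
# Hedged sketch of best-basis selection over a wavelet-packet tree:
# a node keeps its own coefficients only if their entropy cost beats
# the combined cost of further decomposing into its two children.
import math

def haar_split(c):
    s = 1 / math.sqrt(2)
    return ([s * (c[i] + c[i + 1]) for i in range(0, len(c), 2)],
            [s * (c[i] - c[i + 1]) for i in range(0, len(c), 2)])

def cost(c):
    """Shannon-entropy-like additive cost; small when energy is
    concentrated in few coefficients (a sparse representation)."""
    e = sum(v * v for v in c) or 1.0
    return -sum((v * v / e) * math.log(v * v / e) for v in c if v != 0.0)

def best_basis(c, depth):
    """Return (total cost, list of coefficient blocks) of the best basis."""
    if depth == 0 or len(c) < 2:
        return cost(c), [c]
    lo, hi = haar_split(c)
    cl, bl = best_basis(lo, depth - 1)
    ch, bh = best_basis(hi, depth - 1)
    if cl + ch < cost(c):
        return cl + ch, bl + bh   # children represent the signal more sparsely
    return cost(c), [c]           # keep this node undecomposed
```

The search adapts the time-frequency tiling to each frame: transient-rich segments favour deeper splits in some subbands, while stationary segments stay undecomposed, at the cost of evaluating every tree node.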
Finally, a framework based on adaptive time-frequency libraries is developed which
invokes the final classifier to choose the nature of the resolution for a given
classification problem. The classifier then performs dimensionality reduction on the
transformed signal by choosing the top few features based on their discriminant power. This approach is compared and contrasted to an existing discriminant wavelet
feature extractor.
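Choosing "the top few features based on their discriminant power" can be illustrated with a Fisher-style discriminant ratio for a two-class problem; this is a generic stand-in, not the thesis's actual criterion, and the helper names are hypothetical.

```python
# Illustrative sketch: rank features by between-class separation over
# within-class spread, and keep the k highest-scoring feature indices.
def fisher_ratio(values_a, values_b):
    def mean(v): return sum(v) / len(v)
    def var(v, m): return sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(values_a), mean(values_b)
    within = var(values_a, ma) + var(values_b, mb)
    return (ma - mb) ** 2 / within if within else float("inf")

def top_features(class_a, class_b, k):
    """class_a, class_b: lists of feature vectors (one per example).
    Return indices of the k features with the highest discriminant power."""
    n = len(class_a[0])
    scores = [fisher_ratio([v[i] for v in class_a],
                           [v[i] for v in class_b]) for i in range(n)]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]

# Feature 1 separates the classes; feature 0 is noise around a shared mean.
A = [[0.1, 5.0], [-0.2, 5.2], [0.0, 4.9]]
B = [[0.0, 1.0], [0.2, 0.8], [-0.1, 1.1]]
print(top_features(A, B, 1))  # -> [1]
```

Selecting a handful of high-scoring coefficients is what keeps the downstream LDA or ANN classifier away from the curse of dimensionality mentioned earlier.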
The overall conclusions of the thesis are that wavelets and their relatives are capable
of extracting useful features for speech classification problems. The use of adaptive
wavelet transforms provides the flexibility within which powerful feature extractors
can be designed for these types of application.
Comparison of CELP speech coder with a wavelet method
This thesis compares the speech quality of the Code Excited Linear Predictor (CELP, Federal Standard 1016) speech coder with a new wavelet method of compressing speech. The performance of each is assessed through subjective listening tests. The test signals used are clean signals (i.e. with no background noise), speech signals with room noise, and speech signals with artificial noise added. Results indicate that for clean signals and signals with predominantly voiced components the CELP standard performs better than the wavelet method, but for signals with room noise the wavelet method performs much better than CELP. For signals with artificial noise added, the results are mixed, depending on the noise level: CELP performs better at low noise levels and the wavelet method performs better at higher levels.
Novel Pitch Detection Algorithm With Application to Speech Coding
This thesis introduces a novel method for accurate pitch detection and speech segmentation, named the Multi-feature, Autocorrelation (ACR) and Wavelet Technique (MAWT). MAWT uses feature extraction, and ACR applied to Linear Predictive Coding (LPC) residuals, with a wavelet-based refinement step. MAWT opens the way for a unique approach to modelling: although speech is divided into segments, the success of voicing decisions is not crucial. Experiments demonstrate the superiority of MAWT in pitch period detection accuracy over existing methods, and illustrate its advantages for speech segmentation. These advantages are more pronounced for gain-varying and transitional speech, and under noisy conditions.
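Of MAWT's stages, only the autocorrelation (ACR) core is simple enough to sketch here; the feature-extraction, LPC-residual and wavelet-refinement steps are not reproduced. The pitch period is estimated as the lag with the strongest autocorrelation peak inside a plausible range (function name and lag bounds are my own choices).

```python
# Sketch of autocorrelation-based pitch period estimation.
import math

def estimate_period(x, min_lag, max_lag):
    """Return the lag in [min_lag, max_lag] maximising the autocorrelation."""
    def acr(lag):
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    return max(range(min_lag, max_lag + 1), key=acr)

# A synthetic "voiced" frame: a sinusoid with a period of 20 samples.
frame = [math.sin(2 * math.pi * i / 20) for i in range(200)]
print(estimate_period(frame, 10, 40))  # -> 20
```

Applying ACR to the LPC residual rather than the raw waveform, as MAWT does, strips away vocal-tract formant structure so the periodicity of the glottal excitation stands out more clearly.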
Wavelet-based techniques for speech recognition
In this thesis, new wavelet-based techniques have been developed for the
extraction of features from speech signals for the purpose of automatic speech
recognition (ASR). One of the advantages of the wavelet transform over the short
time Fourier transform (STFT) is its capability to process non-stationary signals.
Since speech signals are not strictly stationary the wavelet transform is a better
choice for time-frequency transformation of these signals. In addition it has
compactly supported basis functions, thereby reducing the amount of
computation, as opposed to the STFT, where an overlapping window is needed. [Continues.]
Interactive speech-driven facial animation
One of the fastest developing areas in the entertainment industry is digital animation. Television programmes and movies frequently use 3D animations to enhance or replace actors and scenery. With the increase in computing power, research is also being done to apply these animations in an interactive manner. Two of the biggest obstacles to the success of these undertakings are control (manipulating the models) and realism. This text describes many of the ways to improve the control and realism aspects, such that interactive animation becomes possible. Specifically, lip-synchronisation (driven by human speech) and various modelling and rendering techniques are discussed. A prototype showing that interactive animation is feasible is also described. Mr. A. Hardy, Prof. S. von Solm
Emotion Recognition from Speech with Acoustic, Non-Linear and Wavelet-based Features Extracted in Different Acoustic Conditions
ABSTRACT: In recent years there has been great progress in automatic speech recognition. The challenge now is not only to recognize the semantic content of speech but also the so-called "paralinguistic" aspects, including the emotions and the personality of the speaker. This research work aims at the development of a methodology for automatic emotion recognition from speech signals in non-controlled noise conditions. For that purpose, different sets of acoustic, non-linear, and wavelet-based features are used to characterize emotions in different databases created for that purpose.