Text-independent speaker recognition
This research presents a new text-independent speaker recognition system in which multivariate tools such as Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are embedded into the recognition pipeline after the feature extraction step. The proposed approach evaluates the performance of such a recognition system when trained and used in clean and noisy environments, with both additive white Gaussian noise and convolutive noise. Experiments were carried out to investigate the robustness of PCA and ICA within the designed approach. The application of ICA improved the performance of the speaker recognition model compared to PCA. Experimental results show that ICA enabled the extraction of higher-order statistics, thereby capturing speaker-dependent statistical cues in a text-independent recognition system, and that ICA has better de-correlation and dimension-reduction properties than PCA. To simulate a multi-environment system, the model was trained such that every time a new speech signal was read, it was contaminated with a different type of noise and stored in the database. Results also show that ICA outperforms PCA under adverse environments. This is verified by computing the recognition accuracy rates obtained when the designed system was tested under different training and test SNR conditions with additive white Gaussian noise, and under test delay conditions with an echo effect.
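As an illustration of the approach described above, the following sketch applies PCA and FastICA to a hypothetical matrix of cepstral feature vectors using scikit-learn. The feature data, dimensions, and component counts are placeholders, not the ones used in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Hypothetical feature matrix: rows are frames, columns are
# cepstral coefficients extracted from a speaker's utterance.
rng = np.random.default_rng(0)
features = rng.standard_normal((500, 20))

# PCA decorrelates the features using second-order statistics only.
pca = PCA(n_components=10)
pca_features = pca.fit_transform(features)

# FastICA additionally exploits higher-order statistics to find
# statistically independent components.
ica = FastICA(n_components=10, random_state=0)
ica_features = ica.fit_transform(features)

print(pca_features.shape, ica_features.shape)  # (500, 10) (500, 10)
```

In a recognition system, the transforms would be fitted on training features and then applied to both training and test data before classification.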
Using dysphonic voice to characterize speaker's biometry
Phonation distortion leaves relevant marks in a speaker's biometric profile. Dysphonic voice production may therefore be used for biometrical speaker characterization. In the present paper, phonation features derived from glottal source (GS) parameterization, after vocal tract inversion, are proposed for dysphonic voice characterization in Speaker Verification tasks. The glottal-source-derived parameters are matched in a forensic evaluation framework defining a distance-based metric specification. The phonation segments used in the study are derived from fillers, long vowels, and other phonation segments produced in spontaneous telephone conversations. Phonated segments from a telephonic database of 100 male Spanish native speakers are combined in a 10-fold cross-validation task to produce the set of quality measurements outlined in the paper. Shimmer, the mucosal wave correlate, vocal fold cover biomechanical parameter unbalance, and a subset of the GS cepstral profile produce accuracy rates as high as 99.57% for a wide threshold interval (62.08-75.04%). An Equal Error Rate of 0.64% can be granted. The proposed metric framework is shown to behave more fairly than classical likelihood ratios in supporting the hypothesis of the defense vs. that of the prosecution, thus offering a more reliable evaluation score. Possible applications are Speaker Verification and Dysphonic Voice Grading.
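The abstract reports an Equal Error Rate (EER) of 0.64%. As a reminder of what that metric measures, here is a minimal sketch that estimates the EER from hypothetical genuine and impostor score distributions; the score model is synthetic and not derived from the paper's data.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Find the threshold where false-acceptance and false-rejection
    rates cross, and return the rate at that point."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.5)
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors accepted
        frr = np.mean(genuine < t)     # genuine speakers rejected
        if abs(far - frr) <= best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Hypothetical verification scores: genuine trials score higher.
rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(-2.0, 1.0, 1000)
eer = equal_error_rate(genuine, impostor)
print(round(eer, 3))
```

The brute-force threshold sweep is adequate for small score sets; production systems typically interpolate the ROC curve instead.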
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
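A convolutive audio mixture, the signal model these algorithms operate on, can be sketched as follows: each microphone observes the sum of all sources convolved with room impulse responses. The sources and impulse responses below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_src, n_mic, filt_len = 1000, 2, 2, 16

sources = rng.standard_normal((n_src, n_samples))
# Hypothetical room impulse responses h[mic, src]
h = rng.standard_normal((n_mic, n_src, filt_len)) * 0.3

# Convolutive mixture: x_m(t) = sum_s (h_ms * s_s)(t)
mixtures = np.zeros((n_mic, n_samples + filt_len - 1))
for m in range(n_mic):
    for s in range(n_src):
        mixtures[m] += np.convolve(sources[s], h[m, s])

print(mixtures.shape)  # (2, 1015)
```

Blind separation methods must recover the sources from `mixtures` alone, without knowledge of `h`.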
Blind dereverberation of speech from moving and stationary speakers using sequential Monte Carlo methods
Speech signals radiated in confined spaces are subject to reverberation due to reflections off surrounding walls and obstacles. Reverberation leads to severe degradation of speech intelligibility and can be prohibitive for applications where speech is digitally recorded, such as audio conferencing or hearing aids. Dereverberation of speech is therefore an important field in speech enhancement.
Driven by consumer demand, blind speech dereverberation has become a popular field in the research community and has led to many interesting approaches in the literature. However, most existing methods are dictated by their underlying models and hence suffer from assumptions that constrain the approaches to specific subproblems of blind speech dereverberation. For example, many approaches limit the dereverberation to voiced speech sounds, leading to poor results for unvoiced speech. Few approaches tackle single-sensor blind speech dereverberation, and only a very limited subset allows for dereverberation of speech from moving speakers.
Therefore, the aim of this dissertation is the development of a flexible and extensible framework for blind speech dereverberation accommodating different speech sound types, single or multiple sensors, and both stationary and moving speakers. Bayesian methods benefit from, rather than being dictated by, appropriate model choices. Therefore, the problem of blind speech dereverberation is considered from a Bayesian perspective in this thesis. A generic sequential Monte Carlo approach accommodating a multitude of models for the speech production mechanism and the room transfer function is consequently derived. In this approach, both the anechoic source signal and the reverberant channel are estimated using their optimal estimators by means of Rao-Blackwellisation of the state-space of unknown variables. The remaining model parameters are estimated using sequential importance resampling.
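The sequential importance resampling step mentioned above can be illustrated on a toy linear-Gaussian state-space model; this bootstrap particle filter is a simplified stand-in for the far richer speech and channel models of the thesis, and all model parameters below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 50, 500   # time steps, number of particles

# Toy linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + v_t,
# y_t = x_t + w_t, with Gaussian process and observation noise.
true_x = np.zeros(T)
obs = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0, 0.5)
    obs[t] = true_x[t] + rng.normal(0, 1.0)

particles = rng.normal(0, 1, N)
estimates = []
for t in range(T):
    # Propagate particles through the state transition
    # (bootstrap proposal: importance sampling from the prior).
    particles = 0.9 * particles + rng.normal(0, 0.5, N)
    # Weight each particle by its observation likelihood.
    w = np.exp(-0.5 * (obs[t] - particles) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Resample to combat weight degeneracy.
    particles = particles[rng.choice(N, N, p=w)]

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(round(rmse, 3))
```

The filter's RMSE falls well below the observation noise standard deviation, illustrating the gain from sequential Bayesian estimation.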
The proposed approach is implemented for two different speech production models for stationary speakers, demonstrating substantial reduction in reverberation for both unvoiced and voiced speech sounds. Furthermore, the channel model is extended to facilitate blind dereverberation of speech from moving speakers. Due to the structure of the measurement model, both single- and multi-microphone processing are facilitated, accommodating physically constrained scenarios where only a single sensor can be used, as well as allowing for the exploitation of spatial diversity in scenarios where the physical size of microphone arrays is of no concern.
The dissertation concludes with a survey of possible directions for future research, including the use of switching Markov source models, joint target tracking and enhancement, and an extension to subband processing for improved computational efficiency.
On-device mobile speech recognition
Despite many years of research, speech recognition remains an active area of research in Artificial Intelligence. Currently, the most common commercial application of this technology on mobile devices uses a wireless client-server approach to meet the computational and memory demands of the speech recognition process. Unfortunately, such an approach is unlikely to remain viable when fully applied across the approximately 7.22 billion mobile phones currently in circulation. In this thesis we present an on-device speech recognition system. Such a system has the potential to completely eliminate the wireless client-server bottleneck. For the Voice Activity Detection (VAD) part of this work, this thesis presents two novel algorithms used to detect speech activity within an audio signal. The first algorithm is based on the Log Linear Predictive Cepstral Coefficients Residual signal (LLPCCRS). These LLPCCRS feature vectors are classified into voice and non-voice segments using a modified K-means clustering algorithm. This VAD algorithm is shown to outperform a conventional energy-based frame analysis approach. The second algorithm is based on the Linear Predictive Cepstral Coefficients. It uses the frames within the speech signal with the minimum and maximum standard deviation as candidates for a linear cross-correlation against the rest of the frames within the audio signal. The cross-correlated frames are then classified using the same modified K-means clustering algorithm, yielding one cluster of speech frames and another of non-speech frames. This novel application of the linear cross-correlation technique to linear predictive cepstral coefficient feature vectors provides a fast computation method for use on the mobile platform, as shown by the results presented in this thesis.
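A simplified sketch of the two-cluster VAD idea is given below. It clusters frame log-energies with 2-means as a stand-in for the LLPCCRS feature vectors actually used in the thesis, so the feature choice and all signal parameters here are illustrative only.

```python
import numpy as np

def two_means_vad(frames, iters=20):
    """Classify frames as speech (1) or non-speech (0) by 2-means
    clustering of their log-energies. This is a simplified stand-in
    for the LLPCCRS features used in the thesis."""
    log_e = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    # Initialise the low cluster at the quietest frame and the
    # high cluster at the loudest one.
    centers = np.array([log_e.min(), log_e.max()])
    for _ in range(iters):
        labels = np.abs(log_e[:, None] - centers[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = log_e[labels == k].mean()
    return labels

# Synthetic signal: near-silence followed by a louder "speech" burst.
rng = np.random.default_rng(4)
sig = np.concatenate([rng.normal(0, 0.01, 4000), rng.normal(0, 0.5, 4000)])
frames = sig[: len(sig) // 160 * 160].reshape(-1, 160)  # 10 ms @ 16 kHz
labels = two_means_vad(frames)
print(labels[:3], labels[-3:])
```

The quiet leading frames fall into cluster 0 and the loud trailing frames into cluster 1, mimicking the speech/non-speech split described above.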
The speech recognition part of this thesis presents two novel neural network approaches to mobile speech recognition. First, a recurrent neural network architecture is developed to accommodate the output of the VAD stage. Specifically, an Echo State Network (ESN) is used for phoneme-level recognition. The drawbacks and advantages of this method are explained further within the thesis. Second, a dynamic Multi-Layer Perceptron (MLP) approach is developed. This builds on the drawbacks of the ESN and provides a dynamic way of handling variability in speech signal length within its architecture. The novel dynamic Multi-Layer Perceptron uses both the Linear Predictive Cepstral Coefficients (LPC) and the Mel Frequency Cepstral Coefficients (MFCC) as input features. A speaker-dependent approach is presented using the Center for Spoken Language Understanding (CSLU) database. The results show a very distinct behaviour from conventional speech recognition approaches, because the LPC features yield performance figures very close to the MFCC. A speaker-independent system, using the standard TIMIT dataset, is then implemented on the dynamic MLP for further confirmation; in this mode of operation the MFCC outperforms the LPC. Finally, all the results, with emphasis on the computation time of both novel neural network approaches, are compared directly to a conventional hidden Markov model on the CSLU and TIMIT standard datasets.
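The Echo State Network idea, a fixed random recurrent reservoir with only a trained linear readout, can be sketched as follows. The input dimensionality, reservoir size, and phoneme targets below are hypothetical placeholders, not the configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_res = 13, 100   # e.g. 13 cepstral coefficients per frame

# Fixed random input and reservoir weights; the reservoir is scaled
# to spectral radius < 1 so the network has the echo-state
# (fading-memory) property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(frames):
    """Run a frame sequence through the reservoir, collecting states."""
    x = np.zeros(n_res)
    states = []
    for u in frames:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

# Hypothetical training data: feature frames with one-hot targets
# for 3 phoneme classes.
frames = rng.standard_normal((200, n_in))
targets = np.eye(3)[rng.integers(0, 3, 200)]

S = reservoir_states(frames)
# Only the linear readout is trained, here by ridge regression.
W_out = np.linalg.solve(S.T @ S + 1e-2 * np.eye(n_res), S.T @ targets)
pred = (S @ W_out).argmax(axis=1)
print(pred.shape)  # (200,)
```

Because only the readout is learned, training reduces to a single linear solve, which is part of the ESN's appeal on resource-constrained mobile hardware.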