Lip segmentation using adaptive color space training
In audio-visual speech recognition (AVSR), it is beneficial to use lip boundary information in addition to texture-dependent features. In this paper, we propose an automatic lip segmentation method that can be used in AVSR systems. The algorithm consists of the following steps: face detection, lip corner extraction, adaptive color space training for lip and non-lip regions using Gaussian mixture models (GMMs), and curve evolution using a level-set formulation based on region and image gradient fields. Region-based fields are obtained from the adapted GMM likelihoods. We have tested the proposed algorithm on a database (SU-TAV) of 100 facial images and obtained objective performance results by comparing automatic lip segmentations with hand-marked ground-truth segmentations. Experimental results are promising, although further work is needed to improve the robustness of the proposed method.
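The core of the region-based field described above is a likelihood ratio between two color GMMs, one adapted to lip pixels and one to non-lip pixels. The following is a minimal numpy sketch of that classification step only (not the authors' implementation); all GMM parameters here are illustrative stand-ins, and the level-set evolution is omitted.

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of pixels x (N, 3) under a diagonal-covariance GMM."""
    logps = []
    for w, mu, var in zip(weights, means, variances):
        d = x - mu
        ll = -0.5 * np.sum(d * d / var + np.log(2 * np.pi * var), axis=1)
        logps.append(np.log(w) + ll)
    return np.logaddexp.reduce(np.stack(logps), axis=0)

# Hypothetical adapted 2-component GMMs over RGB colors in [0, 1]
# (in the paper these would come from the adaptive color space training).
lip_gmm = ([0.6, 0.4],
           [np.array([0.7, 0.3, 0.3]), np.array([0.6, 0.2, 0.25])],
           [np.full(3, 0.01), np.full(3, 0.02)])
nonlip_gmm = ([0.5, 0.5],
              [np.array([0.8, 0.6, 0.5]), np.array([0.7, 0.55, 0.45])],
              [np.full(3, 0.02), np.full(3, 0.02)])

pixels = np.array([[0.72, 0.28, 0.31],   # reddish pixel, plausibly lip
                   [0.78, 0.61, 0.52]])  # skin-toned pixel
# Region-based field: positive where the lip model is more likely.
field = gmm_loglik(pixels, *lip_gmm) - gmm_loglik(pixels, *nonlip_gmm)
is_lip = field > 0
```

In the full method this per-pixel field would drive the level-set curve evolution together with the image-gradient term.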
A Nonlinear Mixture Autoregressive Model For Speaker Verification
In this work, we apply a nonlinear mixture autoregressive (MixAR) model to supplant the Gaussian mixture model for speaker verification. MixAR is a statistical model that is a probabilistically weighted combination of components, each of which is an autoregressive filter in addition to a mean. The probabilistic mixing and the data-dependent weights are responsible for the nonlinear nature of the model. Our experiments with synthetic as well as real speech data from standard speech corpora show that the MixAR model outperforms the GMM, especially under unseen noisy conditions. Moreover, MixAR did not require delta features and used 2.5x fewer parameters to achieve comparable or better performance than a GMM using static as well as delta features. MixAR also suffered less from overfitting than the GMM when training data was sparse. However, MixAR performance deteriorated more quickly than that of the GMM when the evaluation data duration was reduced. This could pose limitations on the minimum amount of evaluation data required when using the MixAR model for speaker verification.
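To make the model structure concrete, here is a toy numpy sketch of evaluating a MixAR log-likelihood for a 1-D sequence: each component predicts the next sample with an order-1 autoregressive filter plus a mean, and the mixing weights depend on the previous sample. This is an illustrative reading of the abstract, not the authors' formulation; all parameters and the gating form are assumptions.

```python
import numpy as np

def mixar_loglik(x, ar, means, sigmas, gate_w, gate_b):
    """Log-likelihood of a 1-D sequence x under a toy order-1 MixAR model.

    Component m predicts x[t] ~ N(means[m] + ar[m] * x[t-1], sigmas[m]**2);
    the data-dependent mixing weights are softmax(gate_w * x[t-1] + gate_b),
    which is what makes the overall model nonlinear.
    """
    ll = 0.0
    for t in range(1, len(x)):
        logits = gate_w * x[t - 1] + gate_b
        w = np.exp(logits - np.logaddexp.reduce(logits))  # softmax weights
        pred = means + ar * x[t - 1]                      # per-component AR prediction
        comp = w * np.exp(-0.5 * ((x[t] - pred) / sigmas) ** 2) \
               / (np.sqrt(2 * np.pi) * sigmas)
        ll += np.log(comp.sum())
    return ll

# Hypothetical 2-component model; every number below is illustrative.
ar = np.array([0.9, -0.9]); means = np.zeros(2)
sigmas = np.array([0.1, 0.1]); gate_w = np.array([1.0, -1.0]); gate_b = np.zeros(2)

rng = np.random.default_rng(0)
x = np.empty(50); x[0] = 0.5
for t in range(1, 50):                    # synthetic AR(1) test sequence
    x[t] = 0.9 * x[t - 1] + 0.05 * rng.standard_normal()
score = mixar_loglik(x, ar, means, sigmas, gate_w, gate_b)
```

A GMM is the special case where all AR coefficients are zero and the weights are constant, which is why MixAR can match it with fewer parameters when temporal dependence carries useful information.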
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
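The key object above is the sketch: an empirical average of generalized moments, computable in one pass and mergeable across streams or shards. A minimal numpy sketch of this idea, using random Fourier features as the moment functions (consistent with the random-features connection drawn in the abstract, but with an untuned frequency scale and no reconstruction step):

```python
import numpy as np

def sketch(X, Omega):
    """Empirical sketch: average of random Fourier features exp(i <omega, x>).

    One pass over X (n, d); Omega (d, m) holds m random frequency vectors.
    The result is a 2m-dimensional summary (real and imaginary parts) whose
    size is independent of n, so it can be accumulated on streams or merged
    across distributed shards.
    """
    Z = X @ Omega  # (n, m) projections of each sample onto each frequency
    return np.concatenate([np.cos(Z).mean(axis=0), np.sin(Z).mean(axis=0)])

rng = np.random.default_rng(0)
d, m = 2, 64
Omega = rng.standard_normal((d, m))  # frequencies; the scale would be tuned in practice
X = rng.standard_normal((100_000, d)) + np.array([3.0, -1.0])  # one Gaussian blob

s = sketch(X, Omega)
# Sketches of two equal-size shards average to the sketch of their union,
# which is what makes streaming and distributed computation straightforward.
s_merged = 0.5 * (sketch(X[:50_000], Omega) + sketch(X[50_000:], Omega))
```

GMM estimation then becomes an inverse problem: find mixture parameters whose expected sketch matches the empirical one, solved with the sparse-reconstruction-style iterative algorithm the abstract describes.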
Computer Graphics and Video Features for Speaker Recognition
This thesis describes a non-traditional method for speaker recognition that uses features and algorithms employed mainly in computer vision. The necessary theoretical background in computer-based recognition is summarized first. As an application of graphical features to speaker recognition, the previously proposed Boosted Binary Features (BBF), which have roots in computer vision, are described in detail and evaluated on the standard speech databases TIMIT and NIST SRE 2010. Experimental results are summarized and compared to standard methods. Possible directions for future work are proposed at the end.
Dysarthric Speech Recognition and Offline Handwriting Recognition using Deep Neural Networks
Millions of people around the world are diagnosed with neurological disorders such as Parkinson's disease, Cerebral Palsy, or Amyotrophic Lateral Sclerosis. As these diseases progress, the resulting neurological damage causes the person to lose control of their muscles, accompanied by speech deterioration. The deterioration stems from a neuromotor condition that limits manipulation of the articulators of the vocal tract, collectively called dysarthria. Even though dysarthric speech is grammatically and syntactically correct, it is difficult for humans to understand and for Automatic Speech Recognition (ASR) systems to decipher. With the emergence of deep learning, speech recognition systems have improved substantially over traditional systems, which rely on sophisticated preprocessing techniques to extract speech features.
In this digital era, many documents are still handwritten, and many of them need to be digitized. Offline handwriting recognition involves recognizing handwritten characters from images of handwritten text (i.e., scanned documents). This is an interesting task, as it combines sequence learning with computer vision, and it is more difficult than Optical Character Recognition (OCR) because handwritten letters can be written in virtually infinite different styles. This thesis proposes exploiting deep learning techniques such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for offline handwriting recognition. For speech recognition, we compare traditional methods with recent deep learning methods. We also apply speaker adaptation methods, both at the feature level and at the parameter level, to improve recognition of dysarthric speech.
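The standard way to combine a CNN with an RNN for offline handwriting recognition is to convolve the text-line image into feature maps and then feed one feature column per time step to a recurrent layer, whose per-step outputs go to a sequence decoder such as CTC. The following is a tiny untrained numpy sketch of that forward pass under assumed shapes, not the thesis's actual architecture; all weights are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(img, kernels):
    """Naive valid 2-D convolution + ReLU: img (H, W), kernels (K, kh, kw)."""
    K, kh, kw = kernels.shape
    H, W = img.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kernels[k])
    return np.maximum(out, 0.0)

def rnn_over_columns(feat, Wx, Wh, Wo):
    """Simple tanh RNN reading one flattened feature column per step, left to right."""
    T = feat.shape[-1]
    x = feat.reshape(-1, T)          # (K * H', T): one column = one time step
    h = np.zeros(Wh.shape[0])
    logits = []
    for t in range(T):
        h = np.tanh(Wx @ x[:, t] + Wh @ h)
        logits.append(Wo @ h)
    return np.stack(logits)          # (T, n_classes), e.g. the input to a CTC loss

img = rng.random((16, 32))                     # stand-in for a text-line image
kernels = rng.standard_normal((4, 3, 3)) * 0.1
feat = conv2d_relu(img, kernels)               # (4, 14, 30) feature maps
Wx = rng.standard_normal((8, 4 * 14)) * 0.1    # input-to-hidden weights
Wh = rng.standard_normal((8, 8)) * 0.1         # hidden-to-hidden weights
Wo = rng.standard_normal((27, 8)) * 0.1        # 26 letters + CTC blank (illustrative)
per_step = rnn_over_columns(feat, Wx, Wh, Wo)
```

Treating image columns as time steps is what lets the same sequence-learning machinery used for speech transfer to scanned handwriting.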