
    Using multiple visual tandem streams in audio-visual speech recognition

    The method known as the "tandem approach" in speech recognition has been shown to increase performance by using classifier posterior probabilities as observations in a hidden Markov model. We study the effect of using visual tandem features in audio-visual speech recognition with a novel setup that uses multiple classifiers to obtain multiple visual tandem features. We adopt the multi-stream hidden Markov model approach, in which visual tandem features from two different classifiers are treated as additional streams in the model. Our experiments show that using multiple visual tandem features improves recognition accuracy under various noise conditions. In addition, to handle the asynchrony between audio and visual observations, we employ coupled hidden Markov models and obtain improved performance compared to the synchronous model.
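
    As a rough illustration of the tandem idea, the sketch below derives tandem features from classifier posteriors and appends them to the audio stream. It assumes scikit-learn and hmmlearn are available, and simply concatenates the streams into a single-stream HMM, since hmmlearn has no weighted multi-stream model; all array names and dimensions are illustrative, not the paper's.

```python
# Minimal sketch of tandem feature extraction (assumes scikit-learn and
# hmmlearn; names and dimensions are illustrative, not the paper's setup).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
visual_frames = rng.normal(size=(1000, 40))    # per-frame visual features
frame_labels = rng.integers(0, 10, size=1000)  # frame-level class labels
audio_frames = rng.normal(size=(1000, 13))     # e.g., MFCCs

# 1. Train a frame classifier and take log posteriors as tandem features.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200).fit(
    visual_frames, frame_labels)
log_post = np.log(clf.predict_proba(visual_frames) + 1e-10)

# 2. Decorrelate the posteriors (PCA is a common choice) so that
#    diagonal-covariance Gaussians fit them better.
tandem = PCA(n_components=8).fit_transform(log_post)

# 3. Use the tandem features as additional observations. Here the streams
#    are concatenated; the paper instead keeps them as separately weighted
#    streams in a multi-stream HMM.
observations = np.hstack([audio_frames, tandem])
hmm = GaussianHMM(n_components=5, covariance_type="diag").fit(observations)
```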

    Network Training for Continuous Speech Recognition

    Spoken language processing is one of the oldest and most natural modes of information exchange between human beings. For centuries, people have tried to develop machines that can understand and produce speech as naturally as humans do. The biggest obstacle to modeling speech with computer programs and mathematics stems from the fact that language is instinctive, whereas the vocabulary and dialect used in communication are learned. Human beings are genetically equipped with the ability to learn languages, and culture imprints the vocabulary and dialect on each member of society. This thesis examines the role of pattern classification in the recognition of human speech, i.e., the machine learning techniques currently being applied to the spoken language processing problem. The primary objective of this thesis is to create a network training paradigm that allows for direct training of multi-path models and alleviates the need for complicated systems and training recipes. A traditional trainer uses an expectation-maximization (EM)-based supervised training framework to estimate the parameters of a spoken language processing system. EM-based parameter estimation for speech recognition is performed using several complicated stages of iterative reestimation, stages that are typically prone to human error. The network training paradigm reduces the complexity of the training process while retaining the robustness of the EM-based supervised training framework. The hypothesis of this thesis is that the network training paradigm can achieve recognition performance comparable to a traditional trainer while alleviating the need for complicated systems and training recipes for spoken language processing systems.
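
    The iterative reestimation at the heart of the EM-based framework described above can be sketched compactly. Below is one EM pass for a diagonal-covariance Gaussian mixture, the kind of update a traditional trainer iterates across its stages; it uses only numpy, and all shapes and parameters are illustrative.

```python
# One EM reestimation pass for a diagonal-covariance GMM (numpy only;
# an illustrative stand-in for the reestimation stages described above).
import numpy as np

def em_step(X, means, variances, weights):
    """One expectation-maximization update. X: (N, D) frames;
    means, variances: (K, D); weights: (K,)."""
    N, D = X.shape
    K = len(weights)
    # E-step: responsibility of each mixture component for each frame.
    log_prob = np.zeros((N, K))
    for k in range(K):
        log_prob[:, k] = (
            np.log(weights[k])
            - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
            - 0.5 * np.sum((X - means[k]) ** 2 / variances[k], axis=1)
        )
    log_prob -= log_prob.max(axis=1, keepdims=True)  # numerical stability
    resp = np.exp(log_prob)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: reestimate parameters from the soft counts.
    counts = resp.sum(axis=0)
    new_weights = counts / N
    new_means = (resp.T @ X) / counts[:, None]
    new_vars = (resp.T @ X**2) / counts[:, None] - new_means**2 + 1e-6
    return new_means, new_vars, new_weights
```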

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consists of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending, and hand pressure) generated by an action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features, which in turn serve as an abstract representation of the multi-modal signal and can be employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image classification deep convolutional neural network and are subsequently used to train the classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in classifier performance. We show that the two approaches produce comparable performance, which implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, for providing training data from which the robot can learn to perform object manipulation actions, multi-modal data is the better alternative.
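
    A minimal sketch of the second, image-based approach: a pre-trained CNN serves as a fixed visual feature extractor, and a separate classifier is trained on the extracted features. It assumes torch, torchvision, and scikit-learn; the ResNet-18 backbone and the SVM classifier are illustrative stand-ins, not necessarily the models used in the paper.

```python
# Pre-trained CNN as a fixed feature extractor (assumes torch/torchvision
# and scikit-learn; ResNet-18 and the SVM are illustrative stand-ins).
import torch
import torchvision.models as models
from sklearn.svm import SVC

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def visual_features(frames):
    """frames: (N, 3, 224, 224) tensor of video frames -> (N, 512) array."""
    return backbone(frames).numpy()

# An action classifier is then trained on per-clip features; features from
# other modalities (finger bending, hand pressure) could simply be
# concatenated here to test whether they improve accuracy, as the paper
# investigates:
# clip_features, action_labels = ...  # one pooled feature vector per clip
# clf = SVC().fit(clip_features, action_labels)
```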

    Effects of Transcription Errors on Supervised Learning in Speech Recognition

    Supervised learning using hidden Markov models has been used to train acoustic models for automatic speech recognition for several years. Typically, clean transcriptions form the basis for this training regimen. However, results have shown that using readily available transcription sources, which can at times be erroneous (e.g., closed captions), does not degrade performance significantly. This work analyzes the effects of mislabeled data on recognition accuracy. For this purpose, training is performed using manually corrupted training data, and the results are observed on three different databases: TIDigits, Alphadigits, and SwitchBoard. For Alphadigits, with 16% of the data mislabeled, system performance degrades by 12% relative to the baseline results. For a complex task like SwitchBoard, at 16% mislabeled training data, system performance degrades by 8.5% relative to the baseline results. The training process is robust to mislabeled data because the Gaussian mixtures used to model the underlying distribution tend to cluster around the majority of the correct data; the outliers (incorrect data) do not contribute significantly to the reestimation process.
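
    The corruption experiment can be sketched in a few lines: flip a fraction of the training labels and measure the degradation of a Gaussian-mixture classifier. The sketch below uses scikit-learn's digits data and per-class GMMs as illustrative stand-ins for the speech corpora and HMM systems above.

```python
# Label-corruption experiment sketch (scikit-learn; digits data and
# per-class GMMs stand in for the speech corpora and HMM systems).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy(y_train):
    # One GMM per class; classify by maximum per-class log-likelihood.
    gmms = {c: GaussianMixture(n_components=2, covariance_type="diag",
                               random_state=0).fit(X_tr[y_train == c])
            for c in np.unique(y_train)}
    scores = np.stack([gmms[c].score_samples(X_te) for c in sorted(gmms)],
                      axis=1)
    return np.mean(scores.argmax(axis=1) == y_te)

rng = np.random.default_rng(0)
for frac in (0.0, 0.16):  # 16% mislabeling, as in the paper
    y_bad = y_tr.copy()
    flip = rng.random(len(y_bad)) < frac
    y_bad[flip] = rng.integers(0, 10, size=flip.sum())
    print(f"{frac:.0%} mislabeled -> accuracy {accuracy(y_bad):.3f}")
```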

    Graphical Models for Multi-dialect Arabic Isolated Words Recognition

    This paper presents the use of multiple hybrid systems for the recognition of isolated words from a large multi-dialect Arabic vocabulary. Like hidden Markov models (HMM), dynamic Bayesian networks (DBN) lack discriminative ability, especially for speech recognition, despite the considerable progress they have made. Multi-layer perceptrons (MLP) have been applied in the literature as estimators of emission probabilities in HMMs and have proven effective. To improve the results of recognition systems, we apply support vector machines (SVM) as estimators of posterior probabilities, since they are characterized by high predictive power and discrimination. Moreover, they are based on structural risk minimization (SRM), where the aim is to build a classifier that minimizes a bound on the expected risk rather than the empirical risk. In this work we carry out a comparative study between three hybrid systems, MLP/HMM, SVM/HMM, and SVM/DBN, and the standard HMM and DBN models, and we describe the use of the hybrid SVM/DBN model for multi-dialect Arabic isolated-word recognition. Using 67,132 speech files of Arabic isolated words, the comparison gives the following results: the standard HMM alone yields a recognition rate of 74.18%, averaged over 8 domains for each of the 4 dialects. With the hybrid systems MLP/HMM and SVM/HMM we achieve 77.74% and 78.06%, respectively. Moreover, our proposed SVM/DBN system achieves the best performance, reaching a recognition rate of 87.67%, compared with 83.01% obtained by GMM/DBN.
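
    A minimal sketch of the hybrid emission idea common to these systems: a discriminative classifier (here an SVM via scikit-learn) supplies frame-level posteriors, which, scaled by the class priors, replace the generative emission likelihoods in a Viterbi decode. The transition and initial log-probabilities are left as parameters, standing in for the trained HMM/DBN; all data is illustrative.

```python
# Hybrid SVM/HMM emission sketch: SVM posteriors, divided by class priors,
# act as emission scores in a Viterbi decode (scikit-learn + numpy; all
# data and dimensions are illustrative).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
K = 4                                    # number of HMM states/classes
X_train = rng.normal(size=(400, 12))     # frame features
y_train = rng.integers(0, K, size=400)   # frame-level state labels

svm = SVC(probability=True).fit(X_train, y_train)

def decode(frames, log_trans, log_init):
    """Viterbi over SVM-derived emission scores. frames: (T, 12)."""
    priors = np.bincount(y_train, minlength=K) / len(y_train)
    # Scaled posteriors p(state|x)/p(state) stand in for likelihoods.
    log_emit = np.log(svm.predict_proba(frames) + 1e-10) - np.log(priors)
    T = len(frames)
    delta = log_init + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (prev state, next state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):          # backtrace the best state path
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

    Dividing the posteriors by the priors converts p(state | frame) into a likelihood scaled by p(frame), which is the standard way to plug a discriminative classifier into a generative decoder.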