341 research outputs found

    Improving state-of-the-art continuous speech recognition systems using the N-best paradigm with neural networks

    In an effort to advance the state of the art in continuous speech recognition employing hidden Markov models (HMMs), Segmental Neural Nets (SNNs) were introduced recently to ameliorate the well-known limitations of HMMs, namely, the conditional-independence assumption and the relative difficulty with which HMMs can handle segmental features. We describe a hybrid SNN/HMM system that combines the speed and performance of our HMM system with the segmental modeling capabilities of SNNs. The integration of the two acoustic modeling techniques is achieved successfully via the N-best rescoring paradigm. The N-best lists are used not only for recognition, but also during training; this discriminative training using N-best lists is demonstrated to improve performance. When tested on the DARPA Resource Management speaker-independent corpus, the hybrid SNN/HMM system decreases the error by about 20% compared to the state-of-the-art HMM system.

    A new model-discriminant training algorithm for hybrid NN-HMM systems

    This paper describes a hybrid system for continuous speech recognition consisting of a neural network (NN) and a hidden Markov model (HMM). The system is based on a multilayer perceptron, which approximates the a-posteriori probability of a sequence of states derived from semi-continuous hidden Markov models. The classification is based on a total score for each hybrid model, attained from a Viterbi search on the state probabilities. Due to the unintended discrimination between the states in each model, a new training algorithm for the hybrid neural networks is presented. The error function used approximates the misclassification rate of the hybrid system. The discrimination between the correct and the incorrect models is optimized during training by the "Generalized Probabilistic Descent" algorithm, resulting in a minimum classification error. No explicit target values for the neural net output nodes are used, as in the usual backpropagation algorithm with a quadratic error function. In basic experiments, recognition rates of up to 56% were achieved on a vowel classification task and up to 69% on a consonant cluster classification task.
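    The "total score from a Viterbi search on the state probabilities" step can be sketched as below: for each hybrid model, a Viterbi pass over per-frame state probabilities (here stand-ins for the network's outputs) yields one log score, and the model with the best score wins. The probabilities and transitions are made up for illustration.

```python
import math

def viterbi_score(state_probs, trans):
    """state_probs[t][s]: probability of state s at frame t (NN output).
    trans[p][s]: transition probability from state p to state s.
    Returns the log score of the best state path (the model's total score)."""
    n_states = len(state_probs[0])
    score = [math.log(state_probs[0][s]) for s in range(n_states)]
    for t in range(1, len(state_probs)):
        score = [
            max(score[p] + math.log(trans[p][s]) for p in range(n_states))
            + math.log(state_probs[t][s])
            for s in range(n_states)
        ]
    return max(score)

# Toy usage: two frames, two states, uniform transitions.
total = viterbi_score([[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.5, 0.5]])
```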

    Automatic speech recognition: a comparative evaluation between neural networks and hidden Markov models

    In this work we perform a comparative evaluation between Artificial Neural Networks (ANNs) and Continuous Density Hidden Markov Models (CDHMMs), in the framework of isolated word recognition, under the constraint of using a small number of features extracted from each voice signal. To accomplish this comparison we used two neural network models, the Multilayer Perceptron (MLP) and a variant of the Radial Basis Function network (RBF), as well as several HMM models. We evaluated the performance of all models on two different test sets and observed that the neural models gave the best results in both cases. Seeking to improve the HMM performance, we developed a hybrid HMM/MLP system that improved on the results previously obtained with all HMMs, and even on those obtained with the neural networks for the hardest test set.

    Online Handwriting Recognition using HMM

    Handwriting recognition can be divided into two parts: offline handwriting recognition and online handwriting recognition. An online handwriting recognition system can give highly accurate output under predefined constraints, as accuracy is related to vocabulary size, writer dependency, printed writing style, and so on. Hidden Markov models increase the success rate of online recognition systems. Online handwriting recognition also provides time information that is not present in offline systems. A Markov process is a random process whose future behavior depends only on its present state, not on past states; that is, it satisfies the Markov condition. A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with hidden states; HMMs can thus be viewed as extensions of discrete-state Markov processes. Online handwriting recognition technology has the capability to drastically improve human-machine interaction, since writing by hand with a digital pen or similar equipment is more natural than using a keyboard. HMMs build effective mathematical models for characterizing the variance, both in time and in signal space, present in the signal.
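    The Markov condition stated above can be sketched in a few lines: the next state is sampled from a distribution that depends only on the current state, never on the history. The pen-state names and transition probabilities below are invented for illustration.

```python
import random

# Hypothetical two-state chain for an online handwriting signal.
transitions = {
    "pen_down": {"pen_down": 0.8, "pen_up": 0.2},
    "pen_up":   {"pen_down": 0.6, "pen_up": 0.4},
}

def next_state(current, rng):
    """Future behavior depends only on `current` (the Markov condition)."""
    states = list(transitions[current])
    weights = [transitions[current][s] for s in states]
    return rng.choices(states, weights=weights, k=1)[0]

rng = random.Random(0)
path = ["pen_down"]
for _ in range(5):
    path.append(next_state(path[-1], rng))
```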

    Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition


    Towards End-to-End Speech Recognition

    Standard automatic speech recognition (ASR) systems follow a divide-and-conquer approach to convert speech into text: the end goal is achieved by a combination of sub-tasks, namely feature extraction, acoustic modeling, and sequence decoding, which are optimized independently. More recently, deep learning approaches have emerged in the machine learning community that allow training of systems in an end-to-end manner. Such approaches have found success in natural language processing and computer vision, and have consequently piqued interest in the speech community. The present thesis builds on these recent advances to investigate approaches to developing speech recognition systems in an end-to-end manner. In that respect, the thesis follows two main axes of research. The first axis focuses on joint learning of features and classifiers for acoustic modeling. The second axis focuses on joint modeling of the acoustic model and the decoder. Along the first axis, in the framework of hybrid hidden Markov model/artificial neural network (HMM/ANN) based ASR, we develop a convolutional neural network (CNN) based acoustic modeling approach that takes the raw speech signal as input and estimates phone class conditional probabilities. Specifically, the CNN has several convolution layers (feature stage) followed by a multilayer perceptron (classifier stage), which are jointly optimized during training. Through ASR studies on multiple languages and extensive analysis of the approach, we show that the proposed approach, with minimal prior knowledge, is able to learn the relevant features automatically from the raw speech signal. This approach yields systems that have fewer parameters and achieve better performance than the conventional approach of cepstral feature extraction followed by classifier training.
    As the features are automatically learned from the signal, a natural question arises: are such systems robust to noise? Towards that, we propose a robust CNN approach, referred to as the normalized CNN approach, which yields systems that are as robust as or better than conventional ASR systems using cepstral features (with feature-level normalization). The second axis of research focuses on end-to-end sequence-to-sequence conversion. We first propose an end-to-end phoneme recognition system in which the relevant features, the classifier, and the decoder (based on conditional random fields) are jointly modeled during training. We demonstrate the viability of the approach on the TIMIT phoneme recognition task. Building on top of that, we investigate a "weakly supervised" training that alleviates the necessity for frame-level alignments. Finally, we extend the weakly supervised approach to propose a novel keyword spotting technique, in which a CNN first processes the input observation sequence to output word-level scores, which are subsequently aggregated to detect or spot words. We demonstrate the potential of the approach through a comparative study on LibriSpeech against the standard approach of keyword spotting based on lattice indexing using an ASR system.
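    The feature-stage/classifier-stage architecture described in this abstract can be sketched in plain numpy: a strided convolution over the raw waveform produces learned features, and a small MLP with a softmax emits per-frame phone class conditional probabilities. All layer sizes, strides, and weights here are arbitrary placeholders, not the thesis settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride):
    """Strided 1-D convolution with ReLU: x (T,), kernels (K, width)."""
    width = kernels.shape[1]
    windows = np.stack([x[i:i + width]
                        for i in range(0, len(x) - width + 1, stride)])
    return np.maximum(windows @ kernels.T, 0.0)   # (frames, K)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

wave = rng.standard_normal(400)                   # raw speech samples
k1 = rng.standard_normal((8, 30)) * 0.1           # feature-stage kernels
feats = conv1d(wave, k1, stride=10)               # learned "features"
w_hidden = rng.standard_normal((8, 16)) * 0.1     # classifier stage (MLP)
w_out = rng.standard_normal((16, 5)) * 0.1        # 5 hypothetical phone classes
hidden = np.maximum(feats @ w_hidden, 0.0)
phone_posteriors = softmax(hidden @ w_out)        # one probability row per frame
```

In the actual approach both stages are optimized jointly by backpropagation; this forward pass only shows how raw samples flow to phone posteriors.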

    Tone classification of syllable-segmented Thai speech based on multilayer perceptron

    Thai is a monosyllabic and tonal language: it makes use of tone to convey lexical information about the meaning of a syllable. Thai has five distinctive tones, each well represented by a single F0 contour pattern, and in general a Thai syllable with a different tone has a different lexical meaning. Thus, to completely recognize a spoken Thai syllable, a speech recognition system has not only to recognize the base syllable but also to correctly identify the tone. Hence, tone classification of Thai speech is an essential part of a Thai speech recognition system. In this study, a tone classifier for syllable-segmented Thai speech which incorporates the effects of tonal coarticulation, stress, and intonation was developed. Automatic syllable segmentation, which segments the training and test utterances into syllable units, was also developed. Acoustic features including fundamental frequency (F0), duration, and energy, extracted from the syllable being processed and its neighboring syllables, were used as the main discriminating features. A multilayer perceptron (MLP) trained by backpropagation was employed to classify these features. The proposed system was evaluated on 920 test utterances spoken by five male and three female Thai speakers who also uttered the training speech, and achieved an average accuracy rate of 91.36%.
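    The classifier described above can be sketched as a small MLP forward pass: F0, duration, and energy features for a syllable go in, and a softmax over the five Thai tones comes out. The feature values, normalization, layer sizes, and weights below are invented stand-ins, not the trained system.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_tone_scores(features, w1, w2):
    """One hidden layer with tanh, then a softmax over the 5 tones."""
    hidden = np.tanh(features @ w1)
    logits = hidden @ w2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical per-syllable features: F0 (Hz), duration (s), energy.
features = np.array([180.0, 0.25, 0.7])
features = (features - features.mean()) / features.std()  # crude normalization
w1 = rng.standard_normal((3, 10)) * 0.5
w2 = rng.standard_normal((10, 5)) * 0.5  # mid, low, falling, high, rising
tone_probs = mlp_tone_scores(features, w1, w2)
```

In the study, the input also includes features from neighboring syllables (to capture tonal coarticulation) and the weights are learned by backpropagation; this sketch only shows the classification step.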