
    Confidence Scoring and Speaker Adaptation in Mobile Automatic Speech Recognition Applications

    Generally, the user group of a language is remarkably diverse in terms of speaker-specific characteristics such as dialect and speaking style. Hence, the quality of spoken content varies notably from one individual to another. This diversity causes problems for Automatic Speech Recognition (ASR) systems. An ASR system should be able to assess its hypothesised results. This can be done by evaluating a confidence measure on the recognition results and comparing it to a specified threshold. The resulting measure, referred to as the confidence score, indicates how reliable a particular recognition result is for the given speech. A system should perform optimally irrespective of input speaker characteristics. However, most systems are inflexible and non-adaptive, so speaker adaptability can be improved. For these purposes, a solid criterion is required for evaluating the quality of spoken content, and the system should also be made robust and adaptive towards new speakers. This thesis implements a confidence score using posterior probabilities to examine the quality of the output, based on speech data and corpora provided by Devoca Oy. Furthermore, two speaker adaptation algorithms, Maximum Likelihood Linear Regression (MLLR) and Maximum a Posteriori (MAP), are applied to a GMM-HMM system and their results are compared. Experiments show that MAP adaptation brings a 2% to 25% improvement in the word error rate of the semi-continuous model and is recommended for use in the commercial product. The results of the other methods are also reported. In addition, the word graph is suggested as the method for obtaining posterior probabilities. Since the confidence score guarantees no such improvement in the results, it is proposed as an optional feature for the system.
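
    As a rough illustration of the word-graph approach mentioned above (notation mine, not taken from the thesis), the posterior-probability confidence score of a hypothesised word w given the acoustics O is typically computed by summing, over the word graph, the probability mass of all paths containing w and normalising by the total path mass:

        C(w) = p(w \mid O) = \frac{\sum_{W :\, w \in W} p(O \mid W)\, P(W)}{\sum_{W'} p(O \mid W')\, P(W')}

    The sums run over paths in the word graph and can be computed efficiently with a forward-backward pass; a hypothesis is accepted when C(w) exceeds the chosen threshold \theta.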

    Bayesian Speaker Adaptation Based on a New Hierarchical Probabilistic Model

    In this paper, a new hierarchical Bayesian speaker adaptation method called HMAP is proposed that combines the advantages of three conventional algorithms, maximum a posteriori (MAP), maximum-likelihood linear regression (MLLR), and eigenvoice, resulting in excellent performance across a wide range of adaptation conditions. The new method efficiently utilizes intra-speaker and inter-speaker correlation information by modeling phone and speaker subspaces in a consistent hierarchical Bayesian way. The phone variations for a specific speaker are assumed to lie in a low-dimensional subspace. The phone coordinates, which are shared among different speakers, implicitly contain the intra-speaker correlation information. For a specific speaker, the phone variations, represented by speaker-dependent eigenphones, are concatenated into a supervector. The eigenphone supervector space is also a low-dimensional speaker subspace, which contains the inter-speaker correlation information. Using principal component analysis (PCA), a new hierarchical probabilistic model for the generation of the speech observations is obtained. Speaker adaptation based on the new hierarchical model is derived using the maximum a posteriori criterion in a top-down manner. Both batch and online adaptation schemes are proposed. With tuned parameters, the new method can handle varying amounts of adaptation data automatically and efficiently. Experimental results on a Mandarin Chinese continuous speech recognition task show good performance under all testing conditions.
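
    To make the hierarchy concrete, here is a minimal sketch of the generative structure described above, in notation of my own choosing rather than the paper's: the mean of phone p for speaker s is modelled as

        \mu_p^{(s)} = \mu_p + \sum_{k=1}^{K} l_{p,k}\, v_k^{(s)}

    where the phone coordinates l_{p,k} are shared across speakers (capturing intra-speaker correlation) and the eigenphones v_k^{(s)} are speaker dependent. Concatenating the eigenphones into a supervector v^{(s)} and applying PCA constrains it to a low-dimensional speaker subspace,

        v^{(s)} \approx \bar{v} + U\, w^{(s)}

    which captures the inter-speaker correlation; MAP estimation of the speaker-dependent quantities then yields the adapted model.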

    Using out-of-language data to improve an under-resourced speech recognizer

    Under-resourced speech recognizers may benefit from data in languages other than the target language. In this paper, we report how to boost the performance of an Afrikaans automatic speech recognition system by using already available Dutch data. We successfully exploit available multilingual resources through 1) posterior features estimated by multilayer perceptrons (MLPs) and 2) subspace Gaussian mixture models (SGMMs). Both the MLPs and the SGMMs can be trained on out-of-language data. We use three different acoustic modeling techniques, namely Tandem, Kullback-Leibler divergence based HMMs (KL-HMM), and SGMMs, and show that the proposed multilingual systems yield a 12% relative improvement over a conventional monolingual HMM/GMM system trained only on Afrikaans. We also show that KL-HMMs are extremely powerful for under-resourced languages: using only six minutes of Afrikaans data (in combination with out-of-language data), KL-HMM yields about 30% relative improvement over conventional maximum likelihood linear regression and maximum a posteriori based acoustic model adaptation.
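
    For readers unfamiliar with KL-HMMs, the usual formulation (standard in the KL-HMM literature, not specific to this paper) associates each HMM state d with a multinomial distribution y_d over the MLP's phone classes and scores a frame by the Kullback-Leibler divergence between that distribution and the MLP posterior vector z_t:

        KL(y_d \,\|\, z_t) = \sum_{k=1}^{K} y_{d,k} \log \frac{y_{d,k}}{z_{t,k}}

    This divergence replaces the log-likelihood as the local cost in training and decoding. Because the MLP can be trained on out-of-language data, only the small state multinomials require target-language speech, which helps explain why the method remains effective with as little as six minutes of Afrikaans data.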

    Generalization of Extended Baum-Welch Parameter Estimation for Discriminative Training and Decoding

    We demonstrate the generalizability of the Extended Baum-Welch (EBW) algorithm not only for HMM parameter estimation but for decoding as well. We show that there can exist a general function associated with the objective function under EBW that reduces to the well-known auxiliary function used in the Baum-Welch algorithm for maximum likelihood estimates. We generalize the representation of the model parameter updates by making use of a differentiable function (such as the arithmetic or geometric mean) of the updated and current model parameters, and describe its effect on the learning rate during HMM parameter estimation. Improvements on speech recognition tasks are also presented.
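
    For context, the standard EBW re-estimation rule that this work generalizes can be sketched as follows (common discriminative-training notation, not copied from the paper): given numerator (reference) and denominator (competitor) statistics, the Gaussian mean update is

        \hat{\mu} = \frac{\theta^{num}(O) - \theta^{den}(O) + D\,\mu}{\gamma^{num} - \gamma^{den} + D}

    where \theta(O) denotes first-order statistics of the observations, \gamma denotes occupancies, and the damping constant D controls the effective learning rate. The paper's generalization replaces this fixed interpolation of current and updated parameters with a differentiable combining function such as an arithmetic or geometric mean.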

    Robustness issues in a data-driven spoken language understanding system

    Robustness is a key requirement in spoken language understanding (SLU) systems. Human speech is often ungrammatical and ill-formed, and there will frequently be a mismatch between training and test data. This paper discusses robustness and adaptation issues in a statistically-based SLU system which is entirely data-driven. To test robustness, the system has been tested on data from the Air Travel Information Service (ATIS) domain which has been artificially corrupted with varying levels of additive noise. Although the speech recognition performance degraded steadily, the system did not fail catastrophically. Indeed, the rate at which the end-to-end performance of the complete system degraded was significantly slower than that of the actual recognition component. In a second set of experiments, the ability to rapidly adapt the core understanding component of the system to a different application within the same broad domain has been tested. Using only a small amount of training data, experiments have shown that a semantic parser based on the Hidden Vector State (HVS) model originally trained on the ATIS corpus can be straightforwardly adapted to the somewhat different DARPA Communicator task using standard adaptation algorithms. The paper concludes by suggesting that the results presented provide initial support for the claim that an SLU system which is statistically-based and trained entirely from data is intrinsically robust and can be readily adapted to new applications.
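
    As background on the parser itself (the decomposition below follows the usual presentation of the HVS model and may differ in notation from the paper): each hidden vector state c_t is a stack of semantic concept labels, and the joint probability of a word sequence W and state sequence C factorizes into stack-pop, concept-push, and word-emission terms:

        P(W, C) = \prod_{t=1}^{T} P(n_t \mid c_{t-1})\; P(c_t[1] \mid c_t[2..D_t])\; P(w_t \mid c_t)

    where n_t is the number of labels popped at step t, c_t[1] is the newly pushed concept, and D_t is the stack depth. Adapting the parser to a new domain then amounts to re-estimating these conditional distributions, which standard adaptation algorithms can do from small amounts of data.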

    Combining phonological and acoustic ASR-free features for pathological speech intelligibility assessment

    Intelligibility is widely used to measure the severity of articulatory problems in pathological speech. Recently, a number of automatic intelligibility assessment tools have been developed. Most of them use automatic speech recognizers (ASR) to compare the patient's utterance with the target text. These methods are bound to one language and tend to be less accurate when speakers hesitate or make reading errors. To circumvent these problems, two different ASR-free methods were developed over the last few years, making use only of the acoustic or phonological properties of the utterance. In this paper, we demonstrate that these ASR-free techniques are also able to predict intelligibility in other languages. Moreover, they prove to be complementary, resulting in even better intelligibility predictions when both methods are combined.
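
    The abstract does not spell out the fusion mechanism, but as a purely hypothetical sketch of late fusion between two complementary predictors (not necessarily the paper's exact scheme), one could take

        \hat{I} = \alpha\, \hat{I}_{phon} + (1 - \alpha)\, \hat{I}_{acoust}, \qquad 0 \le \alpha \le 1

    where \hat{I}_{phon} and \hat{I}_{acoust} are the intelligibility predictions of the phonological and acoustic ASR-free models and \alpha is tuned on held-out data; when the two predictors make uncorrelated errors, the combination can outperform either one alone.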

    MAP Adaptation of the Acoustic Model in Automatic Speech Recognition

    The purpose of the acoustic model in an Automatic Speech Recognition system is to model the acoustic properties of speech. Speech, however, has a great deal of internal variation, which makes developing a general acoustic model for all purposes extremely difficult. Adaptation is used to tune general acoustic models to a specific task in order to improve the performance of the system. Maximum A Posteriori (MAP) adaptation is one of the most common acoustic model adaptation techniques in speech recognition. For this thesis, a MAP adaptation scheme was implemented in AaltoASR, the automatic speech recognition system of Aalto University. The implementation was tested on speaker adaptation and compared with constrained Maximum Likelihood Linear Regression (CMLLR) adaptation to confirm that it functions properly. The results matched previous studies comparing the two methods, so the implementation was concluded to be working correctly: CMLLR adaptation performs better when the adaptation set is shorter than 10 minutes, while beyond that MAP adaptation is superior, since MAP benefits more than CMLLR from additional adaptation data. The MAP implementation has uses beyond speaker adaptation. It successfully reduced the size of the acoustic model while improving its performance, and it was also used to adapt the models to colloquial language by giving more weight to a chosen corpus after Maximum Likelihood or discriminative training.
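
    For reference, the standard MAP mean update implemented in such schemes (textbook form; the thesis's exact formulation may differ in detail) interpolates the prior mean \mu_0 of a Gaussian with the sufficient statistics of the adaptation data:

        \hat{\mu} = \frac{\tau\, \mu_0 + \sum_t \gamma(t)\, o_t}{\tau + \sum_t \gamma(t)}

    where \gamma(t) is the occupation probability of the Gaussian at frame t, o_t is the observation, and \tau is a prior weight. With little data the estimate stays close to \mu_0, and as \sum_t \gamma(t) grows it approaches the Maximum Likelihood estimate, which matches the observed behaviour: CMLLR wins below about 10 minutes of adaptation data, MAP beyond that.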