3,716 research outputs found

    Variable-Length Sequence Language Model for Large Vocabulary Continuous Dictation Machine

    Conference paper with proceedings and peer review. In natural language, some sequences of words are very frequent. A classical language model, like an n-gram, does not adequately take such sequences into account, because it underestimates their probabilities. A better approach consists in modeling word sequences as if they were individual dictionary elements. Sequences are considered as additional entries of the word lexicon, on which language models are computed. In this paper, we present two methods for automatically determining frequent phrases in unlabeled corpora of written sentences. These methods are based on information-theoretic criteria which ensure high statistical consistency. Our models reach a local optimum, since they minimize perplexity. One procedure is based only on the n-gram language model to extract word sequences. The second one is based on a class n-gram model trained on 233 classes derived from the eight grammatical classes of French. Experimental tests, in terms of perplexity and recognition rate, are carried out on a vocabulary of 20,000 words and a corpus of 43 million words extracted from the "Le Monde" newspaper. Our models reduce perplexity by more than 20% compared with n-gram (n ≥ 3) and multigram models. In terms of recognition rate, our models outperform n-gram and multigram models.
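    The abstract leaves the exact information-theoretic criterion unspecified; the sketch below uses pointwise mutual information (PMI), a standard criterion for promoting frequent word pairs to single lexicon entries. The function name, thresholds, and toy corpus are illustrative assumptions, not the paper's implementation; in the paper, a candidate sequence would additionally be accepted only if adding it to the lexicon lowers perplexity.

```python
import math
from collections import Counter

def extract_phrases(corpus, min_count=5, threshold=3.0):
    """Score adjacent word pairs by pointwise mutual information (PMI)
    and promote high-scoring, frequent pairs to single lexicon entries.
    A simplified stand-in for the paper's information-theoretic criteria."""
    unigrams = Counter(w for sent in corpus for w in sent)
    bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))
    total = sum(unigrams.values())

    phrases = []
    for (w1, w2), count in bigrams.items():
        if count < min_count:
            continue  # frequency floor, for statistical consistency
        pmi = math.log(count * total / (unigrams[w1] * unigrams[w2]))
        if pmi > threshold:
            phrases.append((f"{w1}_{w2}", pmi))  # candidate lexicon entry
    return sorted(phrases, key=lambda p: -p[1])

corpus = [["le", "premier", "ministre", "a", "dit"],
          ["le", "premier", "ministre", "est", "ici"]]
print(extract_phrases(corpus, min_count=2, threshold=0.5))
```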

    The Unsupervised Acquisition of a Lexicon from Continuous Speech

    We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency. Comment: 27-page technical report.
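    As a rough illustration of the MDL idea, the sketch below computes a two-part description length: bits to encode the lexicon itself, plus bits to encode the corpus re-encoded as lexicon entries, so that a lexicon of real words can beat a lexicon of raw letters. The greedy longest-match segmentation and unigram code costs are simplifying assumptions; the paper uses a richer hierarchical model and operates on speech rather than text.

```python
import math
from collections import Counter

def description_length(corpus, lexicon):
    """Two-part MDL cost: bits for the lexicon plus bits for the corpus
    encoded as a sequence of lexicon entries. Segmentation here is greedy
    longest-match; unknown characters fall back to single-letter tokens."""
    # Part 1: lexicon cost, approximated as characters at uniform cost.
    lex_bits = sum(len(w) for w in lexicon) * math.log2(27)  # 26 letters + boundary

    # Part 2: corpus cost under the unigram distribution of segmented tokens.
    ordered = sorted(lexicon, key=len, reverse=True)  # longest match first
    tokens = []
    for text in corpus:
        i = 0
        while i < len(text):
            match = next((w for w in ordered if text.startswith(w, i)), text[i])
            tokens.append(match)
            i += len(match)
    counts = Counter(tokens)
    n = len(tokens)
    data_bits = -sum(c * math.log2(c / n) for c in counts.values())
    return lex_bits + data_bits

corpus = ["thedogran", "thedogsat"]
print(description_length(corpus, {"the", "dog", "ran", "sat"}))
print(description_length(corpus, set("thedogransat")))  # letter-level baseline
```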

    Access to recorded interviews: A research agenda

    Recorded interviews form a rich basis for scholarly inquiry. Examples include oral histories, community memory projects, and interviews conducted for broadcast media. Emerging technologies offer the potential to radically transform the way in which recorded interviews are made accessible, but this vision will demand substantial investments from a broad range of research communities. This article reviews the present state of practice for making recorded interviews available and the state of the art for key component technologies. A large number of important research issues are identified, and from that set of issues, a coherent research agenda is proposed.

    Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals

    Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. In this paper, we develop a novel machine learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals. For this purpose, we propose an integrative generative model that combines a language model and an acoustic model into a single generative model called the "hierarchical Dirichlet process hidden language model" (HDP-HLM). The HDP-HLM is obtained by extending the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. An inference procedure for the HDP-HLM is derived using the blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure enables the simultaneous and direct inference of language and acoustic models from continuous speech signals. Based on the HDP-HLM and its inference procedure, we developed a novel double articulation analyzer. By assuming the HDP-HLM as a generative model of observed time-series data, and by inferring the latent variables of the model, the method can analyze the latent double articulation structure, i.e., hierarchically organized latent words and phonemes, of the data in an unsupervised manner. This novel unsupervised double articulation analyzer is called the NPB-DAA. The NPB-DAA can automatically estimate the double articulation structure embedded in speech signals. We also carried out two evaluation experiments using synthetic data and actual human continuous speech signals representing Japanese vowel sequences. In the word acquisition and phoneme categorization tasks, the NPB-DAA outperformed a conventional double articulation analyzer (DAA) and a baseline automatic speech recognition system whose acoustic model was trained in a supervised manner. Comment: 15 pages, 7 figures. Draft submitted to IEEE Transactions on Autonomous Mental Development (TAMD).
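    The HDP-HLM itself is too involved for a short sketch, but the double articulation structure it inverts can be shown as a toy generative process: a word-level language model emits a word sequence, each word expands into phonemes, and each phoneme emits a run of continuous acoustic observations. All lexicons, bigram probabilities, and means below are made-up assumptions for illustration only; the actual model places nonparametric priors over both levels and infers them with a blocked Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double-articulation generative process: latent words are sequences
# of latent phonemes; each phoneme emits a run of noisy observations.
phoneme_means = {"a": 0.0, "i": 2.0, "u": 4.0}            # toy acoustic model
word_lexicon = {"w1": ["a", "i"], "w2": ["u", "a", "i"]}  # toy word lexicon
word_bigram = {"w1": {"w1": 0.3, "w2": 0.7},              # toy language model
               "w2": {"w1": 0.6, "w2": 0.4}}

def generate(n_words, dur=3, noise=0.3):
    """Sample words, expand each into phonemes, and emit noisy acoustic
    frames -- the unsegmented signal the NPB-DAA has to invert."""
    obs, word = [], "w1"
    for _ in range(n_words):
        probs = word_bigram[word]
        word = rng.choice(list(probs), p=list(probs.values()))
        for ph in word_lexicon[word]:
            obs.extend(rng.normal(phoneme_means[ph], noise, size=dur))
    return np.array(obs)

print(generate(4))
```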

    MAP Based Speaker Adaptation in Very Large Vocabulary Speech Recognition of Czech

    The paper deals with the problem of efficient adaptation of speech recognition systems to individual users. The goal is to achieve better performance in specific applications where one known speaker is expected. In our approach we adopt the MAP (Maximum A Posteriori) method for this purpose. The MAP-based formulae for the adaptation of the HMM (Hidden Markov Model) parameters are described. Several alternative versions of this method have been implemented and experimentally verified in two areas: first in an isolated-word recognition (IWR) task, and later also in a large vocabulary continuous speech recognition (LVCSR) system, both developed for the Czech language. The results show that the word error rate (WER) can be reduced by more than 20% for a speaker who provides tens of words (in the case of IWR) or tens of sentences (in the case of LVCSR) for the adaptation. Recently, we have used the described methods in the design of two practical applications: voice dictation to a PC and automatic transcription of radio and TV news.
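    A minimal sketch of the core MAP update for one Gaussian mean in an HMM state, following the standard Gauvain-Lee formulation; the paper's specific variants are not given in the abstract, and the function name and example data below are illustrative assumptions. The prior weight tau controls how far limited adaptation data can pull the speaker-independent mean.

```python
import numpy as np

def map_adapt_mean(prior_mean, frames, gammas, tau=10.0):
    """MAP re-estimation of a Gaussian mean in an HMM state:
        mu_hat = (tau * mu0 + sum_t gamma_t * x_t) / (tau + sum_t gamma_t)
    prior_mean : speaker-independent mean (the prior), shape (D,)
    frames     : adaptation feature vectors, shape (T, D)
    gammas     : occupation probabilities of this Gaussian, shape (T,)
    tau        : prior weight; with little data the estimate stays near
                 the prior, with much data it approaches the ML mean."""
    gamma_sum = gammas.sum()
    weighted = (gammas[:, None] * frames).sum(axis=0)
    return (tau * prior_mean + weighted) / (tau + gamma_sum)

prior = np.zeros(2)
frames = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1]])
gammas = np.array([0.9, 0.8, 0.7])
print(map_adapt_mean(prior, frames, gammas))  # shifts from prior toward data
```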

    Beyond the Conventional Statistical Language Models: The Variable-Length Sequences Approach

    International conference paper with proceedings and peer review. International audience. In natural language, several sequences of words are very frequent. A classical language model, like an n-gram, does not adequately take such sequences into account, because it underestimates their probabilities. A better approach consists in modeling word sequences as if they were individual dictionary elements. Sequences are considered as additional entries of the word lexicon, on which language models are computed. In this paper, we present an original method for automatically determining the most important phrases in corpora. This method is based on information-theoretic criteria, which ensure high statistical consistency, and on French grammatical classes, which capture additional types of linguistic dependencies. In addition, perplexity is used to make the decision to select a potential sequence more accurate. We also propose several variants of language models with and without word sequences. Among them, we present a model in which the trigger pairs are more significant linguistically. The originality of this model, compared with commonly used trigger approaches, is the use of word sequences to estimate the trigger pairs without limiting itself to single words. Experimental tests, in terms of perplexity and recognition rate, are carried out on a vocabulary of 20,000 words and a corpus of 43 million words. The use of the word sequences proposed by our algorithm reduces perplexity by more than 16% compared to models limited to single words. The introduction of these word sequences into our dictation machine improves accuracy by approximately 15%.
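    The sketch below illustrates the trigger-pair idea extended to sequences: candidate triggers (A -> B) are ranked by the mutual information of their document co-occurrence, and a unit A or B may be a multiword sequence treated as a single lexicon entry (e.g. "prime_minister"). The scoring function, document representation, and example data are illustrative assumptions, not the paper's procedure.

```python
import math
from collections import Counter

def trigger_pairs(documents, units, top_k=5):
    """Rank candidate trigger pairs (A -> B) by a mutual-information score
    of their document co-occurrence. Units may be single words or
    multiword sequences already merged into one lexicon entry."""
    n = len(documents)
    df = Counter()   # documents containing each unit
    co = Counter()   # documents containing both units of an ordered pair
    for doc in documents:
        present = {u for u in units if u in doc}
        for u in present:
            df[u] += 1
        for a in present:
            for b in present:
                if a != b:
                    co[(a, b)] += 1

    def mi(a, b):
        p_ab = co[(a, b)] / n
        p_a, p_b = df[a] / n, df[b] / n
        return p_ab * math.log(p_ab / (p_a * p_b)) if p_ab > 0 else 0.0

    scored = [((a, b), mi(a, b)) for (a, b) in co]
    return sorted(scored, key=lambda p: -p[1])[:top_k]

docs = [{"prime_minister", "government"}, {"prime_minister", "government"},
        {"prime_minister", "election"}, {"weather", "rain"}]
print(trigger_pairs(docs, {"prime_minister", "government",
                           "election", "weather", "rain"}))
```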