8 research outputs found

    Morphologically motivated word classes for very large vocabulary speech recognition of Finnish and Estonian

    Get PDF
    We study class-based n-gram and neural network language models for very large vocabulary speech recognition of two morphologically rich languages: Finnish and Estonian. Due to morphological processes such as derivation, inflection and compounding, the models need to be trained with vocabulary sizes of several million word types. Class-based language modelling is in this case a powerful approach to alleviate data sparsity and reduce the computational load. For a very large vocabulary, bigram statistics may not be an optimal way to derive the classes. We thus study utilizing the output of a morphological analyzer to obtain efficient word classes. We show that efficient classes can be learned by refining the morphological classes into smaller equivalence classes using merging, splitting and exchange procedures with suitable constraints. This type of classification can improve the results, particularly when the language model training data is not very large. We also extend the previous analyses by rescoring the hypotheses obtained from a very large vocabulary recognizer using class-based neural network language models. We show that despite the fixed vocabulary, carefully constructed classes for word-based language models can in some cases result in lower error rates than subword-based unlimited vocabulary language models. Peer reviewed.
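    For readers unfamiliar with the two-level factorization behind class-based language models, here is a minimal Python sketch: it trains a toy class bigram model and scores a word as P(class | previous class) × P(word | class). The `word2class` mapping stands in for the refined morphological classes described above; all names are illustrative, not the authors' implementation.

```python
from collections import defaultdict

# Minimal sketch of a class-based bigram language model:
#   P(w_i | w_{i-1}) ~= P(class(w_i) | class(w_{i-1})) * P(w_i | class(w_i))
# `word2class` is a hypothetical word-to-class mapping, e.g. obtained by refining
# the output of a morphological analyzer; unmapped tokens become singleton classes.

def _cls(word, word2class):
    return word2class.get(word, word)

def train_class_bigram(sentences, word2class):
    class_bigram = defaultdict(lambda: defaultdict(int))   # counts c(C_prev, C_cur)
    word_in_class = defaultdict(lambda: defaultdict(int))   # counts c(C, w)
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for prev, cur in zip(tokens, tokens[1:]):
            c_prev, c_cur = _cls(prev, word2class), _cls(cur, word2class)
            class_bigram[c_prev][c_cur] += 1
            word_in_class[c_cur][cur] += 1
    return class_bigram, word_in_class

def class_bigram_prob(prev_word, word, word2class, class_bigram, word_in_class):
    c_prev, c_cur = _cls(prev_word, word2class), _cls(word, word2class)
    class_total = sum(class_bigram[c_prev].values()) or 1
    member_total = sum(word_in_class[c_cur].values()) or 1
    return (class_bigram[c_prev][c_cur] / class_total) * \
           (word_in_class[c_cur][word] / member_total)
```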

    Sistema de reconocimiento de habla en español con adaptación al discurso

    Get PDF
    This work presents a high-performance automatic speech recognition system for Spanish designed to meet two objectives. First, to achieve recognition rates comparable to state-of-the-art systems of this kind. Second, to evaluate the performance of a new language model estimation method proposed in earlier work by our group. The results show a recognition rate close to 90% for a 5000-word vocabulary, which is of the same order as results reported for systems with similar characteristics in English. We also verified that the language model based on our proposed estimator significantly improves system performance compared with two other language models built with the best known algorithms. Track: Workshop on Agents and Intelligent Systems (WASI). Red de Universidades con Carreras en Informática (RedUNCI).

    Error Correction Using Probabilistic Language Models

    Get PDF
    Error correction has applications in a variety of domains, given the prevalence of errors of various kinds and the need to correct them programmatically and as accurately as possible. For example, error correction is used in portable mobile devices to fix typographical errors while taking input from the keypad. It can also be useful in lower-level applications, such as fixing errors in storage media or correcting network transmission errors. The precision and influence of such techniques vary with the requirements and capabilities of the correction method, but they essentially form part of the application's effective functioning. This research primarily focuses on techniques that provide error correction given the location of the erroneous token. The errors are essentially erasures: missing bits in a stream of data whose locations are known. The basic idea behind these techniques is to build up contextual information from an error-free training corpus and use the resulting models to provide alternative suggestions that could replace the erroneous tokens. We look into two models: the topic-based LDA (Latent Dirichlet Allocation) model and the n-gram model. We also propose an efficient mechanism to process such errors that offers exponential speed-ups. Using these models, we achieve up to 5% improvement in accuracy compared to a standard word distribution model, using minimal domain knowledge.
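    As a rough illustration of the n-gram side of this approach, the sketch below ranks replacement candidates for a token at a known erasure position by how well they fit both the left and right context under a toy trigram model with add-one smoothing. The function names and smoothing choice are assumptions for illustration, not the thesis' implementation.

```python
from collections import defaultdict

# Train a toy trigram model on an error-free corpus, then rank candidate
# replacements for an erasure at a known position by left and right context fit.

def train_trigrams(sentences):
    counts = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        tokens = ["<s>", "<s>"] + sent + ["</s>"]
        for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
            counts[(a, b)][c] += 1
    return counts

def suggest(tokens, erasure_idx, counts, vocab, top_k=5):
    padded = ["<s>", "<s>"] + list(tokens) + ["</s>"]
    i = erasure_idx + 2                      # index of the erasure in the padded stream
    left = (padded[i - 2], padded[i - 1])
    scored = []
    for cand in vocab:
        left_total = sum(counts[left].values())
        p_left = (counts[left][cand] + 1) / (left_total + len(vocab))      # add-one smoothing
        right_ctx = (padded[i - 1], cand)
        right_total = sum(counts[right_ctx].values())
        p_right = (counts[right_ctx][padded[i + 1]] + 1) / (right_total + len(vocab))
        scored.append((p_left * p_right, cand))
    return [cand for _, cand in sorted(scored, reverse=True)[:top_k]]
```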

    Beyond Academics: A Correlational Study of Social Emotional Learning Competency Growth and Student Demographic Data

    Get PDF
    A growing body of evidence highlights the relationship between youth attainment of social emotional learning (SEL) competencies and school outcomes such as academic performance, school attendance, school attainment, behavioral problems in school, and persistence of antisocial behavior. A lack of clear diversity in student populations in prior research leads to questions regarding growth in social emotional learning competencies in diverse populations. Given disparities in academic performance between low-income minority students and their affluent peers and the potential impact of SEL on student outcomes, this topic deserves further exploration. The present study explored social emotional learning competency growth in youth, investigating the following questions: To what extent do student demographic and socio-cultural characteristics (gender, age, enrollment in special education, English language learner status, free/reduced lunch status) relate to social emotional learning competency growth? In what way does a student's grade level relate to their rate of social emotional learning competency growth? This correlational study explored SEL competency data collected by a large charter school network via the Panorama Social Emotional Survey, an open-source assessment that measured student social emotional skills and mindsets via student self-report. The survey was conducted twice, once in the fall and again in the spring. Multiple regression analysis and one-way analysis of variance (ANOVA) tests were used to answer the research questions. Regression models were tested with three measures of social emotional learning competencies in spring as dependent variables, demographic and socio-cultural characteristics as independent variables, and fall scores included as control variables. The data suggest positive associations between development in all three SEL competencies and age. Student grade level, gender, enrollment in special education, enrollment in free and reduced lunch programming, and English language learner status each had significant positive or negative associations with one to two measures of SEL competency growth. Implications of the study findings include expanded research into other SEL competencies and their associations with socio-cultural and demographic characteristics, development of SEL interventions, and applications of social emotional learning competency training in diverse and historically underrepresented populations.
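    As an illustration of the regression design described above (spring score as the outcome, demographic indicators as predictors, fall score entered as a control), here is a sketch on synthetic data using statsmodels; all column names and values are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one SEL measure in spring regressed on demographics,
# with the fall score as a control variable.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "fall_score": rng.normal(3.0, 0.5, n),
    "age": rng.integers(11, 18, n),
    "gender": rng.choice(["F", "M"], n),
    "special_ed": rng.choice([0, 1], n),
    "ell": rng.choice([0, 1], n),
    "frl": rng.choice([0, 1], n),
})
df["spring_score"] = df["fall_score"] + 0.02 * df["age"] + rng.normal(0, 0.3, n)

model = smf.ols(
    "spring_score ~ fall_score + age + C(gender) + special_ed + ell + frl",
    data=df,
).fit()
print(model.params)   # coefficients for each predictor, controlling for fall score
```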

    A theory of (almost) zero resource speech recognition

    Get PDF
    Automatic speech recognition has matured into a commercially successful technology, enabling voice-based interfaces for smartphones, smart TVs, and many other consumer devices. The overwhelming popularity, however, is still limited to languages such as English, Japanese, and German, where vast amounts of labeled training data are available. For most other languages, it is prohibitively expensive to 1) collect and transcribe the speech data required to learn good acoustic models; and 2) acquire adequate text to estimate meaningful language models. A theory of unsupervised and semi-supervised techniques for speech recognition is therefore essential. This thesis focuses on HMM-based sequence clustering and examines acoustic modeling, language modeling, and applications beyond the components of an ASR system, such as anomaly detection, from the vantage point of PAC-Bayesian theory. The first part of this thesis extends standard PAC-Bayesian bounds to address the sequential nature of speech and language signals. A novel algorithm, based on sparsifying the cluster assignment probabilities with a Rényi entropy prior, is shown to provably minimize the generalization error of any probabilistic model (e.g., HMMs). The second part examines application-specific loss functions such as cluster purity and perplexity. Empirical results on a variety of tasks (acoustic event detection, class-based language modeling, and unsupervised sequence anomaly detection) confirm the practicality of the theory and algorithms developed in this thesis.
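    The sketch below is not the thesis' algorithm; it only shows the two quantities the abstract refers to: the Rényi entropy of a cluster assignment distribution, and a simple sparsification step that zeroes out low-probability clusters and renormalizes. The threshold and values are illustrative assumptions.

```python
import numpy as np

# Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), with the
# alpha -> 1 limit recovering Shannon entropy. Lower entropy = sparser assignment.
def renyi_entropy(p, alpha=0.5):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    if np.isclose(alpha, 1.0):
        nz = p[p > 0]
        return float(-np.sum(nz * np.log(nz)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# Naive sparsification: drop clusters whose posterior mass falls below a threshold.
def sparsify(p, threshold=0.05):
    p = np.asarray(p, dtype=float)
    p = np.where(p < threshold, 0.0, p)
    return p / p.sum()

q = np.array([0.55, 0.30, 0.10, 0.03, 0.02])   # toy posterior over HMM clusters
print(renyi_entropy(q, alpha=0.5), sparsify(q))
```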

    Adaptation and Augmentation: Towards Better Rescoring Strategies for Automatic Speech Recognition and Spoken Term Detection

    Full text link
    Selecting the best prediction from a set of candidates is an essential problem for many spoken language processing tasks, including automatic speech recognition (ASR) and spoken keyword spotting (KWS). Generally, the selection is determined by a confidence score assigned to each candidate. Calibrating these confidence scores (i.e., rescoring them) could make better selections and improve the system performance. This dissertation focuses on using tailored language models to rescore ASR hypotheses as well as keyword search results for ASR-based KWS. This dissertation introduces three kinds of rescoring techniques: (1) Freezing most model parameters while fine-tuning the output layer in order to adapt neural network language models (NNLMs) from the written domain to the spoken domain. Experiments on a large-scale Italian corpus show a 30.2% relative reduction in perplexity at the word-cluster level and a 2.3% relative reduction in WER in a state-of-the-art Italian ASR system. (2) Incorporating source application information associated with speech queries. By exploring a range of adaptation model architectures, we achieve a 21.3% relative reduction in perplexity compared to a fine-tuned baseline. Initial experiments using a state-of-the-art Italian ASR system show a 3.0% relative reduction in WER on top of an unadapted 5-gram LM. In addition, human evaluations show significant improvements by using the source application information. (3) Marrying machine learning algorithms (classification and ranking) with a variety of signals to rescore keyword search results in the context of KWS for low-resource languages. These systems, built for the IARPA BABEL Program, enhance search performance in terms of maximum term-weighted value (MTWV) across six different low-resource languages: Vietnamese, Tagalog, Pashto, Turkish, Zulu and Tamil
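    Technique (1) above can be sketched in a few lines of PyTorch: freeze every parameter of a neural network language model except the output layer, then fine-tune only that layer on spoken-domain data. The model below is a stand-in; its architecture and hyperparameters are illustrative assumptions, not the dissertation's system.

```python
import torch
import torch.nn as nn

# Stand-in NNLM: embedding -> LSTM -> output projection over the vocabulary.
class TinyNNLM(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.output(hidden)

model = TinyNNLM()

# Domain adaptation step: freeze all parameters except those of the output layer.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("output")

# Only the unfrozen (output-layer) parameters are passed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```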

    Modeling language with structured penalties

    Get PDF
    Modeling natural language is among the fundamental challenges of artificial intelligence and the design of interactive machines, with applications spanning various domains, such as dialogue systems, text generation and machine translation. We propose a discriminatively trained log-linear model to learn the distribution of words following a given context. Due to data sparsity, it is necessary to appropriately regularize the model using a penalty term. We design a penalty term that properly encodes the structure of the feature space to avoid overfitting and improve generalization while appropriately capturing long-range dependencies. Some properties of specific structured penalties can be used to reduce the number of parameters required to encode the model. The outcome is an efficient model that suitably captures long dependencies in language without a significant increase in time or space requirements. In a log-linear model, both training and testing become increasingly expensive with a growing number of classes. The number of classes in a language model is the size of the vocabulary, which is typically very large. A common trick is to cluster classes and apply the model in two steps: the first step picks the most probable cluster and the second picks the most probable word from the chosen cluster. This idea can be generalized to a hierarchy of larger depth with multiple levels of clustering. However, the performance of the resulting hierarchical classifier depends on the suitability of the clustering to the problem. We study different strategies to build the hierarchy of categories from their observations.
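    The two-step cluster-then-word prediction described above can be illustrated with a tiny numerical example; the probability tables below are random stand-ins for a trained log-linear model's outputs, not the paper's method.

```python
import numpy as np

# Two-step prediction: argmax over clusters given the context, then argmax over
# words within the chosen cluster. This trades an exact argmax over the full
# vocabulary for two much smaller argmax operations.
rng = np.random.default_rng(0)
n_clusters, words_per_cluster = 4, 5

p_cluster = rng.dirichlet(np.ones(n_clusters))                    # P(cluster | context)
p_word_in_cluster = rng.dirichlet(np.ones(words_per_cluster),     # P(word | cluster)
                                  size=n_clusters)

best_cluster = int(np.argmax(p_cluster))
best_word = int(np.argmax(p_word_in_cluster[best_cluster]))
p_joint = p_cluster[best_cluster] * p_word_in_cluster[best_cluster, best_word]
print(best_cluster, best_word, p_joint)
```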

    Performance Prediction for Exponential Language Models

    Get PDF
    We investigate the task of performance prediction for language models belonging to the exponential family. First, we attempt to empirically discover a formula for predicting test set cross-entropy for n-gram language models. We build models over varying domains, data set sizes, and n-gram orders, and perform linear regression to see whether we can model test set performance as a simple function of training set performance and various model statistics. Remarkably, we find a simple relationship that predicts test set performance with a correlation of 0.9997. We analyze why this relationship holds and show that it holds for other exponential language models as well, including class-based models and minimum discrimination information models. Finally, we discuss how this relationship can be applied to improve language model performance.
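    A toy version of the regression setup on synthetic data: fit test-set cross-entropy as a linear function of training-set cross-entropy and one model statistic, then check how well the fit predicts held-in performance. The feature choice and numbers are illustrative assumptions, not the paper's formula or data.

```python
import numpy as np

# Synthetic "models": each has a training cross-entropy and a simple size
# statistic; test cross-entropy is generated as a near-linear function of both.
rng = np.random.default_rng(1)
n_models = 50
train_xent = rng.uniform(5.0, 9.0, n_models)           # training-set cross-entropy
params_per_word = rng.uniform(0.1, 2.0, n_models)       # a model-size statistic
test_xent = train_xent + 0.9 * params_per_word + rng.normal(0, 0.02, n_models)

# Linear regression: test_xent ~ intercept + train_xent + params_per_word.
X = np.column_stack([np.ones(n_models), train_xent, params_per_word])
coef, *_ = np.linalg.lstsq(X, test_xent, rcond=None)
pred = X @ coef
corr = np.corrcoef(pred, test_xent)[0, 1]
print("coefficients:", coef, "correlation:", round(corr, 4))
```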