
    Large-scale random forest language models for speech recognition

    The random forest language model (RFLM) has shown encouraging results in several automatic speech recognition (ASR) tasks but has been hindered by practical limitations, notably the space complexity of RFLM estimation from large amounts of data. This paper addresses large-scale training and testing of the RFLM via an efficient disk-swapping strategy that exploits the recursive structure of a binary decision tree and the local access property of the tree-growing algorithm. This realizes the full potential of the RFLM and opens avenues for further research, including useful comparisons with n-gram models. The benefits of this strategy are demonstrated by perplexity reduction and lattice rescoring experiments using a state-of-the-art ASR system. Index Terms: random forest language model, large-scale training, data scaling, speech recognition
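    The disk-swapping idea can be sketched in miniature: because the tree is grown depth-first and the tree-growing algorithm only touches the subtree it is currently working on, finished subtrees can be serialized to disk and reloaded on demand, so only the active path stays in memory. The following is a hedged toy illustration in Python, not the paper's actual implementation (the `Node` class and the `grow`/`load` helpers are hypothetical names introduced for the example):

    ```python
    import os
    import pickle
    import tempfile

    class Node:
        """A binary decision-tree node; children may be swapped out to disk."""
        def __init__(self, data):
            self.data = data
            self.left = None    # in-memory child, or a file path once swapped out
            self.right = None

    def grow(data, depth, swap_dir, max_depth=3):
        """Grow a subtree depth-first; once a child subtree is complete,
        serialize it to disk so only the active path stays in memory."""
        node = Node(data)
        if depth >= max_depth or len(data) < 2:
            return node
        mid = len(data) // 2
        for attr, part in (("left", data[:mid]), ("right", data[mid:])):
            child = grow(part, depth + 1, swap_dir, max_depth)
            path = os.path.join(swap_dir, f"node_{depth}_{attr}_{id(child)}.pkl")
            with open(path, "wb") as f:
                pickle.dump(child, f)   # swap the finished subtree to disk
            setattr(node, attr, path)   # keep only a reference in memory
        return node

    def load(ref):
        """Reload a swapped-out subtree on demand."""
        if isinstance(ref, str):
            with open(ref, "rb") as f:
                return pickle.load(f)
        return ref

    swap_dir = tempfile.mkdtemp()
    root = grow(list(range(8)), 0, swap_dir)
    # only the root lives in memory; its children are file paths until reloaded
    left = load(root.left)
    ```

    The same locality argument carries over from this toy splitter to decision-tree growing over n-gram histories: the recursion never needs more than one root-to-leaf path resident at a time.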

    Dynamic resonance and explicit dialogic engagement in Mandarin first language acquisition

    The present paper aims to shed light on the relationship between priming and creativity throughout Chinese children’s ontogenetic development. It has been suggested that priming in naturalistic interaction is not an exclusively implicit phenomenon. New methodological desiderata beyond traditional acceptability judgements have been proposed, including large-scale corpus-based analysis (cf. Branigan & Pickering 2017; Lester et al. 2017), as priming may correlate with interlocutors’ engagement and intersubjectivity throughout naturalistic interaction (Authors 2021b). This study centres on priming that occurs creatively, in the form of dynamic resonance, viz. involving the re-elaboration ‘on the fly’ of a previously encountered construction. We fitted a conditional inference tree and a mixed-effects linear regression to the normalised entirety of Child-Carer/Child-Peer interaction in the Zhou2 and Zhou3 Mandarin corpora of first language acquisition (cf. Li & Zhou 2004; Zhang & Zhou 2009), covering ages from 8 months to 5 years. The models indicate that children acquire the ability to creatively re-use a constructional prime around age 4, distinctively in combination with sentence-final particles of intersubjectivity (cf. Author 2017, 2018, 2020). The latter are non-obligatory markers that speakers employ to express their concern about the addressee’s reaction to an ongoing utterance. These results constitute a fundamental discovery in research on priming, as they indicate that the ability to creatively re-use a prime is ontogenetically correlated with explicit dialogic engagement.

    Verbing and nouning in French: toward an ecologically valid approach to sentence processing

    The present thesis uses event-related potentials (ERPs) to investigate the neurocognitive mechanisms underlying sentence comprehension.
In particular, these two experiments seek to clarify the interplay between syntactic and semantic processes in native speakers and second language learners. Friederici’s (2002, 2011) “syntax-first” model predicts that syntactic categories are analyzed at the earliest stages of speech perception reflected by the ELAN (Early left anterior negativity), reported for syntactic category violations. Further, syntactic category violations seem to prevent the appearance of N400s (linked to lexical-semantic processing), a phenomenon known as “semantic blocking” (Friederici et al., 1999). However, a review article by Steinhauer and Drury (2012) argued that most ELAN studies used flawed designs, where pre-target context differences may have caused ELAN-like artifacts as well as the absence of N400s. The first study reevaluates syntax-first approaches to sentence processing by implementing a novel paradigm in French that included correct sentences, pure syntactic category violations, lexical-semantic anomalies, and combined anomalies. This balanced design systematically controlled for target word (noun vs. verb) and the context immediately preceding it. Group results from native speakers of Quebec French revealed an N400-P600 complex in response to all anomalous conditions, providing strong evidence against the syntax-first and semantic blocking hypotheses. Additive effects of syntactic category and lexical-semantic anomalies on the N400 may reflect a mismatch detection between a predicted word-stem and the actual target, in parallel with lexical-semantic retrieval. An interactive rather than additive effect on the P600 reveals that the same neurocognitive resources are recruited for syntactic and semantic integration. 
Analyses of individual data showed that participants did not rely on a single cognitive mechanism reflected by either the N400 or the P600 effect but on both, suggesting that the biphasic N400-P600 ERP wave can indeed be considered an index of phrase-structure violation processing in most individuals. The second study investigates the underlying mechanisms of phrase-structure building in late second language learners of French. The convergence hypothesis (Green, 2003; Steinhauer, 2014) predicts that second language learners can achieve native-like online processing with sufficient proficiency. However, considering together different factors that relate to proficiency, exposure, and age of acquisition has proven challenging. This study further explores individual data modeling using a Random Forests approach. It revealed that daily usage and proficiency are the most reliable predictors of the ERP responses, with N400 and P600 effects growing larger as these variables increased, partly confirming and extending the convergence hypothesis. This thesis demonstrates that the “syntax-first” model is not viable and should be replaced. A new account is suggested, based on predictive approaches, where semantic and syntactic information are first used in parallel to facilitate retrieval, and then controlled mechanisms are recruited to analyze sentences at the interface of syntax and semantics. Those mechanisms are mediated by inter-individual abilities reflected by language exposure and performance.

    Getting Past the Language Gap: Innovations in Machine Translation

    In this chapter, we review state-of-the-art machine translation systems and discuss innovative methods for machine translation, highlighting the most promising techniques and applications. Machine translation (MT) has benefited from a revitalization in the last 10 years or so, after a period of relatively slow activity. In 2005 the field received a jumpstart when a powerful, complete experimental package for building MT systems from scratch became freely available as a result of the unified efforts of the MOSES international consortium. Around the same time, hierarchical methods introduced by Chinese researchers allowed syntactic information to be used in translation modeling. Furthermore, advances in the related field of computational linguistics, which made off-the-shelf taggers and parsers readily available, gave MT an additional boost. Yet there is still more progress to be made. For example, MT will be enhanced greatly when both syntax and semantics are on board: this remains a major challenge, though many advanced research groups are currently pursuing ways to meet it head-on. The next generation of MT will consist of a collection of hybrid systems. The outlook is also promising for the mobile environment, as advances in speech recognition and speech synthesis enable speech-to-speech machine translation on hand-held devices. We review all of these developments and point out in the final section some of the most promising research avenues for the future of MT.

    Modeling Dependencies in Natural Languages with Latent Variables

    In this thesis, we investigate the use of latent variables to model complex dependencies in natural languages. Traditional models, which have a fixed parameterization, often make strong independence assumptions that lead to poor performance. This problem is often addressed by incorporating additional dependencies into the model (e.g., using higher order N-grams for language modeling). These added dependencies can increase data sparsity and/or require expert knowledge, together with trial and error, in order to identify and incorporate the most important dependencies (as in lexicalized parsing models). Traditional models, when developed for a particular genre, domain, or language, are also often difficult to adapt to another. In contrast, previous work has shown that latent variable models, which automatically learn dependencies in a data-driven way, are able to flexibly adjust the number of parameters based on the type and the amount of training data available. We have created several different types of latent variable models for a diverse set of natural language processing applications, including novel models for part-of-speech tagging, language modeling, and machine translation, and an improved model for parsing. These models perform significantly better than traditional models. We have also created and evaluated three different methods for improving the performance of latent variable models. While these methods can be applied to any of our applications, we focus our experiments on parsing. The first method involves self-training, i.e., we train models using a combination of gold standard training data and a large amount of automatically labeled training data. We conclude from a series of experiments that the latent variable models benefit much more from self-training than conventional models, apparently due to their flexibility to adjust their model parameterization to learn more accurate models from the additional automatically labeled training data. 
The second method takes advantage of the variability among latent variable models to combine multiple models for enhanced performance. We investigate several different training protocols to combine self-training with model combination. We conclude that these two techniques are complementary to each other and can be effectively combined to train very high quality parsing models. The third method replaces the generative multinomial lexical model of latent variable grammars with a feature-rich log-linear lexical model to provide a principled solution to address data sparsity, handle out-of-vocabulary words, and exploit overlapping features during model induction. We conclude from experiments that the resulting grammars are able to effectively parse three different languages. This work contributes to natural language processing by creating flexible and effective latent variable models for several different languages. Our investigation of self-training, model combination, and log-linear models also provides insights into the effective application of these machine learning techniques to other disciplines.
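    The data-sparsity effect of adding higher-order n-gram dependencies, mentioned in this abstract, is easy to demonstrate: as n grows, an ever larger share of n-gram types occurs only once in the training data. A minimal illustration in Python (the toy corpus and the `ngrams` helper are made up for the example):

    ```python
    from collections import Counter

    def ngrams(tokens, n):
        """All contiguous n-grams in a token sequence."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    stats = {}
    for n in (1, 2, 3):
        counts = Counter(ngrams(corpus, n))
        singletons = sum(1 for c in counts.values() if c == 1)
        # (number of distinct n-gram types, share of types seen only once)
        stats[n] = (len(counts), singletons / len(counts))
    ```

    Even on this tiny corpus the singleton share climbs with n, which is why higher-order models need much more training data or the kind of flexible, data-driven parameterization that latent variable models provide.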

    A Sound Approach to Language Matters: In Honor of Ocke-Schwen Bohn

    The contributions in this Festschrift were written by Ocke’s current and former PhD students, colleagues and research collaborators. The Festschrift is divided into six sections, moving from the smallest building blocks of language through gradually expanding objects of linguistic inquiry to the highest levels of description, all of which have formed a part of Ocke’s career in connection with his teaching and/or his academic productions: “Segments”, “Perception of Accent”, “Between Sounds and Graphemes”, “Prosody”, “Morphology and Syntax” and “Second Language Acquisition”. Each one of these illustrates a sound approach to language matters.