
    Spatial Language Learning

Spatial language constitutes part of the basic fabric of language. Although languages may have the same number of terms to cover a set of spatial relations, they do not always carve up those relations in the same way. Spatial language differs quite radically across languages, thus providing a real semantic challenge for second language learners. The essay first examines the variables that underpin the comprehension and production of spatial prepositions in English. Then the essay reviews the functional geometric framework for spatial language and a computational model of the framework that grounds spatial language directly in visual routines. Finally, the essay considers the implications of the model for both first and second language acquisition. Keywords: spatial language, second language acquisition, functional, geometric.
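The core idea of the functional geometric framework, that a preposition's acceptability depends on functional relations (such as location control) as well as geometry, can be illustrated with a toy scorer. The feature names, weights, and scoring functions below are illustrative assumptions, not the published model or its visual routines.

```python
# Minimal sketch of the functional geometric idea: a preposition's acceptability
# is scored from both geometric relations (relative position of located vs.
# reference object) and functional ones (e.g., whether the reference object
# actually controls the located object's location). All features and weights
# here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Scene:
    dx: float                # horizontal offset of located object from reference object
    dy: float                # vertical offset (positive = located object is higher)
    contact: bool            # are the two objects in contact?
    location_control: float  # 0..1, how much the reference object constrains the located object

def score_over(scene: Scene, w_geom: float = 0.6, w_func: float = 0.4) -> float:
    """Toy acceptability score for 'X is over Y', in [0, 1]."""
    # Geometric component: located object should be above and roughly aligned.
    above = 1.0 if scene.dy > 0 else 0.0
    alignment = max(0.0, 1.0 - abs(scene.dx) / (abs(scene.dy) + 1e-6))
    geometric = above * alignment
    # Functional component: 'over' degrades when the reference object cannot
    # fulfil its function with respect to the located object (location control).
    return w_geom * geometric + w_func * scene.location_control

def score_in(scene: Scene) -> float:
    """Toy acceptability score for 'X is in Y': containment approximated by contact + control."""
    return (0.5 if scene.contact else 0.0) + 0.5 * scene.location_control

# Example: an umbrella held directly over a person, shielding them from rain.
print(score_over(Scene(dx=0.1, dy=1.0, contact=False, location_control=0.9)))
```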

    Pronoun Processing and Interpretation by L2 Learners of Italian: Perspectives from Cognitive Modelling

How do second language learners acquire form-meaning associations in the second language that are inconsistent with their first language? In this study, we focus on subject pronouns in Italian and Dutch. A native speaker of the non-null subject language Dutch learning the null subject language Italian as a second language will not only have to learn to use and comprehend null pronouns, but will also have to learn to use and comprehend overt pronouns differently in the L2 than in the L1. The interpretation of Italian overt pronouns, but not of Dutch overt pronouns or Italian null pronouns, has been argued to require perspective taking, specifically the use of hypotheses about the conversational partner’s communicative choices to guide one’s own choices. Therefore, a related question is how perspective taking and cognitive constraints influence L2 acquisition of such forms. Using computational cognitive modelling, this study explores two learning scenarios. In cognitive model 1, second language acquisition proceeds in the same way as first language acquisition and is based on the same grammar. In cognitive model 2, second language acquisition differs from first language acquisition and involves the construction of a partly different grammar. Our results suggest that the second scenario may be cognitively more plausible than the first one. Furthermore, our models explain why second language learners of Italian perform less native-like on overt pronouns than on null pronouns.
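The kind of perspective taking described here, a listener reasoning about which form the speaker would have chosen for each candidate referent, can be illustrated with a small probabilistic sketch. This is a generic illustration, not the authors' cognitive model; the referents, probabilities, and speaker preferences below are invented.

```python
# Illustrative sketch (not the authors' model) of perspective taking in pronoun
# interpretation: the listener interprets a null vs. overt subject pronoun by
# inverting a toy model of the speaker's choices. All probabilities are made up.

referents = ["subject_antecedent", "other_antecedent"]
prior = {"subject_antecedent": 0.5, "other_antecedent": 0.5}

# Toy speaker model P(form | intended referent): a null-subject-language speaker
# tends to use a null pronoun for the topical subject antecedent and an overt
# pronoun for a less prominent antecedent.
speaker = {
    "subject_antecedent": {"null": 0.8, "overt": 0.2},
    "other_antecedent":   {"null": 0.3, "overt": 0.7},
}

def interpret(form: str) -> dict:
    """Listener: P(referent | form), obtained by inverting the speaker model."""
    unnorm = {r: prior[r] * speaker[r][form] for r in referents}
    z = sum(unnorm.values())
    return {r: p / z for r, p in unnorm.items()}

print(interpret("overt"))  # overt pronoun -> shifted toward the non-subject antecedent
print(interpret("null"))   # null pronoun  -> interpreted as the subject antecedent
```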

    Modelling the development of Dutch Optional Infinitives in MOSAIC.

This paper describes a computational model which simulates the change in the use of optional infinitives that is evident in children learning Dutch as their first language. The model, developed within the framework of MOSAIC, takes naturalistic, child-directed speech as its input and analyses the distributional regularities present in that input. It slowly learns to generate longer utterances as it sees more input. We show that the developmental characteristics of Dutch children’s speech (with respect to optional infinitives) are a natural consequence of MOSAIC’s learning mechanisms and the gradual increase in the length of the utterances it produces. In contrast to Nativist approaches to syntax acquisition, the present model does not assume large amounts of innate knowledge in the child, and it provides a quantitative process account of the development of optional infinitives.
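One core ingredient of this account, learning utterance-final fragments from child-directed speech and producing progressively longer utterances with more exposure, can be sketched in a few lines. This is a highly simplified toy, not the actual MOSAIC implementation; the class name, growth schedule, and example Dutch utterance are assumptions for illustration.

```python
# Toy sketch of right-edge distributional learning: the learner stores
# utterance-final word sequences and, as it processes more input, produces
# progressively longer fragments. Dutch infinitives often occur utterance-
# finally ("bal pakken"), so short utterance-final fragments resemble
# optional-infinitive utterances. Not the actual MOSAIC implementation.

from collections import Counter
import random

class RightEdgeLearner:
    def __init__(self):
        self.fragments = Counter()   # utterance-final word sequences seen so far
        self.exposure = 0            # number of input utterances processed

    def learn(self, utterance: str) -> None:
        words = utterance.split()
        self.exposure += 1
        # Maximum stored fragment length grows slowly with the amount of input.
        max_len = min(len(words), 1 + self.exposure // 100)
        for n in range(1, max_len + 1):
            self.fragments[tuple(words[-n:])] += 1

    def produce(self) -> str:
        frags, weights = zip(*self.fragments.items())
        return " ".join(random.choices(frags, weights=weights, k=1)[0])

learner = RightEdgeLearner()
for _ in range(300):
    learner.learn("ik wil de bal pakken")   # "I want to get the ball"
print(learner.produce())  # early output is dominated by short final fragments like "pakken"
```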

    Bilingual and multilingual mental lexicon: a modeling study with Linear Discriminative Learning

This study addresses whether there is anything special about learning a third language, as compared to learning a second language, that results solely from the order of acquisition. We use a computational model based on the mathematical framework of Linear Discriminative Learning to explore this question for the acquisition of a small trilingual vocabulary, with English as L1, German or Mandarin as L2, and Mandarin or Dutch as L3. Our simulations reveal that when qualitative differences emerge between the learning of a first, second and third language, these differences emerge from distributional properties of the particular languages involved rather than the order of acquisition per se, or any difference in learning mechanism. One such property is the number of homophones in each language, since within-language homophones give rise to errors in production. Our simulations also show the importance of suprasegmental information in determining the kinds of production errors made.
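The Linear Discriminative Learning idea, comprehension and production as linear mappings between form and meaning matrices, can be sketched with a tiny invented vocabulary. The words, form coding, and semantic vectors below are assumptions for illustration; the study's actual representations (and its trilingual lexicon) are much richer.

```python
# Minimal sketch of Linear Discriminative Learning: comprehension is a linear
# mapping F from form vectors to meaning vectors (solving C F ≈ S), and
# production is a mapping G in the opposite direction. Toy data only.

import numpy as np

words = ["bank_money", "bank_river", "boot"]   # two homophones plus one unambiguous word

# Form matrix C: one row per word; homophones share the same form vector.
C = np.array([
    [1.0, 0.0],   # /bank/
    [1.0, 0.0],   # /bank/ (identical form row -> homophone)
    [0.0, 1.0],   # /boot/
])

# Semantic matrix S: one distinct meaning vector per word.
S = np.eye(3)

F = np.linalg.pinv(C) @ S   # comprehension mapping: form -> meaning
G = np.linalg.pinv(S) @ C   # production mapping: meaning -> form

S_hat = C @ F
# For the homophones, the best linear solution blends their two meanings;
# this is the kind of within-language homophone error discussed in the abstract.
for w, row in zip(words, S_hat):
    print(w, np.round(row, 2))
```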

    Computational and Robotic Models of Early Language Development: A Review

We review computational and robotics models of early language learning and development. We first explain why and how these models are used to understand better how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including modeling of large-scale empirical data about language acquisition in real-world environments. Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity. Comment: to appear in the International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
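One of the mechanisms reviewed, cross-situational statistical learning, reduces to a simple idea: across individually ambiguous scenes, a word ends up most strongly associated with the referent it reliably co-occurs with. A minimal sketch with an invented toy corpus:

```python
# Toy cross-situational word learner: count word-referent co-occurrences across
# ambiguous learning situations and pick the most reliable pairing per word.
# The three situations below are invented for illustration.

from collections import defaultdict

# Each situation pairs some words with some visible referents, without saying
# which word names which referent.
situations = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

counts = defaultdict(lambda: defaultdict(int))
for words, referents in situations:
    for w in words:
        for r in referents:
            counts[w][r] += 1

for w, refs in counts.items():
    best = max(refs, key=refs.get)
    print(f"{w} -> {best} (co-occurrence counts: {dict(refs)})")
```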

    Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals

Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. In this paper, we develop a novel machine learning method called the nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous speech signals. For this purpose, we propose an integrative generative model that combines a language model and an acoustic model into a single generative model called the "hierarchical Dirichlet process hidden language model" (HDP-HLM). The HDP-HLM is obtained by extending the hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by Johnson et al. An inference procedure for the HDP-HLM is derived using the blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure enables the simultaneous and direct inference of language and acoustic models from continuous speech signals. Based on the HDP-HLM and its inference procedure, we developed a novel double articulation analyzer. By assuming the HDP-HLM as a generative model of the observed time-series data and inferring the model's latent variables, the method can analyze the latent double articulation structure of the data, i.e., hierarchically organized latent words and phonemes, in an unsupervised manner. This novel unsupervised double articulation analyzer is called the NPB-DAA. The NPB-DAA can automatically estimate the double articulation structure embedded in speech signals. We also carried out two evaluation experiments using synthetic data and actual human continuous speech signals representing Japanese vowel sequences. In the word acquisition and phoneme categorization tasks, the NPB-DAA outperformed a conventional double articulation analyzer (DAA) and a baseline automatic speech recognition system whose acoustic model was trained in a supervised manner. Comment: 15 pages, 7 figures; draft submitted to IEEE Transactions on Autonomous Mental Development (TAMD).
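The double articulation structure the NPB-DAA infers (latent words composed of latent phonemes, each phoneme emitting a stretch of continuous acoustic frames) can be illustrated with a forward-generative sketch. For brevity this uses finite Dirichlet and Gaussian choices rather than the full hierarchical Dirichlet process machinery of the HDP-HLM, and it omits the blocked Gibbs inference entirely; all sizes and parameters are assumptions.

```python
# Forward-generative sketch of double articulation: words -> phonemes ->
# variable-duration Gaussian acoustic frames. A simplified illustration of the
# generative structure assumed by the HDP-HLM, not the HDP-HLM itself.

import numpy as np

rng = np.random.default_rng(0)

n_phonemes, n_words, feat_dim = 5, 3, 2
phoneme_means = rng.normal(0.0, 3.0, size=(n_phonemes, feat_dim))        # toy acoustic model
word_lexicon = [rng.integers(0, n_phonemes, size=rng.integers(2, 5))      # each word = phoneme sequence
                for _ in range(n_words)]
word_probs = rng.dirichlet(np.ones(n_words))                              # toy unigram language model

def generate_utterance(n_word_tokens: int = 4):
    """Sample latent words, expand them into phonemes, emit acoustic frames per phoneme."""
    frames, labels = [], []
    for _ in range(n_word_tokens):
        w = rng.choice(n_words, p=word_probs)
        for p in word_lexicon[w]:
            duration = rng.integers(3, 8)          # frames per phoneme (semi-Markov duration)
            frames.append(rng.normal(phoneme_means[p], 0.5, size=(duration, feat_dim)))
            labels.extend([(w, p)] * duration)
    return np.vstack(frames), labels

speech, latent_structure = generate_utterance()
print(speech.shape, latent_structure[:5])   # continuous signal plus the hidden word/phoneme labels
```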