17 research outputs found

    Knowing How You Know: Toddlers Reevaluate Words Learned From an Unreliable Speaker

    There has been little investigation of the way source monitoring, the ability to track the source of one's knowledge, may be involved in lexical acquisition. In two experiments, we tested whether toddlers (mean age 30 months) can monitor the source of their lexical knowledge and reevaluate their implicit belief about a word mapping when this source is proven to be unreliable. Experiment 1 replicated previous research (Koenig & Woodward, 2010): children displayed better performance in a word learning test when they learned words from a speaker who had previously revealed themself as reliable (correctly labeling familiar objects) rather than from an unreliable labeler (incorrectly labeling familiar objects). Experiment 2 then provided the critical test of source monitoring: children first learned novel words from a speaker before watching that speaker label familiar objects correctly or incorrectly. Children exposed to the reliable speaker were significantly more likely to endorse the word mappings taught by the speaker than children exposed to a speaker who they later discovered was an unreliable labeler. Thus, young children can reevaluate recently learned word mappings upon discovering that the source of their knowledge is unreliable. This suggests that children can monitor the source of their knowledge in order to decide whether that knowledge is justified, even at an age when they are not credited with the ability to verbally report how they have come to know what they know.

    Competition and symmetry in an artificial word learning task

    Natural language involves competition. The sentences we choose to utter activate alternative sentences (those we chose not to utter), which hearers typically infer to be false. Hence, as a first approximation, the more alternatives a sentence activates, the more inferences it will trigger. But a closer look at the theory of competition shows that this is not quite true, and that under specific circumstances so-called symmetric alternatives cancel each other out. We present an artificial word learning experiment in which participants learn words that may enter into competition with one another. The results show that a mechanism of competition is at work, and bear out the subtle prediction that alternatives trigger inferences but may stop triggering them past a point due to symmetry. This study provides a minimal testing paradigm to reveal competition and some of its subtle characteristics in human languages and beyond.

    Toddlers default to canonical surface-to-meaning mapping when learning verbs

    No full text
    This work was supported by grants from the French Agence Nationale de la Recherche (ANR-2010-BLAN-1901) and from the Fondation de France to Anne Christophe, from the National Institute of Child Health and Human Development (HD054448) to Cynthia Fisher, from the Fondation Fyssen and the Ecole de Neurosciences de Paris to Alex Cristia, and a PhD fellowship from the Direction Générale de l'Armement (DGA, France), supported by the PhD program FdV (Frontières du Vivant), to Isabelle Dautriche. We thank Isabelle Brunet for the recruitment, Michel Dutat for the technical support, and Hernan Anllo for his puppet mastery skills. We are grateful to the families that participated in this study. We also thank two anonymous reviewers for their comments on an earlier draft of this manuscript.

    Word forms are structured for efficient use

    Zipf famously stated that, if natural language lexicons are structured for efficient communication, the words that are used most frequently should require the least effort. This observation explains the famous finding that the most frequent words in a language tend to be short. A related prediction is that, even among words of the same length, the most frequent word forms should be the ones that are easiest to produce and understand. Using orthography as a proxy for phonetics, we test this hypothesis using corpora of 96 languages from Wikipedia. We find that, across a variety of languages and language families and controlling for length, the most frequent forms in a language tend to be more orthographically well-formed and to have more orthographic neighbors than less frequent forms. We interpret this result as evidence that lexicons are structured by language usage pressures to facilitate efficient communication. Keywords: Lexicon; Word frequency; Phonology; Communication; Efficiency. National Science Foundation (Grant ES/N0174041/1)
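    The neighbor-based prediction above can be illustrated with a small sketch: count each word's orthographic neighbors (forms at edit distance 1) and inspect counts by frequency. The toy lexicon, frequencies, and neighbor definition below are illustrative assumptions, not the paper's data or code.

```python
# Hypothetical sketch: do frequent word forms of a given length have more
# orthographic neighbors (edit distance 1) than rare ones? Toy data only.

def is_neighbor(w1, w2):
    """True if w1 and w2 differ by exactly one substitution, insertion, or deletion."""
    if w1 == w2:
        return False
    la, lb = len(w1), len(w2)
    if abs(la - lb) > 1:
        return False
    if la == lb:  # same length: exactly one substitution
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if la > lb:  # ensure w1 is the shorter form
        w1, w2 = w2, w1
    # one deletion from the longer form must yield the shorter form
    return any(w2[:i] + w2[i + 1:] == w1 for i in range(len(w2)))

def neighbor_count(word, lexicon):
    return sum(is_neighbor(word, other) for other in lexicon)

# Invented frequencies for illustration.
lexicon = {"cat": 5000, "mat": 900, "rat": 800, "bat": 700, "cap": 600,
           "zax": 2, "cot": 400, "dog": 3000, "dig": 500, "fog": 100}
words = list(lexicon)
for w in sorted(words, key=lexicon.get, reverse=True):
    print(w, lexicon[w], neighbor_count(w, words))
```

    In a real analysis the distance would be computed over phonological forms and the frequency effect tested with length held constant; this only shows the counting step.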

    Words cluster phonetically beyond phonotactic regularities

    Recent evidence suggests that cognitive pressures associated with language acquisition and use could affect the organization of the lexicon. On one hand, consistent with noisy channel models of language (e.g., Levy, 2008), the phonological distance between wordforms should be maximized to avoid perceptual confusability (a pressure for dispersion). On the other hand, a lexicon with high phonological regularity would be simpler to learn, remember, and produce (e.g., Monaghan et al., 2011) (a pressure for clumpiness). Here we investigate wordform similarity in the lexicon, using measures of word distance (e.g., phonological neighborhood density) to ask whether there is evidence for dispersion or clumpiness of wordforms in the lexicon. We develop a novel method to compare lexicons to phonotactically controlled baselines that provide a null hypothesis for how clumpy or sparse wordforms would be as the result of phonotactics alone. Results for four languages (Dutch, English, German, and French) show that the space of monomorphemic wordforms is clumpier than would be expected under the best chance model according to a wide variety of measures: minimal pairs, average Levenshtein distance, and several network properties. This suggests a fundamental drive for regularity in the lexicon that conflicts with the pressure for words to be as phonologically distinct as possible. Keywords: Linguistics; Lexical design; Communication; Phonotactics
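    The baseline logic described in the abstract can be sketched under loose assumptions: fit a simple phonotactic (letter-bigram) model to a toy lexicon, sample pseudo-lexicons from it, and compare the real lexicon's average pairwise Levenshtein distance against those baselines. The actual study uses richer phonotactic models and several measures; this only illustrates the comparison.

```python
# Hypothetical sketch: is a (toy) lexicon clumpier than bigram-matched chance?
import random
from itertools import combinations

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def mean_pairwise_distance(words):
    pairs = list(combinations(words, 2))
    return sum(levenshtein(a, b) for a, b in pairs) / len(pairs)

def fit_bigrams(words):
    """Collect successor options per character, with '#' as the word boundary."""
    table = {}
    for w in words:
        padded = "#" + w + "#"
        for c1, c2 in zip(padded, padded[1:]):
            table.setdefault(c1, []).append(c2)
    return table

def sample_word(table, rng, max_len=12):
    c, out = "#", ""
    while len(out) < max_len:
        c = rng.choice(table[c])
        if c == "#":
            break
        out += c
    return out

rng = random.Random(0)
real = ["cat", "mat", "rat", "can", "man", "ran", "pit", "pin", "tin"]
table = fit_bigrams(real)
baselines = [mean_pairwise_distance([sample_word(table, rng) for _ in real])
             for _ in range(50)]
# A "clumpy" lexicon has a mean distance below most baseline samples.
print(mean_pairwise_distance(real), sum(baselines) / len(baselines))
```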

    Do two- and three-year-old children use an incremental first-NP-as-agent bias to process active transitive and passive sentences? A permutation analysis

    We used eye-tracking to investigate whether and when children show an incremental bias to assume that the first noun phrase in a sentence is the agent (first-NP-as-agent bias) while processing the meaning of English active and passive transitive sentences. We also investigated whether children can override this bias to successfully distinguish active from passive sentences after processing the remainder of the sentence frame. For this second question we used eye-tracking (Study 1) and forced-choice pointing (Study 2). In both studies, we used a paradigm in which participants simultaneously saw two novel actions with reversed agent-patient relations while listening to active and passive sentences. We compared English-speaking 25-month-olds and 41-month-olds in between-subjects sentence structure conditions (Active Transitive Condition vs. Passive Condition). A permutation analysis found that both age groups showed a bias to incrementally map the first noun in a sentence onto an agent role. Regarding the second question, 25-month-olds showed some evidence of distinguishing the two structures in the eye-tracking study. However, the 25-month-olds did not distinguish active from passive sentences in the forced-choice pointing task. In contrast, the 41-month-old children did reanalyse their initial first-NP-as-agent bias to the extent that they clearly distinguished between active and passive sentences both in the eye-tracking data and in the pointing task. The results are discussed in relation to the development of syntactic (re)parsing.
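    The style of analysis named in the title can be sketched generically: shuffle condition labels many times to build a null distribution for the observed difference between groups. The data below are invented for illustration and are not the study's measurements.

```python
# Hypothetical sketch of a two-sample permutation test on mean looking proportions.
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = group_a + group_b
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel participants at random
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Invented per-child proportions of looks to the target scene.
active = [0.71, 0.65, 0.80, 0.68, 0.74, 0.69]
passive = [0.52, 0.48, 0.60, 0.55, 0.50, 0.58]
obs, p = permutation_test(active, passive)
print(f"observed difference = {obs:.3f}, p = {p:.4f}")
```

    Permutation tests are attractive for infant eye-tracking data because they make no distributional assumptions about the looking measures.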

    Competition and Symmetry in an Artificial Word Learning Task, 2016-2019

    No full text
    As anyone who has learnt a foreign language or travelled abroad will have noticed, languages differ in the sounds they employ, the names they give to things, and the rules of grammar. However, linguists have long observed that, beneath this surface diversity, all human languages share a number of fundamental structural similarities. Most obviously, all languages use sounds, all languages have words, and all languages have a grammar. More subtly and more surprisingly, similarities can also be observed in more fine-grained linguistic features: for instance, George Zipf famously observed that, across multiple languages, short words tend also to be more frequent, and in my own recent work I have shown that languages prefer to use words that sound alike (e.g., cat, mat, rat, bat, fat, ...). Why do all languages exhibit these shared features? This project aims to tackle exactly this key question by studying how languages are shaped by the human mind.
    In particular, I will explore how the way we learn language and use it to communicate drives the emergence of important features of lexicons, the set of all words in a language. To simulate the process of language change and evolution in the lab, I will use an experimental paradigm where an artificial language is passed between learners (language learning), and used by individuals to communicate with each other (language use). This paradigm has been successfully applied in previous research showing that key structural features of language can be explained as a consequence of repeated learning and use; my contribution will be to apply the same methods to study the evolution of the lexicon. I will then use two complementary techniques to evaluate the ecological validity of these results. First, do the artificial lexicons obtained after repeated learning and communication match the structure of lexicons found in real human languages? We will assess this by analyzing real natural language corpora using computational methods. Second, are these lexicons easily learnable by young children, the primary conduit of natural language transmission in the wild? This will be assessed using methods from developmental psychology to study word learning in toddlers. The present project requires an unprecedented integration of techniques and concepts from language evolution, computational linguistics and developmental psychology, three fields that have so far worked independently to understand the structure of language. The outcomes of the project will be of vital interest for all these communities, and will provide insights into the foundational properties found in all human languages, as well as the nature of the constraints underlying language processing and language acquisition. This project will provide a springboard for my future work at the intersection of computational and experimental approaches to language and cognitive development.

    Experimental Data in Baboons (Papio Papio), 2016-2019

    No full text
    Using a pattern extraction task, we show that baboons, like humans, have a learning bias that helps them discover connected patterns more easily than disconnected ones—i.e., they favor rules like “contains between 40% and 80% red” over rules like “contains around 30% red or 100% red.” The task was made as similar as possible to a task previously run on humans, which was argued to reveal a bias that is responsible for shaping the lexicons of human languages, both content words (nouns and adjectives) and logical words (quantifiers). The current baboon result thus suggests that the cognitive roots responsible for regularities across the content and logical lexicons of human languages are present in a similar form in other species.
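    The connected-versus-disconnected distinction drawn above can be made concrete with a small sketch: a rule over the proportion of red is connected if the proportions it accepts form one unbroken interval. The rules and grid below are illustrative assumptions, not the experiment's stimuli.

```python
# Hypothetical sketch: test whether a rule's accepted proportions form a
# single contiguous interval (a "connected" pattern).

def is_connected(rule, n=100):
    """Evaluate the rule on a grid over [0, 1] and count how many separate
    accepted blocks appear; a connected rule has at most one."""
    grid = [i / n for i in range(n + 1)]
    accepted = [rule(p) for p in grid]
    # Count rejected-to-accepted transitions (rising edges).
    starts = sum(1 for prev, cur in zip([False] + accepted, accepted)
                 if cur and not prev)
    return starts <= 1

connected_rule = lambda p: 0.4 <= p <= 0.8                     # "between 40% and 80% red"
disconnected_rule = lambda p: abs(p - 0.3) < 0.05 or p == 1.0  # "around 30% red or all red"

print(is_connected(connected_rule))
print(is_connected(disconnected_rule))
```

    The learning bias described in the abstract amounts to finding rules whose extensions pass this interval check easier to acquire than rules that fail it.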

    Knowing How You Know: Toddlers Re-evaluate Words Learnt from an Unreliable Speaker, 2016-2019

    No full text

    Subjective Confidence Influences Word Learning in a Cross-situational Statistical Learning Task, 2016-2019

    No full text
    Learning is often accompanied by a subjective sense of confidence in one's knowledge, a feeling of knowing what you know and how well you know it. Subjective confidence has been shown to guide learning in other domains, but has received little attention so far in the word learning literature. Across three word learning experiments, we investigated whether and how a sense of confidence in having acquired a word meaning influences the word learning process itself. First, we show evidence for a confirmation bias during word learning in a cross-situational statistical learning task: learners who are highly confident they know the meaning of a word are more likely to persist in their belief than learners who are not, even after observing objective evidence disconfirming their belief. Second, we show that subjective confidence in a word meaning modulates inferential processes based on that word, affecting learning over the whole lexicon: learners who hold high confidence in a word meaning are more likely to use that word to make mutual exclusivity inferences about the meaning of other words. We conclude that confidence influences word learning by modulating both information selection processes and inferential processes, and discuss the implications of these results for word learning models.
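    The two mechanisms the abstract combines, cross-situational statistical learning and mutual exclusivity inference, can be sketched in a toy form. The class, words, objects, and update rule below are invented for illustration and do not reproduce the experiments' design.

```python
# Hypothetical toy learner: pool word-object co-occurrences across ambiguous
# situations, then use known mappings to constrain a novel word's meaning.
from collections import defaultdict

class CrossSituationalLearner:
    def __init__(self):
        # counts[word][object] = number of co-occurrences observed
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, objects):
        """One situation: every heard word co-occurs with every visible object."""
        for w in words:
            for o in objects:
                self.counts[w][o] += 1

    def best_meaning(self, word):
        if not self.counts[word]:
            return None
        return max(self.counts[word], key=self.counts[word].get)

    def guess_novel(self, novel_word, objects):
        """Mutual exclusivity: map a novel word onto an object no known word names."""
        known = {self.best_meaning(w) for w in self.counts}
        unnamed = [o for o in objects if o not in known]
        return unnamed[0] if unnamed else None

learner = CrossSituationalLearner()
# Individually ambiguous situations that, pooled, disambiguate "blicket".
learner.observe(["blicket"], ["ball", "cup"])
learner.observe(["blicket"], ["ball", "shoe"])
print(learner.best_meaning("blicket"))
print(learner.guess_novel("dax", ["ball", "toma"]))
```

    The paper's confidence effect would correspond, in this sketch, to weighting how strongly established mappings are trusted when making the mutual exclusivity inference; that weighting is not modeled here.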