Competition and Symmetry in an Artificial Word Learning Task, 2016-2019

Abstract

Natural language involves competition. The sentences we choose to utter activate alternative sentences (those we chose not to utter), which hearers typically infer to be false. Hence, as a first approximation, the more alternatives a sentence activates, the more inferences it will trigger. But a closer look at the theory of competition shows that this is not quite true, and that under specific circumstances so-called symmetric alternatives cancel each other out. We present an artificial word learning experiment in which participants learn words that may enter into competition with one another. The results show that a mechanism of competition takes place, and that the subtle prediction that alternatives trigger inferences, and may stop triggering them after a point due to symmetry, is borne out. This study provides a minimal testing paradigm to reveal competition and some of its subtle characteristics in human languages and beyond.

As anyone who has learnt a foreign language or travelled abroad will have noticed, languages differ in the sounds they employ, the names they give to things, and their rules of grammar. However, linguists have long observed that, beneath this surface diversity, all human languages share a number of fundamental structural similarities. Most obviously, all languages use sounds, all languages have words, and all languages have a grammar. More subtly and more surprisingly, similarities can also be observed in finer-grained linguistic features: for instance, George Zipf famously observed that, across multiple languages, short words also tend to be more frequent, and in my own recent work I have shown that languages prefer to use words that sound alike (e.g., cat, mat, rat, bat, fat, ...). Why do all languages exhibit these shared features? This project aims to tackle exactly this key question by studying how languages are shaped by the human mind.
In particular, I will explore how the way we learn language and use it to communicate drives the emergence of important features of the lexicon, the set of all words in a language. To simulate the process of language change and evolution in the lab, I will use an experimental paradigm in which an artificial language is passed between learners (language learning) and used by individuals to communicate with each other (language use). This paradigm has been successfully applied in previous research showing that key structural features of language can be explained as a consequence of repeated learning and use; my contribution will be to apply the same methods to study the evolution of the lexicon.

I will then use two complementary techniques to evaluate the ecological validity of these results. First, do the artificial lexicons obtained after repeated learning and communication match the structure of lexicons found in real human languages? We will assess this by analyzing natural language corpora using computational methods. Second, are these lexicons easily learnable by young children, the primary conduit of natural language transmission in the wild? This will be assessed using methods from developmental psychology to study word learning in toddlers.

The present project requires an unprecedented integration of techniques and concepts from language evolution, computational linguistics and developmental psychology, three fields that have so far worked independently to understand the structure of language. The outcomes of the project will be of vital interest to all these communities, and will provide insights into the foundational properties found in all human languages, as well as into the nature of the constraints underlying language processing and language acquisition. This project will provide a springboard for my future work at the intersection of computational and experimental approaches to language and cognitive development.