
    Constraining generalisation in artificial language learning: children are rational too

    Successful language acquisition involves generalization, but learners must balance this against the acquisition of lexical constraints. Such constraints occur throughout language. For example, English native speakers know that certain noun-adjective combinations are impermissible (e.g. strong winds, high winds, strong breezes, *high breezes). Another example is the restrictions imposed by verb subcategorization (e.g. I gave/sent/threw the ball to him; I gave/sent/threw him the ball; I donated/carried/pushed the ball to him; *I donated/carried/pushed him the ball). Such lexical exceptions have been considered problematic for acquisition: if learners generalize abstract patterns to new words, how do they learn that certain specific combinations are restricted (Baker, 1979)? Some researchers have proposed domain-specific procedures (e.g. Pinker, 1989 resolves verb subcategorization in terms of subtle semantic distinctions). An alternative approach is that learners are sensitive to distributional statistics and use this information to make inferences about when generalization is appropriate (Braine, 1971). A series of artificial language learning experiments has demonstrated that adult learners can utilize statistical information in a rational manner when determining constraints on verb argument-structure generalization (Wonnacott, Newport & Tanenhaus, 2008). The current work extends these findings to children in a different linguistic domain (learning relationships between nouns and particles). We also demonstrate computationally that these results are consistent with the predictions of a domain-general hierarchical Bayesian model (cf. Kemp, Perfors & Tenenbaum, 2007).
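    As a rough illustration of the distributional-statistics idea (not the model used in the paper), the sketch below considers a hypothetical verb that has only ever been heard in one of two constructions and asks how confident a rational learner should be that the gap is a genuine lexical restriction. The two-hypothesis setup, the uniform prior, and all numbers are assumptions made here purely for illustration.

```python
# Minimal sketch: is a verb's absence from construction B an accident of
# sampling, or a lexical restriction? (Illustrative assumption, not the paper's model.)

def p_constrained(n_occurrences, prior_constrained=0.5):
    """Posterior probability that the verb is restricted to construction A,
    given n occurrences that were all in construction A."""
    # H1: the verb is restricted to A, so every occurrence is necessarily in A.
    like_constrained = 1.0
    # H2: the verb alternates freely; with a uniform prior on its rate of
    # using A, the marginal probability of n A-uses in a row is 1 / (n + 1).
    like_free = 1.0 / (n_occurrences + 1)
    z = like_constrained * prior_constrained + like_free * (1 - prior_constrained)
    return like_constrained * prior_constrained / z

for n in (1, 3, 10, 30):
    print(n, round(p_constrained(n), 3))
# Roughly 0.67 after one occurrence, 0.97 after thirty: the more often a verb
# is heard without ever appearing in construction B, the stronger the evidence
# that the gap is a genuine lexical constraint rather than a sampling accident.
```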

    Variability, negative evidence, and the acquisition of verb argument constructions

    We present a hierarchical Bayesian framework for modeling the acquisition of verb argument constructions. It embodies a domain-general approach to learning higher-level knowledge in the form of inductive constraints (or overhypotheses), and has been used to explain other aspects of language development such as the shape bias in learning object names. Here, we demonstrate that the same model captures several phenomena in the acquisition of verb constructions. Our model, like adults in a series of artificial language learning experiments, makes inferences about the distributional statistics of verbs on several levels of abstraction simultaneously. It also produces the qualitative learning patterns displayed by children over the time course of acquisition. These results suggest that the patterns of generalization observed in both children and adults could emerge from basic assumptions about the nature of learning. They also provide an example of a broad class of computational approaches that can resolve Baker's Paradox.
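    The overhypothesis idea can be made concrete with a small grid-approximation sketch (illustrative only, not the paper's implementation): each verb has its own rate of occurring in one of two constructions, and a shared concentration parameter captures how variable verbs are as a class. The toy counts, the candidate parameter values, and the implicit uniform prior over the concentration parameter are all assumptions made here.

```python
# Grid sketch of simultaneous inference at two levels (illustrative only):
# theta_i = a verb's probability of appearing in construction A,
# kappa    = a language-wide "overhypothesis" about how extreme the thetas are.
import numpy as np
from scipy.stats import beta, binom

counts = [(10, 0), (0, 8), (6, 0)]    # (uses in A, uses in B) per verb; made up
kappas = np.array([0.2, 1.0, 5.0])    # candidate variability overhypotheses
thetas = np.linspace(0.01, 0.99, 99)  # grid over verb-specific rates

log_post_kappa = np.zeros(len(kappas))
for k_idx, kappa in enumerate(kappas):
    prior_theta = beta.pdf(thetas, kappa, kappa)
    prior_theta /= prior_theta.sum()
    for a, b in counts:
        like = binom.pmf(a, a + b, thetas)
        log_post_kappa[k_idx] += np.log(np.sum(like * prior_theta))

post_kappa = np.exp(log_post_kappa - log_post_kappa.max())
post_kappa /= post_kappa.sum()
print(dict(zip(kappas.tolist(), post_kappa.round(3))))
# Here each verb is internally consistent but verbs differ from one another,
# so a small kappa (idiosyncratic, near-deterministic verbs) wins, which in
# turn licenses conservatism about a newly encountered verb; input in which
# every verb freely mixed the constructions would instead favour a large kappa
# and hence free generalization.
```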

    Higher order inference in verb argument structure acquisition

    Successful language learning combines generalization and the acquisition of lexical constraints. The conflict is particularly clear for verb argument structures, which may generalize to new verbs (John gorped the ball to Bill -> John gorped Bill the ball), yet resist generalization with certain lexical items (John carried the ball to Bill -> *John carried Bill the ball). The resulting learnability “paradox” (Baker, 1979) has received great attention in the acquisition literature. Wonnacott, Newport & Tanenhaus (2008) demonstrated that adult learners acquire both general and verb-specific patterns when acquiring an artificial language with two competing argument structures, and that these same constraints are reflected in real-time processing. The current work follows up and extends this program of research in two new experiments. We demonstrate that the results are consistent with a hierarchical Bayesian model, originally developed by Kemp, Perfors & Tenenbaum (2007) to capture the emergence of feature biases in word learning.

    The role of stimulus‐specific perceptual fluency in statistical learning

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.

    Economie en cultuur


    When do memory limitations lead to regularization? An experimental and computational investigation

    The Less is More hypothesis suggests that one reason adults and children differ in their ability to learn language is that they also differ in other cognitive capacities. According to one version of this hypothesis, children's relatively poor memory may make them more likely to regularize inconsistent input (Hudson Kam & Newport, 2005, 2009). This paper reports the results of an experimental and computational investigation of one aspect of this version of the hypothesis. A series of seven experiments in which adults were placed under a high cognitive load during a language-learning task reveals that in adults, increased load during learning (as opposed to retrieval) does not result in increased regularization. A computational model offers a possible explanation for these results: it demonstrates that, unless memory limitations distort the data in a particular way, regularization should occur only in the presence of both memory limitations and a prior bias for regularization. Taken together, these findings suggest that the difference in regularization between adults and children may not be solely attributable to differences in memory limitations during learning.
    Amy Perfors
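    The claimed interaction can be illustrated with a toy Bayesian learner (not the paper's actual model): it hears 70% of tokens in one form, retains either many or few of them, and has either a flat prior or a U-shaped prior favouring near-categorical grammars. The prior strengths, token counts, and the 0.9 regularization threshold are all invented for illustration.

```python
# Toy sketch of "regularization needs both memory limits and a prior bias".
# "Regularization" here = posterior mass on an almost-categorical grammar.
from scipy.stats import beta

def p_regular_grammar(n_remembered, prior_strength, p_input=0.7):
    a_obs = round(p_input * n_remembered)          # remembered tokens of form A
    b_obs = n_remembered - a_obs                   # remembered tokens of form B
    post = beta(prior_strength + a_obs, prior_strength + b_obs)
    return post.sf(0.9) + post.cdf(0.1)            # P(theta > 0.9 or theta < 0.1)

for n in (20, 4):                                  # good vs poor memory
    for label, s in (("flat prior", 1.0), ("regularity prior", 0.2)):
        print(f"n={n:2d}  {label:16s}  P(near-categorical) = {p_regular_grammar(n, s):.3f}")
# With plenty of remembered data, both learners keep the posterior near the
# input proportion (probability matching). The U-shaped regularity prior only
# makes a sizeable difference when few tokens are remembered, which is when
# near-categorical grammars get appreciable posterior mass.
```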

    Probability matching vs over-regularization in language: participant behavior depends on their interpretation of the task

    In a variety of domains, children have been observed to over-regularize inconsistent input, while adults are more likely to “probability match” to any inconsistency. Many explanations for this have been offered, usually relating to cognitive differences between children and adults. Here we explore an additional possibility: that differences in the social assumptions participants bring to the experiment can drive differences in over-regularization behavior. We explore this in the domain of language, where assumptions about error and communicative purpose might have a large effect. Indeed, we find that participants who experience less pressure to be “correct”, and who have more reason to believe that any inconsistencies do not reflect an underlying regularity, over-regularize more. Implications for language acquisition in children and adults are discussed.
    Amy Perfors
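    The two response patterns being contrasted can be sketched very simply (purely illustrative, not the paper's analysis) for input in which the majority variant occurs 70% of the time:

```python
# Probability matching vs over-regularization on 70/30 inconsistent input
# (toy illustration; the proportions and counts are invented).
import random

random.seed(0)
P_MAJORITY = 0.7            # proportion of the majority variant in the input
N_PRODUCTIONS = 1000

def probability_matcher():
    # Reproduce the input statistics: use the majority variant ~70% of the time.
    return sum(random.random() < P_MAJORITY for _ in range(N_PRODUCTIONS))

def over_regularizer():
    # Treat the minority variant as error/noise and always use the majority one.
    return N_PRODUCTIONS

print("probability matching :", probability_matcher() / N_PRODUCTIONS)
print("over-regularization  :", over_regularizer() / N_PRODUCTIONS)
# Per the abstract, participants who feel less pressure to be "correct" and who
# treat the inconsistency as error rather than a genuine regularity are the
# ones who drift toward the second pattern.
```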

    A cognitive analysis of deception without lying

    When the interests of interlocutors are not aligned, either party may wish to avoid truthful disclosure. A sender wishing to conceal the truth from a receiver may lie by providing false information, mislead by actively encouraging the receiver to reach a false conclusion, or simply be uninformative by providing little or no relevant information. Lying entails moral and other hazards, such as detection and its consequences, and is thus often avoided. We focus here on the latter two strategies, which are arguably more pernicious and prevalent, but not without their own drawbacks. We argue, and show in two studies, that when choosing between these options senders consider the level of suspicion the receiver is likely to exercise and how much truth must be revealed in order to mislead. Extending Bayesian models of cooperative communication to include higher-level inference regarding the helpfulness of the sender leads to insight into the strategies employed in non-cooperative contexts.
    Keith Ransom, Wouter Voorspoels, Amy Perfors, Daniel J. Navarro
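    The kind of extension described, a Bayesian communication model with a latent helpfulness variable, can be sketched in a few lines in the style of rational speech act models. The states, utterances, semantics, and parameters below are invented for illustration; this is not the paper's implementation.

```python
# RSA-style sketch: a receiver jointly infers the state of the world and
# whether the sender is being helpful, given only true (non-lying) utterances.
import numpy as np

states = [0, 1, 2]                       # e.g. quality of an item: low/medium/high
utterances = ["at least low", "at least medium", "high"]
truth = np.array([[1, 1, 1],             # "at least low" is true of everything
                  [0, 1, 1],             # "at least medium"
                  [0, 0, 1]])            # "high"
LAMBDA = 3.0                             # sender decisiveness (made up)

def literal_listener():
    # P(state | utterance) under the literal meaning alone.
    return truth / truth.sum(axis=1, keepdims=True)

def sender(helpful):
    l0 = literal_listener()
    # A helpful sender prefers true utterances that make the listener accurate;
    # an unhelpful one prefers true utterances that leave the listener off-track.
    utility = np.log(l0 + 1e-10) if helpful else np.log(1 - l0 + 1e-10)
    scores = np.where(truth == 1, np.exp(LAMBDA * utility), 0.0)
    return scores / scores.sum(axis=0, keepdims=True)   # P(utterance | state)

def receiver(u_idx, p_helpful=0.5):
    # Joint posterior over (state, helpfulness) given the utterance.
    post = {}
    for h, p_h in [(True, p_helpful), (False, 1 - p_helpful)]:
        for s in states:
            post[(s, h)] = sender(h)[u_idx, s] * (1 / len(states)) * p_h
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

for u_idx, u in enumerate(utterances):
    print(u, {k: round(v, 2) for k, v in receiver(u_idx).items()})
# A suspicious receiver (low prior on helpfulness) discounts vague but true
# statements such as "at least low", because the unhelpful sender prefers them.
```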

    Inductive reasoning in humans and large language models

    The impressive recent performance of large language models has led many to wonder to what extent they can serve as models of general intelligence or are similar to human cognition. We address this issue by applying GPT-3.5 and GPT-4 to a classic problem in human inductive reasoning known as property induction. Over two experiments, we elicit human judgments on a range of property induction tasks spanning multiple domains. Although GPT-3.5 struggles to capture many aspects of human behaviour, GPT-4 is much more successful: for the most part, its performance qualitatively matches that of humans, and the only notable exception is its failure to capture the phenomenon of premise non-monotonicity. Our work demonstrates that property induction allows for interesting comparisons between human and machine intelligence and provides two large datasets that can serve as benchmarks for future work in this vein.
    Comment: 61 pages, 5 figures
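    Premise non-monotonicity, the one phenomenon GPT-4 reportedly misses, is the finding that adding a premise can weaken an argument. It can be illustrated with a toy similarity-coverage style calculation (invented similarity values, keeping only the coverage term for simplicity; not taken from the paper or its datasets):

```python
# Toy premise non-monotonicity: "Crows have X -> all birds have X" is stronger
# than "Crows have X, rabbits have X -> all birds have X", because the extra
# premise broadens the smallest category covering the premises to ANIMAL.
sim = {  # invented pairwise similarity ratings in [0, 1]
    ("crow", "crow"): 1.0, ("crow", "sparrow"): 0.8, ("crow", "penguin"): 0.4,
    ("crow", "dog"): 0.2, ("crow", "goldfish"): 0.1,
    ("rabbit", "rabbit"): 1.0, ("rabbit", "sparrow"): 0.1, ("rabbit", "penguin"): 0.1,
    ("rabbit", "dog"): 0.5, ("rabbit", "goldfish"): 0.1, ("rabbit", "crow"): 0.2,
}

def s(a, b):
    return sim.get((a, b), sim.get((b, a), 0.0))

def coverage(premises, category_members):
    # Mean, over category members, of each member's best similarity to a premise.
    return sum(max(s(p, m) for p in premises) for m in category_members) / len(category_members)

birds = ["crow", "sparrow", "penguin"]
animals = birds + ["dog", "goldfish", "rabbit"]

print("crow -> birds          :", round(coverage(["crow"], birds), 2))
print("crow, rabbit -> birds  :", round(coverage(["crow", "rabbit"], animals), 2))
# Coverage drops (about 0.73 vs 0.63 here) once the non-bird premise forces the
# covering category to include dissimilar animals, so the argument gets weaker.
```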

    Désir et affects incarnés auprès de migrants homosexuels mexicains

    This work examines how a group of gay Mexican migrants in Paris resignify their bodily, social, ethnic, and sexual reference points over the course of their trajectories of mobility, and the role that affect and desire play in the embodiment of these displacements. Moving to a society with different notions of sexuality and race leads to a (self-)reclassification, with often unexpected results. A common problem raised by the interviewees is the tension between different forms of exclusion, both in their society of origin and in their host city. This article reflects the progress of an ongoing doctoral thesis, taking an (auto)ethnographic, reflexive, feminist approach.