
    Maternal label and gesture use affects acquisition of specific object names

    Ten mothers were observed prospectively, interacting with their infants aged 0;10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three mutually exclusive categories: label only; label plus deictic gesture; label plus iconic gesture. We evaluated the predictive relations between maternal communicative style and children's subsequent acquisition of ten target nouns. Strong relations were observed between maternal communicative style and children's acquisition of the target nouns. Further, even controlling for maternal volubility and maternal labelling, maternal use of iconic gestures predicted the timing of acquisition of nouns in comprehension. These results support the proposition that maternal gestural input facilitates linguistic development, and suggest that such facilitation may be a function of gesture type.

    How to Do Things Without Words: Infants, utterance-activity and distributed cognition

    Clark and Chalmers (1998) defend the hypothesis of an ‘Extended Mind’, maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body. Aspects of the problem of ‘language acquisition’ are considered in the light of the extended mind hypothesis. Rather than ‘language’ as typically understood, the object of study is something called ‘utterance-activity’, a term of art intended to refer to the full range of kinetic and prosodic features of the on-line behaviour of interacting humans. It is argued that utterance-activity is plausibly regarded as jointly controlled by the embodied activity of interacting people, and that it contributes to the control of their behaviour. By means of specific examples it is suggested that this complex joint control facilitates easier learning of at least some features of language. This in turn suggests a striking form of the extended mind, in which infants’ cognitive powers are augmented by those of the people with whom they interact.

    The road to language learning is not entirely iconic: Iconicity, neighborhood density, and frequency facilitate sign language acquisition

    Iconic mappings between words and their meanings are far more prevalent than once estimated, and seem to support children’s acquisition of new words, spoken or signed. We asked whether iconicity’s prevalence in sign language overshadows other factors known to support spoken vocabulary development, including neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children’s American Sign Language (ASL) productive acquisition of 332 signs (Anderson & Reilly, 2002), and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage them to expand their vocabulary. Research reported in this publication was supported by the National Institute On Deafness And Other Communication Disorders of the National Institutes of Health under Award Number R21DC016104. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This work is also supported by a James S. McDonnell Foundation Award to Dr. Jennie Pyers.

    Mind, Cognition, Semiosis: Ways to Cognitive Semiotics

    What is meaning-making? How do new domains of meaning emerge in the course of a child’s development? What is the role of consciousness in this process? What is the difference between making sense of pointing, pantomime and language utterances? Are great apes capable of meaning-making? What about dogs? Parrots? Can we, in any way, relate their functioning and behavior to a child’s? Are artificial systems capable of meaning-making? The above questions motivated the emergence of cognitive semiotics as a discipline devoted to theoretical and empirical studies of meaning-making processes. As a transdisciplinary approach to meaning and meaning-making, cognitive semiotics necessarily draws on different disciplines: starting with philosophy of mind, via semiotics and linguistics, cognitive science(s), neuroanthropology, developmental and evolutionary psychology, and comparative studies, and ending with robotics. The book presents this discipline extensively. It is a very eclectic story: highly abstract problems of philosophy of mind are discussed and, simultaneously, results of very specific experiments on picture recognition are presented. On the one hand, intentional acts involved in semiotic activity are elaborated; on the other, a computational system capable of a limited interpretation of excerpts from Carroll’s Through the Looking-Glass is described. Specifically, two roads to cognitive semiotics are explored in the book: the phenomenological-enactive path developed by the so-called Lund school, and the author’s own proposal, a functional-cognitivist path.

    How Gesture Input Provides a Helping Hand to Language Development

    Children use gesture to refer to objects before they produce labels for these objects and gesture–speech combinations to convey semantic relations between objects before conveying sentences in speech—a trajectory that remains largely intact across children with different developmental profiles. Can the developmental changes that we observe in children be traced back to the gestural input that children receive from their parents? A review of previous work shows that parents provide models for their children for the types of gestures and gesture–speech combinations to produce, and do so by modifying their gestures to meet the communicative needs of their children. More importantly, the gestures that parents produce, in addition to providing models, help children learn labels for referents and semantic relations between these referents and even predict the extent of children's vocabularies several years later. The existing research thus highlights the important role parental gestures play in shaping children's language learning trajectory.

    How Child Gesture Relates to Parent Gesture Input in Older Children with Autism and Typical Development

    Young children with autism spectrum disorder (ASD) differ from typically developing (TD) children in their overall production of gesture, producing fewer deictic gestures and supplemental gesture-speech combinations. In this study, we ask whether older children with ASD continue to differ from TD children in the types of gestures and gesture-speech combinations they produce, and whether this reflects differences in parental gesture input. Our study examined the gestures and speech produced by 42 children (20 ASD, 22 TD), comparable in expressive vocabulary, and their parents, and showed that children with ASD were similar to TD children in the amount and types of gestures that they produced, but differed in their gesture-speech combinations, using gesture primarily to complement their speech. Parents, however, did not show the same group differences in their gesture-speech combinations, suggesting that differences observed in children’s gesture use may not reflect parental input, but rather the child’s communicative needs.

    Effect of Sex and Dyad Composition on Speech and Gesture Development of Singleton and Twin Children

    Children show sex differences in early vocabulary development—with boys having smaller vocabularies than age-comparable girls—a pattern that becomes evident in both singleton and twin dyads. Twins also use fewer words than their singleton peers. However, we know relatively less about sex differences in early gesturing in singletons and twins, except for a few studies suggesting a female advantage in gesturing among singletons. We examine the patterns of speech and gesture production of 1;6- to 2;0-year-old singletons and twins in structured play interactions with their parents. Boys and girls were comparable in their speech and gesture production, but singletons produced a greater amount and diversity of speech and gestures than twins. There was, however, no effect of twin dyad type on either speech or gesture production. These results further confirm the close integration between gesture and speech at the early stages of language development in twins.

    Directional adposition use in English, Swedish and Finnish

    Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003). When using directional adpositions, a frame of reference must be assumed to interpret their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, it is possible for a Figure to be described as being behind or in front of the Ground, even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected. We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers. All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adposition use or reference frame use, including reference frame use based on motion.
We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion. Levinson, S. C. (1996). Frames of reference and Molyneux’s question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and space (pp. 109-170). Cambridge, MA: MIT Press. Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press. Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.

    Effect of Child Sex and Sibling Composition on Parental Verbal and Nonverbal Input

    Children show differences in the way they speak and gesture. Parents also show variability in the way they produce speech when interacting with their singleton sons vs. daughters—a pattern that is not yet known to extend to boy-boy vs. girl-girl twins. In this study, we ask whether there is evidence of sex (girls vs. boys) or group (singletons vs. twins) differences in parents’ speech and gesture production, and whether such differences also become evident across twin dyad types (girl-girl, boy-boy, girl-boy). Our results largely showed no evidence of a sex or dyad-composition difference in either parent speech or gesture, but evidence of a group difference in gesture, with the parents of singletons providing a greater amount, diversity, and complexity of gestures than parents of twins in one-on-one interactions. These results suggest that differences in parent input to singletons vs. twins might first become evident in gesture.