
    Labels constructively shape object categories in 10-month-old infants

    How do infants' emerging language abilities affect their organization of objects into categories? The question of whether labels can shape the early perceptual categories formed by young infants has received considerable attention, but evidence has remained inconclusive. Here, 10-month-old infants (N = 80) were familiarized with a series of morphed stimuli along a continuum that can be seen as either one category or two categories. Infants formed one category when the stimuli were presented in silence or paired with the same label, but they divided the stimulus set into two categories when half of the stimuli were paired with one label and half with another label. Pairing the stimuli with two different nonlinguistic sounds did not lead to the same result. In this case, infants showed evidence for the formation of a single category, indicating that nonlinguistic sounds do not cause infants to divide a category. These results suggest that labels and visual perceptual information interact in category formation, with labels having the potential to constructively shape category structures even in preverbal infants, and that nonlinguistic sounds do not have the same effect.

    Pupillometry registers toddlers’ sensitivity to degrees of mispronunciation

    This study introduces a method ideally suited for investigating toddlers’ ability to detect mispronunciations in lexical representations: pupillometry. Previous research has established that the magnitude of pupil dilation reflects differing levels of cognitive effort. Building on those findings, we use pupil dilation to study the level of detail encoded in lexical representations with 30-month-old children whose lexicons allow for a featurally balanced stimulus set. In each trial, we present a picture followed by a corresponding auditory label. By systematically manipulating the number of feature changes in the onset of the label (e.g., baby∼daby∼faby∼shaby), we tested whether featural distance predicts the degree of pupil dilation. Our findings support the existence of a relationship between featural distance and pupil dilation. First, mispronounced words are associated with a larger degree of dilation than correct forms. Second, words that deviate more from the correct form are related to a larger dilation than words that deviate less. This pattern indicates that toddlers are sensitive to the degree of mispronunciation, and as such it corroborates previous work that found word recognition modulated by sub-segmental detail and by the degree of mismatch. Thus, we establish that pupillometry provides a viable alternative to paradigms that require an overt behavioral response in increasing our understanding of the development of lexical representations.

    Posture Affects How Robots and Infants Map Words to Objects

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body – and its momentary posture – may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body’s momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge – not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping – but through the body’s momentary disposition in space.

    The semantic link: action & language: an investigation of relations between different cognitive domains in early development

    The processing of language may derive, in part, from the ability to make sense of others’ actions or gestures. Therefore, in order to understand the emergence of cognitive structures supporting language, an understanding is required of the basic aspects of semantic expectations related to goal-directed actions early in development. These expectations can act as a scaffold for later language development, with implications for vocabulary development and language comprehension. The relations between action and language have, however, not been fully and systematically explored, especially in terms of semantics. The aim of this thesis is to build knowledge of how language derives from understanding the content of action. The emphasis is placed on semantics, examined through multiple approaches. We designed three targeted studies at different ages in order to index how language and action interact at those time points. To shed light on the specific cognitive processes and organizational changes in brain activity related to semantics, we used a combination of neurophysiological methods, primarily event-related brain potentials (ERPs) and event-related oscillations (EROs). In this context, semantic processing represents a specific application of a more general process: identifying whether or not something we perceive matches the predictions we have, based on context. To distinguish perceived mismatches, one needs to be able to translate perceived information into meaningful concepts built on past, individual experiences. Neural activation related to semantics may be reflected in different areas across the brain via different mechanisms. In Chapter 1, the literature on infant language development, action understanding, and possible links between the two cognitive domains is reviewed and linked to semantics. Further, the objectives of this thesis are described. In Chapter 2, the semantic processing of actions at 9 months, and how this processing ability may be linked to language proficiency at 9 and 18 months, was investigated. In Chapter 3, the semantic representation of newly acquired words in 10- to 11-month-olds was measured. In Chapter 4, 2-year-olds’ modulation of motor systems was examined before and after the acquisition of new actions, verbs, and sounds. The results of these experiments show that semantics is interwoven with action processing and with language. The implications of the results for understanding action processing in development and its relation to language are considered in Chapter 5.

    Brain Signatures of Embodied Semantics and Language: A Consensus Paper

    According to embodied theories (including embodied, embedded, extended, enacted, situated, and grounded approaches to cognition), language representation is intrinsically linked to our interactions with the world around us, which is reflected in specific brain signatures during language processing and learning. Moving on from the original rivalry of embodied vs. amodal theories, this consensus paper addresses a series of carefully selected questions that aim at determining when and how, rather than whether, motor and perceptual processes are involved in language processes. We cover a wide range of research areas, from the neurophysiological signatures of embodied semantics (e.g., event-related potentials and fields as well as neural oscillations), to semantic processing and semantic priming effects on concrete and abstract words, to first and second language learning and, finally, the use of virtual reality for examining embodied semantics. Our common aim is to better understand the role of motor and perceptual processes in language representation as indexed by language comprehension and learning. We come to the consensus that, based on seminal research conducted in the field, future directions now call for enhancing the external validity of findings by acknowledging the multimodality, multidimensionality, flexibility, and idiosyncrasy of embodied and situated language and semantic processes.

    How children make sense of the world: A perceptual learning account

    The aim of this thesis was to describe the social and non-social development of infants and children from a perceptual learning account. Perceptual learning is a domain-general process by which children can progressively distinguish more diverse and relevant information in the world around them. This then allows children to couple perception and action in novel and adaptive ways, helping them meet the demands and opportunities provided by that world. It is claimed that this does not require the mediation of any specialized cognitive functions, something that is usually either explicitly or implicitly assumed in theories of development. This thesis reports several studies that show how infants and children develop social and non-social skills by means of perceptual learning, such as gaze following, specific forms of imitation, facial expression recognition, and understanding of physical mechanisms. For example, it was shown that infants are able to perceive another person’s intended actions with objects by perceptually tuning into the sequence of events in how he or she interacts with those objects. In another reported study, it was shown that children’s interaction with physical mechanisms can create the perceptual information that helps them understand those mechanisms in a more advanced manner. For the future, it was suggested that perceptual learning should be used to further investigate early social development and learning contexts for children. This could lead to new insights into how to describe development and learning processes and how to optimize contexts for learning to occur.

    Learning together or learning alone: Investigating the role of social interaction in second language word learning
