110 research outputs found
Action–effect anticipation in infant action control
There is increasing evidence that action effects play a crucial role in action understanding and action control, not only in adults but also in infants. Most research in infants has focused on the learning of action–effect contingencies or on how action effects help infants infer the goals of other persons' actions. In contrast, the present research aimed to demonstrate that infants control their own actions by action–effect anticipation once they know about specific action–effect relations. Seven- and 9-month-olds observed an experimenter demonstrating two actions that differed in their action–effect assignment: either a red-button press, a blue-button press, or no button press elicited interesting acoustic and visual effects. The 9-month-olds produced the effect-eliciting action first, with shorter latency and longer duration, supporting a direct impact of action–effect anticipation on action control. In 7-month-olds the differences due to the action–effect manipulation were less pronounced, indicating developmental changes at this age.
A Melodic Contour Repeatedly Experienced by Human Near-Term Fetuses Elicits a Profound Cardiac Reaction One Month after Birth
Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth. Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra, identical duration, tempo and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration that was twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants. Thus, three weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing or perception, i.e., impacts the autonomic nervous system, at least six weeks later, when infants are 1 month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3–4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed.
Aerobic Exercise during Pregnancy and Presence of Fetal-Maternal Heart Rate Synchronization
It has been shown that short-term direct interaction between maternal and fetal heart rates may take place and that this interaction is affected by the rate of maternal respiration. The aim of this study was to determine the effect of maternal aerobic exercise during pregnancy on the occurrence of fetal–maternal heart rate synchronization. In 40 pregnant women in the 36th week of gestation, 21 of whom exercised regularly, we acquired 18-min RR interval time series obtained simultaneously from the mothers and their fetuses via magnetocardiographic recordings. The time series of the two groups were examined with respect to their heart rate variability, the maternal respiratory rate, and the presence of synchronization epochs as determined on the basis of synchrograms. Surrogate data were used to assess whether the occurrence of synchronization was due to chance. In the original data, synchronization occurred less often in pregnancies in which the mothers had exercised regularly. These subjects also displayed higher combined fetal–maternal heart rate variability and lower maternal respiratory rates. Analysis of the surrogate data showed shorter epochs of synchronization and a lack of the phase coordination found between maternal and fetal beat timing in the original data. The results suggest that fetal–maternal heart rate coupling is present but generally weak. Maternal exercise has a damping effect on its occurrence, most likely due to an increase in beat-to-beat differences, higher vagal tone, and slower breathing rates.
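The synchrogram analysis referred to in this abstract can be illustrated in outline. The sketch below is not the authors' code; it assumes two arrays of beat times (maternal and fetal), plots each fetal beat stroboscopically against a group of m maternal cardiac cycles, and flags windows in which the phases of an n:m locking pattern stay nearly constant:

```python
import numpy as np

def synchrogram_phases(maternal_beats, fetal_beats, m=2):
    """Stroboscopic phases: position of each fetal beat within a group
    of m consecutive maternal cardiac cycles, mapped to [0, 1)."""
    phases = []
    for t in fetal_beats:
        k = np.searchsorted(maternal_beats, t) - 1
        if 0 <= k < len(maternal_beats) - 1:
            rr = maternal_beats[k + 1] - maternal_beats[k]
            phi = (t - maternal_beats[k]) / rr   # phase within current cycle
            phases.append(((k % m) + phi) / m)   # phase within the m-cycle group
    return np.array(phases)

def locking_score(phases, n, window=20, tol=0.05):
    """Fraction of sliding windows showing n:m phase locking, judged by
    the circular concentration (resultant vector length) of n * phase."""
    if len(phases) < window:
        return 0.0
    z = np.exp(2j * np.pi * n * phases)
    hits = sum(abs(z[i:i + window].mean()) > 1 - tol
               for i in range(len(z) - window + 1))
    return hits / (len(z) - window + 1)
```

A surrogate test, as used in the study to rule out chance synchronization, could then be mimicked by jittering or shuffling one beat series and recomputing the score; the specific window length and tolerance here are illustrative choices, not the study's parameters.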
Language experience impacts brain activation for spoken and signed language in infancy: Insights from unimodal and bimodal bilinguals
Recent neuroimaging studies suggest that monolingual infants activate a left-lateralised fronto-temporal brain network in response to spoken language, similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, while sign language elicited activation in the right temporo-parietal area. A significant difference in brain lateralisation was observed between groups: activation in the posterior temporal region was not lateralised in monolinguals and bimodal bilinguals, but was right-lateralised in response to both language modalities in unimodal bilinguals. This suggests that experience of two spoken languages influences brain activation for sign language when it is encountered for the first time. Multivariate pattern analyses (MVPA) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language, and that unimodal bilingual experience has a greater impact on early brain lateralisation than bimodal bilingual experience.
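The MVPA result reported above (proportion correct against a chance level of 0.5) rests on cross-validated classification of multi-channel activation patterns. As a minimal sketch of the idea, not the authors' pipeline, the following assumes a trials-by-channels matrix of fNIRS responses and uses leave-one-out nearest-centroid classification:

```python
import numpy as np

def mvpa_loocv(patterns, labels):
    """Leave-one-out nearest-centroid classification of activation
    patterns (trials x channels); returns proportion correct."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False                      # hold out trial i
        classes = np.unique(labels[train])
        # mean pattern (centroid) per condition, excluding the held-out trial
        cents = np.array([patterns[train & (labels == c)].mean(axis=0)
                          for c in classes])
        pred = classes[np.argmin(np.linalg.norm(cents - patterns[i], axis=1))]
        correct += pred == labels[i]
    return correct / len(labels)
```

If the two conditions (here, spoken vs. signed language) evoke reliably distinct spatial patterns, the held-out trials land closer to their own condition's centroid and the proportion correct rises above 0.5; the permutation p-value reported in the abstract would come from repeating this with shuffled labels.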
Neonates’ responses to repeated exposure to a still face
The main aims of the study were to examine whether human neonates' responses to communication disturbance, modelled by the still-face paradigm, were stable, and whether their responses were affected by previous experience with the paradigm. The still-face procedure, as a laboratory model of interpersonal stress, was administered twice to 84 neonates (0 to 4 days old), with an average delay of 1.25 days between administrations. Frame-by-frame analysis of the frequency and duration of gaze, distressed face, crying, sleeping, and sucking behaviours showed that the procedure was stressful both times; that is, the still-face effect was stable after repeated administration, and newborns consistently responded to this nonverbal violation of communication. They averted their gaze, showed distress, and cried more during the still-face phase in both the first and the second administration. They also showed a carry-over effect, continuing to avert their gaze and displaying increased distress and crying in the first reunion period, but their gaze behaviour changed with experience in the second administration: while in the first administration the babies continued averting their gaze even after the stressful still-face phase was over, this carry-over effect disappeared in the second administration, and the babies significantly increased their gaze following the still-face phase. After excluding explanations based on fatigue, habituation, and random effects, a self-other regulatory model is discussed as a possible explanation for this pattern.
The evolution of language: a comparative review
For many years the evolution of language has been seen as a disreputable topic, mired in fanciful "just so stories" about language origins. However, in the last decade a new synthesis of modern linguistics, cognitive neuroscience, and neo-Darwinian evolutionary theory has begun to make important contributions to our understanding of the biology and evolution of language. I review some of this recent progress, focusing on the value of the comparative method, which uses data from animal species to draw inferences about language evolution. Discussing speech first, I show how data concerning a wide variety of species, from monkeys to birds, can increase our understanding of the anatomical and neural mechanisms underlying human spoken language, and how bird and whale song provide insights into the ultimate evolutionary function of language. I discuss the "descended larynx" of humans, a peculiar adaptation for speech that has received much attention in the past and which, despite earlier claims, is not uniquely human. I then turn to the neural mechanisms underlying spoken language, pointing out the difficulties animals apparently experience in perceiving hierarchical structure in sounds, and stressing the importance of vocal imitation in the evolution of a spoken language. Turning to ultimate function, I suggest that communication among kin (especially between parents and offspring) played a crucial but neglected role in driving language evolution. Finally, I briefly discuss phylogeny, considering hypotheses that offer plausible routes to human language from a non-linguistic, chimp-like ancestor. I conclude that comparative data from living animals will be key to developing a richer, more interdisciplinary understanding of our most distinctively human trait: language.
- …