16 research outputs found

    Infant cortex responds to other humans from shortly after birth

A significant feature of the adult human brain is its ability to selectively process information about conspecifics. Much debate has centred on whether this specialization is primarily a result of phylogenetic adaptation, or whether the brain acquires expertise in processing social stimuli as a result of its being born into an intensely social environment. Here we studied the haemodynamic response in cortical areas of newborns (1–5 days old) while they passively viewed dynamic human or mechanical action videos. We observed activation selective to a dynamic face stimulus over bilateral posterior temporal cortex, but no activation in response to a moving human arm. This selective activation to the social stimulus correlated with age in hours over the first few days post partum. Thus, even very limited experience of face-to-face interaction with other humans may be sufficient to elicit social stimulus activation of relevant cortical regions.

    The Efficacy of Teaching English as a Foreign Language to Iranian Students with Autism Spectrum Disorder on Their Social Skills and Willingness to Communicate

Objectives: This applied research is the first practical study of teaching English as a foreign language (EFL) to students with autism spectrum disorder (ASD) in Iran. We examined the effect of a well-designed foreign language learning setting on the facilitation of social skills and willingness to communicate in children with ASD. Materials & Methods: A mixed-method research design was used. Using stratified sampling, a limited sample of 18 students was chosen from Kerman Province, southeastern Iran, in 2014, categorized into three levels of ASD for each of the experimental and control groups; matched pairs were used to ensure homogeneity of participants across the two groups. Each participant received 15 sessions totaling 67 h of language learning: the first 10 sessions were tutorials and the last 5 were paired classes with a peer. Before and after the sessions, caregivers and parents completed a questionnaire on the students' social skills; the English instructor also rated participants' willingness to communicate. Results: Teaching a foreign language had a significant positive main effect on social skills as rated by caregivers and parents, compared with controls (P<0.05). From the instructor's view, students with ASD also showed a significant improvement in willingness to communicate in classroom settings compared with the control group (P<0.05). Conclusion: Well-designed foreign language pedagogy for students with ASD can serve as an effective context for enhancing children's social skills and willingness to communicate, supported by the motivation of learning in a novel foreign language environment. Suggestions for enhancing joint attention during the curriculum are provided.

    Human behavioural discrimination of human, chimpanzee and macaque affective vocalisations is reflected by the neural response in the superior temporal sulcus

Accurate perception of the emotional content of vocalisations is essential for successful social communication and interaction. However, it is not clear whether our ability to perceive emotional cues from vocal signals is specific to human signals, or whether it can be applied to other species' vocalisations. Here, we address this issue by evaluating the perception of, and neural response to, affective vocalisations from different primate species (humans, chimpanzees and macaques). We found that the ability of human participants to discriminate emotional valence varied as a function of phylogenetic distance between species. Participants were most accurate at discriminating the emotional valence of human vocalisations, followed by chimpanzee vocalisations. They were, however, unable to accurately discriminate the valence of macaque vocalisations. Next, we used fMRI to compare human brain responses to human, chimpanzee and macaque vocalisations. We found that regions in the superior temporal lobe that are closely associated with the perception of complex auditory signals showed a graded response to affective vocalisations from different species, with the largest response to human vocalisations, an intermediate response to chimpanzees, and the smallest response to macaques. Together, these results suggest that neural correlates of differences in the perception of different primate affective vocalisations are found in auditory regions of the human brain and correspond to the phylogenetic distances between the species.
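To make the behavioural measure concrete, the sketch below shows one generic way to score per-species valence discrimination and test it against chance. This is not the authors' analysis pipeline; the trial records, the two-alternative valence judgement, and the 0.5 chance level are assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' pipeline): per-species valence
# discrimination accuracy with a binomial test against chance (0.5),
# assuming trial-level records of (species, true_valence, judged_valence).
from collections import defaultdict
from scipy.stats import binomtest  # requires scipy >= 1.7

trials = [
    # hypothetical example records
    ("human", "positive", "positive"),
    ("human", "negative", "negative"),
    ("chimpanzee", "positive", "negative"),
    ("macaque", "negative", "positive"),
]

correct = defaultdict(int)
total = defaultdict(int)
for species, true_val, judged_val in trials:
    total[species] += 1
    correct[species] += int(true_val == judged_val)

for species in total:
    acc = correct[species] / total[species]
    p = binomtest(correct[species], total[species], 0.5).pvalue
    print(f"{species}: accuracy={acc:.2f}, p(vs. chance)={p:.3f}")
```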

    Neural correlates of the affective properties of spontaneous and volitional laughter types

Previous investigations of vocal expressions of emotion have identified acoustic and perceptual distinctions between expressions of different emotion categories, and between spontaneous and volitional (or acted) variants of a given category. Recent work on laughter has identified relationships between acoustic properties of laughs and their perceived affective properties (arousal and valence) that are similar across spontaneous and volitional types (Bryant & Aktipis, 2014; Lavan et al., 2016). In the current study, we explored the neural correlates of such relationships by measuring modulations of the BOLD response in the presence of itemwise variability in the subjective affective properties of spontaneous and volitional laughter. Across all laughs, and within spontaneous and volitional sets, we consistently observed linear increases in the response of bilateral auditory cortices (including Heschl's gyrus and superior temporal gyrus [STG]) associated with higher ratings of perceived arousal, valence and authenticity. Areas in the anterior medial prefrontal cortex (amPFC) showed negative linear correlations with valence and authenticity ratings across the full set of spontaneous and volitional laughs; in line with previous research (McGettigan et al., 2015; Szameitat et al., 2010), we suggest that this reflects increased engagement of these regions in response to laughter of greater social ambiguity. Strikingly, an investigation of higher-order relationships between the entire laughter set and the neural response revealed a positive quadratic profile of the BOLD response in right-dominant STG (extending onto the dorsal bank of the STS), where this region responded most strongly to laughs rated at the extremes of the authenticity scale. While previous studies have claimed a role for right STG in bipolar representation of emotional valence, we instead argue that this region may exhibit a relatively categorical response to emotional signals, whether positive or negative.
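As a rough illustration of the itemwise parametric approach (not the authors' SPM pipeline), the sketch below regresses hypothetical per-laugh ROI response amplitudes on mean-centred ratings with linear and quadratic terms; a positive quadratic beta corresponds to the U-shaped profile described for right STG, with the strongest responses at the extremes of the rating scale. The data and effect sizes are invented for the example.

```python
# Minimal ROI-level sketch (not the authors' pipeline): test for linear and
# quadratic relationships between itemwise ratings (e.g., authenticity) and
# per-laugh BOLD response amplitudes in a region of interest.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.uniform(1, 7, size=60)                    # hypothetical itemwise ratings
amplitudes = 0.2 * (ratings - ratings.mean())**2 + rng.normal(0, 0.5, 60)

x = ratings - ratings.mean()                            # mean-centre the modulator
X = np.column_stack([np.ones_like(x), x, x**2])         # intercept, linear, quadratic
betas, *_ = np.linalg.lstsq(X, amplitudes, rcond=None)  # ordinary least squares
print(f"linear beta = {betas[1]:.3f}, quadratic beta = {betas[2]:.3f}")
```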

    Is the right anterior superior temporal sulcus involved in speaker-identity recognition?: a study using transcranial direct current stimulation

Neuroimaging studies have revealed regions in the human brain that respond preferentially to human voices. These regions are mostly located along the superior temporal gyrus and sulcus (STG/S). It has been hypothesized that the right anterior STG/S is crucial for voice-identity recognition because the amplitudes of anterior STG/S neuroimaging responses correlate positively with voice-identity recognition performance. Here, my aim was to test this hypothesis using non-invasive transcranial direct current stimulation (tDCS) in a randomized, double-blind, sham-controlled, within-participants design. Twenty-four neurotypical participants were familiarized with four unfamiliar speakers' voices and were then tested on voice-identity and speech recognition. While performing the voice-identity and speech recognition tests, participants received anodal, cathodal, and sham tDCS on three different days. As hypothesized, voice-identity recognition improved when anodal tDCS was applied to the right anterior STG/S compared with cathodal and sham stimulation. However, this was only the case on day three. My results support the hypothesis that the right anterior STG/S is behaviourally relevant for identifying a speaker's voice.

    Voice and speech perception in autism: a systematic review

Autism spectrum disorders (ASD) are characterized by persistent impairments in social communication and interaction, and by restricted and repetitive behavior. In the original description of autism by Kanner (1943), the presence of emotional impairments was already emphasized (self-absorbed, emotionally cold, distanced, and retracted). However, little research has focused on the auditory perception of vocal emotional cues; audio-visual comprehension has been explored far more often. As with faces, voices play an important role in the social interaction contexts in which individuals with ASD show impairments. The aim of the current systematic review was to integrate evidence from behavioral and neurobiological studies for a more comprehensive understanding of voice processing abnormalities in ASD. Among the different types of information that the human voice may convey, we hypothesize particular deficits in the processing of vocal affect information by individuals with ASD. The relationship between these vocal stimulus impairments and disrupted Theory of Mind in autism is discussed. Moreover, because ASD are characterized by deficits in social reciprocity, we further discuss the abnormal oxytocin system in individuals with ASD as a possible biological marker for abnormal vocal affect information processing and social interaction skills in the ASD population.

    Biological motion perception in Parkinson's disease

Parkinson's disease (PD) disrupts many aspects of visual perception, which has negative functional consequences. How PD affects perception of moving human bodies, or biological motion, is unknown. The ability to accurately perceive others' motion is related to one's own motor ability and depends on the integrity of brain areas affected in PD, including the superior temporal sulcus and premotor cortex. Biological motion perception may therefore be compromised in PD but also provide a target for intervention, with perceptual training potentially improving motor function. Experiment 1 investigated whether perception of biological motion was impaired in PD (N=26) relative to neurologically healthy control (NC; N=24) individuals. Participants viewed videos of point-light human figures and judged whether or not they depicted walking. As predicted, the PD group was less sensitive to biological motion than the NC group. This deficit was not associated with participants' own walking difficulties or with other perceptual deficits (contrast sensitivity, coherent motion perception). Experiment 2 evaluated the hypothesis that PD deficits would extend to more socially complex biological motion. PD (N=23) and NC (N=24) participants viewed point-light figures depicting communicative and non-communicative (object-oriented) gestures. The PD group was less accurate than the NC group in describing non-communicative gestures, an effect driven by PD men, who also had difficulty perceiving communicative gestures. Experiment 3 tested the efficacy of perceptual training for PD. Because biological motion perception is associated with motor function, it was hypothesized that perceptual training would improve walking. Individuals with PD were randomized to Gait Observation (N=13; viewing videos of healthy and unhealthy gait) or Landscape Observation (N=10; viewing videos of moving water) and trained daily for one week while gait data were collected with accelerometers. Post-training, only the Gait Observation group self-reported increased mobility, though improvements were not seen in objective gait data (daily activity, walking speed, stride length, stride frequency, leg swing time, gait asymmetry). These studies demonstrate that individuals with PD have difficulty perceiving biological motion (walking and socially complex gestures). Improving biological motion perception led to enhancement in self-perceived walking ability. Perceptual training that incorporates more explicit learning over a longer period may be required to effect objective improvements in walking.
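The abstract does not state which sensitivity measure was used for the walking judgement in Experiment 1. Purely as an assumption, the sketch below computes signal-detection d′, a common choice for a yes/no biological-motion task, with a log-linear correction to avoid infinite values at hit or false-alarm rates of 0 or 1.

```python
# Illustrative sketch: d' for a yes/no "is this walking?" judgement.
# Whether the thesis used d' or another sensitivity measure is not stated
# in the abstract; the counts below are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates strictly between 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35))
```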

    The Effect of Social Interaction on the Neural Correlates of Language Processing and Mentalizing

Recent behavioral and neuroscience evidence suggests that studying the social brain in detached and offline contexts (e.g., listening to prerecorded stories about characters) may not capture real-world social processes. Few studies, however, have directly compared neural activation during live interaction with that in conventional recorded paradigms. The current study used a novel fMRI paradigm to investigate whether real-time social interaction modulates the neural correlates of language processing and mentalizing. Regions associated with social engagement (i.e., dorsal medial prefrontal cortex) were more active during live interaction. Processing live versus recorded language increased activation in regions associated with narrative processing and mentalizing (i.e., temporal parietal junction). Regions associated with intentionality understanding (i.e., posterior superior temporal sulcus) were more active when mentalizing about a live partner. These results have implications for quantifying and understanding the neural correlates of real-world social behavior in typical adults, in developmental populations, and in individuals with social disabilities such as autism.

    Developmental trajectories of social signal processing

Most of the social cognitive and affective neuroscience in the past 3 decades has focussed on the face. By contrast, the understanding of how social cues from the body and voice are processed has been somewhat neglected in the literature. One could argue that, from an evolutionary point of view, body recognition (and particularly emotional body perception) is more important than that of the face. It may be beneficial for survival to be able to predict another's behaviour or emotional state from a distance, without having to rely on facial expressions. If there are relatively few cognitive and affective neuroscience studies of body and voice perception, there are even fewer on the development of these processes. In this thesis, we set out to explore the behavioural and functional developmental trajectories of body and voice processing in children, adolescents and adults using fMRI, behavioural measures, and a wide range of univariate and multivariate analytical techniques. We found, using simultaneously recorded point-light and full-light displays of affective body movements, an increase in emotion recognition ability until 8.5 years of age, followed by a slower rate of accuracy improvement through adolescence into adulthood (Chapter 2). Using fMRI we show, for the first time, that the body-selective areas of the visual cortex are not yet 'adult-like' in children (Chapter 3). We go on to show in Chapter 4 that, although the body-selective regions are still maturing in the second decade of life, there is no difference between children, adolescents and adults in the amount of emotion modulation that these regions exhibit when presented with happy or angry bodies. We also show a positive correlation between amygdala activation and the amount of emotion modulation of the body-selective areas in all subjects except the adolescents. Finally, we turn our attention to the development of the voice-selective areas in the temporal cortex, finding that, contrary to face and body processing, these areas are already 'adult-like' in children in terms of strength and extent of activation (Chapter 5). These results are discussed in relation to the current developmental literature, limitations are considered, directions for future research are given, and the wider clinical application of this work is explored.
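The abstract does not specify how the Chapter 2 accuracy trajectory was modelled; purely as an illustration, the sketch below fits a two-slope ("broken-stick") curve with an assumed breakpoint at 8.5 years to hypothetical accuracy-by-age data, which is one simple way to capture a steep early rise followed by slower improvement.

```python
# Hedged sketch: broken-stick fit of emotion-recognition accuracy vs. age.
# The breakpoint (8.5 years) and the data are illustrative assumptions,
# not values taken from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(age, intercept, slope1, slope2, breakpoint=8.5):
    # One slope below the breakpoint, a second slope above it.
    return np.where(
        age <= breakpoint,
        intercept + slope1 * age,
        intercept + slope1 * breakpoint + slope2 * (age - breakpoint),
    )

ages = np.array([5, 6, 7, 8, 9, 11, 14, 17, 25], dtype=float)      # hypothetical
accuracy = np.array([0.45, 0.55, 0.64, 0.72, 0.74, 0.76, 0.79, 0.81, 0.84])

params, _ = curve_fit(broken_stick, ages, accuracy, p0=[0.2, 0.06, 0.01])
print("intercept, slope below 8.5 y, slope above 8.5 y:", np.round(params, 3))
```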