
    TMS demonstrates that both right and left superior temporal sulci are important for facial expression recognition

    Prior studies demonstrate that a face-responsive region in the posterior superior temporal sulcus (pSTS) is involved in facial expression recognition. Although this region can be identified in both hemispheres, studies more commonly report it in the right hemisphere. However, the extent to which expression recognition is lateralised in the pSTS remains unclear. In the current study, we used transcranial magnetic stimulation (TMS) to systematically compare the causal contribution of the right pSTS (rpSTS) with that of the left pSTS (lpSTS) during facial expression recognition. TMS was delivered over the functionally localised rpSTS, the lpSTS and a control vertex site while participants (N = 30) performed an expression matching task and a control object matching task. TMS delivered over the rpSTS impaired expression recognition more than TMS delivered over the lpSTS. Crucially, TMS delivered over either the rpSTS or the lpSTS impaired task performance more than TMS delivered over the control site, and TMS had no effect on the control task. This causally demonstrates that, although task disruption was greater for the rpSTS, both the rpSTS and the lpSTS were engaged in facial expression recognition. Our results indicate that cognitive functions that appear lateralised in neuroimaging studies may still rely on computations performed in both hemispheres for optimum task performance.

    People-selectivity, audiovisual integration and heteromodality in the superior temporal sulcus

    The functional role of the superior temporal sulcus (STS) has been implicated in a number of studies, including those investigating face perception, voice perception, and face–voice integration. However, the nature of the STS preference for these ‘social stimuli’ remains unclear, as does the location within the STS of specific types of information processing. The aim of this study was to directly examine properties of the STS in terms of its selective response to social stimuli. We used functional magnetic resonance imaging (fMRI) to scan participants whilst they were presented with auditory, visual, or audiovisual stimuli of people or objects, with the intention of localising areas that prefer both faces and voices (i.e., ‘people-selective’ regions) and audiovisual regions that specifically integrate person-related information. Results highlighted a ‘people-selective’, heteromodal region in the trunk of the right STS that was activated by both faces and voices, and a restricted portion of the right posterior STS (pSTS) with an integrative preference for information from people, as compared to objects. These results point towards a dedicated role for the STS as a ‘social-information processing’ centre.

    Integration of gaze direction and facial expression in patients with unilateral amygdala damage

    Affective and social processes play a major role in everyday life, but appropriate methods to assess disturbances in these processes after brain lesions are still lacking. Past studies have shown that amygdala damage can impair recognition of facial expressions, particularly fear, as well as processing of gaze direction, but the mechanisms responsible for these deficits remain debated. Recent accounts of human amygdala function suggest that it is a critical structure involved in self-relevance appraisal. According to such accounts, responses to a given facial expression may vary depending on concomitant gaze direction and perceived social meaning. Here we investigated facial emotion recognition and its interaction with gaze in patients with unilateral amygdala damage (n = 19), compared to healthy controls (n = 10), using computer-generated dynamic face stimuli expressing variable intensities of fear, anger or joy, with different gaze directions (direct versus averted). If emotion perception is influenced by the self-relevance of an expression based on gaze direction, a fearful face with averted gaze should be more relevant than the same expression with direct gaze, because it signals danger near the observer; whereas anger with direct gaze should be more relevant than with averted gaze, because it directly threatens the observer. Our results confirm a critical role for the amygdala in self-relevance appraisal, showing an interaction between gaze and emotion in healthy controls, a trend towards such an interaction in left-damaged patients, but no interaction in right-damaged patients. Impaired expression recognition was generally more severe for fear, with a greater deficit after right than left damage. These findings not only provide new insights into human amygdala function, but may also help in designing novel neuropsychological tests sensitive to amygdala dysfunction in various patient populations.

    Neural correlates of facial motion perception

    Several neuroimaging studies have revealed that the superior temporal sulcus (STS) is highly implicated in the processing of facial motion. A limitation of these investigations, however, is that many of them utilize unnatural stimuli (e.g., morphed videos) or stimuli that contain many confounding spatial cues. As a result, the underlying mechanisms may not be fully engaged during such perception. The aim of the current study was to build upon the existing literature by implementing highly detailed and accurate models of facial movement. Accordingly, neurologically healthy participants viewed simultaneous sequences of rigid and nonrigid motion that was retargeted onto a standard computer-generated imagery face model. Their task was to discriminate between different facial motion videos in a two-alternative forced-choice paradigm. Presentations varied between upright and inverted orientations. Corroborating previous data, the perception of natural facial motion strongly activated a portion of the posterior STS. The analysis also revealed engagement of the lingual gyrus, fusiform gyrus, precentral gyrus, and cerebellum. These findings therefore suggest that the processing of dynamic facial information is supported by a network of visuomotor substrates.

    Neural Univariate Activity and Multivariate Pattern in the Posterior Superior Temporal Sulcus Differentially Encode Facial Expression and Identity

    Faces contain a variety of information, such as one’s identity and expression. One prevailing model suggests a functional division of labor in face processing, whereby different aspects of facial information are processed in anatomically separated and functionally encapsulated brain regions. Here, we demonstrate that facial identity and expression can be processed in the same region, yet with different neural coding strategies. To this end, we employed functional magnetic resonance imaging to examine two types of coding schemes, namely univariate activity and multivariate pattern, in the posterior superior temporal sulcus (pSTS), a face-selective region that is traditionally viewed as being specialized for processing facial expression. With the individual-difference approach, we found that participants with higher overall face selectivity in the right pSTS were better at differentiating facial expressions measured outside of the scanner. In contrast, individuals whose spatial pattern for faces in the right pSTS was less similar to that for objects were more accurate in identifying previously presented faces. This double dissociation of behavioral relevance between overall neural activity and spatial neural pattern suggests that the functional-division-of-labor model of face processing is over-simplified, and that coding strategies should be incorporated into a revised model.
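The two coding schemes contrasted in this abstract can be sketched with a toy example (all voxel values below are made-up numbers for illustration, not data from the study): univariate activity summarised as the mean response across voxels in a region, and the multivariate pattern measure as a voxel-wise Pearson correlation between the spatial response patterns for faces and objects.

```python
# Toy sketch of the two coding schemes:
# (1) univariate activity  -> mean response over voxels (face selectivity
#     as a face-minus-object difference in mean activity)
# (2) multivariate pattern -> voxel-wise spatial correlation between the
#     face pattern and the object pattern (lower = more distinct patterns)
# Voxel responses are hypothetical values, not real fMRI data.
from math import sqrt

def mean_activity(pattern):
    """Univariate measure: average response over all voxels in the ROI."""
    return sum(pattern) / len(pattern)

def pearson_r(a, b):
    """Multivariate measure: Pearson correlation across voxels."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical voxel responses in a pSTS region of interest (arbitrary units)
faces   = [2.1, 1.8, 3.0, 0.9, 2.5]
objects = [1.0, 1.2, 1.1, 0.8, 1.3]

face_selectivity   = mean_activity(faces) - mean_activity(objects)  # univariate
pattern_similarity = pearson_r(faces, objects)                      # multivariate
```

On this reading, two participants could show the same univariate face selectivity while differing in how similar their face and object patterns are, which is what lets the two measures relate to different behaviours.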

    You talkin' to me? Communicative talker gaze activates left-lateralized superior temporal cortex during perception of degraded speech.

    Neuroimaging studies of speech perception have consistently indicated a left-hemisphere dominance in the temporal lobes' responses to intelligible auditory speech signals (McGettigan and Scott, 2012). However, there are important communicative cues that cannot be extracted from auditory signals alone, including the direction of the talker's gaze. Previous work has implicated the superior temporal cortices in processing gaze direction, with evidence for predominantly right-lateralized responses (Carlin and Calder, 2013). The aim of the current study was to investigate whether the lateralization of responses to talker gaze differs in an auditory communicative context. Participants in a functional MRI experiment watched and listened to videos of spoken sentences in which the auditory intelligibility and talker gaze direction were manipulated factorially. We observed a left-dominant temporal lobe sensitivity to the talker's gaze direction, in which the left anterior superior temporal sulcus/gyrus and temporal pole showed an enhanced response to direct gaze; further investigation revealed that this pattern of lateralization was modulated by auditory intelligibility. Our results suggest flexibility in the distribution of neural responses to social cues in the face within the context of a challenging speech perception task.

    Time Course of the Involvement of the Right Anterior Superior Temporal Gyrus and the Right Fronto-Parietal Operculum in Emotional Prosody Perception

    In verbal communication, not only the meaning of the words but also the tone of voice (prosody) conveys crucial information about the emotional state and intentions of others. Various studies have found that right frontal and right temporal regions play a role in emotional prosody perception. Here, we used triple-pulse repetitive transcranial magnetic stimulation (rTMS) to shed light on the precise time course of involvement of the right anterior superior temporal gyrus and the right fronto-parietal operculum. We hypothesized that information would be processed in the right anterior superior temporal gyrus before being processed in the right fronto-parietal operculum. Right-handed healthy subjects performed an emotional prosody task. While subjects listened to each sentence, a triplet of TMS pulses was applied to one of the regions at one of six time points (400–1900 ms). Results showed a significant main effect of Time for both the right anterior superior temporal gyrus and the right fronto-parietal operculum, with the largest interference observed half-way through the sentence. This effect was stronger for withdrawal emotions than for the approach emotion. A further experiment including an active control condition, TMS over the EEG site POz (midline parieto-occipital junction), revealed stronger effects at the fronto-parietal operculum and anterior superior temporal gyrus relative to this control condition. No evidence was found for sequential processing of emotional prosodic information from the right anterior superior temporal gyrus to the right fronto-parietal operculum; rather, the results point to more parallel processing. Our results suggest that both the right fronto-parietal operculum and the right anterior superior temporal gyrus are critical for emotional prosody perception at a relatively late time period after sentence onset. This may reflect the fact that emotional cues can still be ambiguous at the beginning of a sentence but become more apparent half-way through.