
    Sound-induced enhancement of low-intensity vision: multisensory influences on human sensory-specific cortices and thalamic bodies relate to perceptual enhancement of visual detection sensitivity

    Combining information across modalities can affect sensory performance. We studied how co-occurring sounds modulate behavioral visual detection sensitivity (d'), and neural responses, for visual stimuli of higher or lower intensity. Co-occurrence of a sound enhanced human detection sensitivity for lower- but not higher-intensity visual targets. Functional magnetic resonance imaging (fMRI) linked this to boosts in activity levels in sensory-specific visual and auditory cortex, plus multisensory superior temporal sulcus (STS), specifically for a lower-intensity visual event paired with a sound. Thalamic structures in the visual and auditory pathways, the lateral and medial geniculate bodies (LGB, MGB), showed a similar pattern. Subject-by-subject psychophysical benefits correlated with the corresponding fMRI signals in visual, auditory, and multisensory regions. We also analyzed differential "coupling" patterns of LGB and MGB with other regions across the experimental conditions. Effective-connectivity analyses showed enhanced coupling of the sensory-specific thalamic bodies with the affected cortical sites during enhanced detection of lower-intensity visual events paired with sounds. Coupling strength of visual and auditory thalamus with cortical regions, including STS, covaried parametrically with the psychophysical benefit in this specific multisensory context. Our results indicate that multisensory enhancement of detection sensitivity for low-contrast visual stimuli by co-occurring sounds reflects a brain network involving not only the established multisensory STS and sensory-specific cortex but also the visual and auditory thalamus.
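
    A minimal sketch of how the detection-sensitivity measure d' used above is conventionally computed from hit and false-alarm rates; the rates below are illustrative values, not the study's data:

    ```python
    # Signal detection theory: d' = Z(hit rate) - Z(false-alarm rate),
    # where Z is the probit (inverse standard-normal CDF).
    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate):
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # A sound that raises hits for low-intensity targets while false
    # alarms stay flat yields higher d', i.e. a true sensitivity gain:
    print(d_prime(0.75, 0.20))  # visual only  -> ~1.52
    print(d_prime(0.85, 0.20))  # visual+sound -> ~1.88
    ```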

    Localizing Pain Matrix and Theory of Mind networks with both verbal and non-verbal stimuli

    Functional localizer tasks allow researchers to identify brain regions in each individual's brain, using a combination of anatomical and functional constraints. In this study, we compare three social cognitive localizer tasks designed to efficiently identify regions in the "Pain Matrix," recruited in response to a person's physical pain, and the "Theory of Mind network," recruited in response to a person's mental states (i.e., beliefs and emotions). Participants performed three tasks: first, the verbal false-belief stories task; second, a verbal task with stories describing physical pain versus emotional suffering; and third, passive viewing of a non-verbal animated movie that included segments depicting physical pain, beliefs, and emotions. All three localizers efficiently identified replicable, stable networks in individual subjects. This consistency across tasks makes all three viable localizers. Nevertheless, there were small but reliable differences in the location of the regions and the pattern of activity within regions, hinting at more specific representations. The new localizers go beyond those currently available: first, they simultaneously identify two functional networks with no additional scan time, and second, the non-verbal task extends the populations in whom functional localizers can be applied. These localizers will be made publicly available. National Institutes of Health (U.S.) (Grant 1R01 MH096914-01A1)
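
    As a rough illustration of the generic localizer logic described above (contrast per-voxel responses between two conditions and keep supra-threshold voxels as a subject-specific region), here is a hedged sketch on synthetic data; the array shapes, condition labels, and threshold are assumptions, not the authors' pipeline:

    ```python
    # Toy subject-level localizer: paired t-test per voxel between two
    # conditions, thresholded to define an individual's ROI.
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    n_runs, n_voxels = 24, 1000
    mental_state = rng.normal(1.0, 1.0, (n_runs, n_voxels))  # e.g. belief stories
    physical     = rng.normal(0.0, 1.0, (n_runs, n_voxels))  # e.g. pain stories

    t, p = ttest_rel(mental_state, physical, axis=0)
    roi = (t > 0) & (p < 0.001)  # voxels preferring mental-state content
    print(f"{roi.sum()} voxels in this subject's Theory of Mind ROI")
    ```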

    Inflammation causes mood changes through alterations in subgenual cingulate activity and mesolimbic connectivity

    BACKGROUND: Inflammatory cytokines are implicated in the pathophysiology of depression. In rodents, systemically administered inflammatory cytokines induce depression-like behavior. Similarly in humans, therapeutic interferon-alpha induces clinical depression in a third of patients, and patients with depression show elevated levels of pro-inflammatory cytokines. OBJECTIVES: To determine the neural mechanisms underlying inflammation-associated mood change and its modulatory effects on circuits involved in mood homeostasis and affective processing. METHODS: In a double-blind, randomized crossover study, 16 healthy male volunteers received typhoid vaccination or saline (placebo) injection in two experimental sessions. Mood questionnaires were completed at baseline and at 2 and 3 hours. Two hours after injection, participants performed an implicit emotional face perception task during functional magnetic resonance imaging. Analyses focused on neurobiological correlates of inflammation-associated mood change and affective processing within regions responsive to emotional expressions and implicated in the etiology of depression. RESULTS: Typhoid but not placebo injection produced an inflammatory response, indexed by increased circulating interleukin-6, and a significant mood reduction at 3 hours. Inflammation-associated mood deterioration correlated with enhanced activity within the subgenual anterior cingulate cortex (sACC), a region implicated in the etiology of depression, during emotional face processing. Furthermore, inflammation-associated mood change was accompanied by reduced connectivity of sACC to the amygdala, medial prefrontal cortex, nucleus accumbens, and superior temporal sulcus, an effect modulated by peripheral interleukin-6. CONCLUSIONS: Inflammation-associated mood deterioration is reflected in changes in sACC activity and functional connectivity during evoked responses to emotional stimuli. Peripheral cytokines, notably interleukin-6, appear to modulate this mood-dependent sACC connectivity.

    The Neural Substrates of Multisensory Speech Perception

    Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities, in a process known as multisensory integration. This allows speech perception to remain accurate even in environments in which one modality or the other is rendered ambiguous by noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While prior imaging studies have found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, they do not provide a clear mechanism for how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. To better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
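
    The across-subject result in the last sentence amounts to correlating each subject's McGurk susceptibility with their STS response; a sketch with synthetic values (all numbers and variable names are illustrative assumptions):

    ```python
    # Correlate behavioral susceptibility with ROI activation across subjects.
    import numpy as np
    from scipy.stats import pearsonr

    mcgurk_rate = np.array([0.20, 0.35, 0.50, 0.55, 0.70, 0.80, 0.90])  # fused percepts
    sts_beta    = np.array([0.40, 0.60, 0.50, 0.90, 1.10, 1.00, 1.40])  # STS response

    r, p = pearsonr(mcgurk_rate, sts_beta)
    print(f"r = {r:.2f}, p = {p:.3f}")  # positive r: stronger integrators, more STS activity
    ```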

    The role of motion in the neural representation of social interactions in the posterior temporal cortex

    Humans are an inherently social species, with multiple focal brain regions sensitive to various visual social cues such as faces, bodies, and biological motion. More recently, research has begun to investigate how the brain responds to more complex, naturalistic social scenes, identifying a region in the posterior superior temporal sulcus (SI-pSTS; i.e., social interaction pSTS), among others, as important for processing social interactions. This research, however, has presented images or videos, so the contribution of motion to social interaction perception in these brain regions is not yet understood. In the current study, 22 participants viewed videos, image sequences, scrambled image sequences, and static images of either social interactions or non-social independent actions. Combining univariate and multivariate analyses, we confirm that bilateral SI-pSTS plays a central role in dynamic social interaction perception but is much less involved when 'interactiveness' is conveyed solely by static cues. Regions in the social brain, including SI-pSTS and the extrastriate body area (EBA), showed sensitivity to both motion and interactive content. While SI-pSTS is somewhat more tuned to video interactions than EBA is, both bilateral SI-pSTS and EBA responded more strongly to social interactions than to non-interactions, and more strongly to videos than to static images. Indeed, both regions showed higher responses to interactions than to independent actions in videos and intact sequences, but not in the other conditions. Exploratory multivariate regression analyses suggest that selectivity for simple visual motion does not in itself drive interactive sensitivity in either SI-pSTS or EBA. Rather, selectivity for interactions expressed in point-light animations and selectivity for static images of bodies make positive and independent contributions to this effect across lateral occipitotemporal cortex (LOTC). Our results strongly suggest that EBA and SI-pSTS work together during dynamic interaction perception, at least when interactive information is conveyed primarily via body information. As such, our results are also in line with proposals of a third visual stream supporting dynamic social scene perception.
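
    The exploratory regression described above can be pictured as an ordinary least-squares model predicting each unit's interaction selectivity from its motion, point-light, and static-body selectivity; a sketch on synthetic data, not the authors' analysis:

    ```python
    # OLS: is interaction selectivity explained by motion selectivity, or by
    # the point-light and static-body terms, across units (e.g. voxels in LOTC)?
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    motion, point_light, static_body = rng.normal(size=(3, n))
    # Synthetic ground truth mirroring the reported pattern:
    interaction = 0.6 * point_light + 0.4 * static_body + rng.normal(0, 0.5, n)

    X = np.column_stack([motion, point_light, static_body, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, interaction, rcond=None)
    print(dict(zip(["motion", "point_light", "static_body", "intercept"],
                   coef.round(2))))  # the motion coefficient stays near zero
    ```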

    Faces and Eyes in Human Lateral Prefrontal Cortex

    Much of the work on face-selective neural activity has focused on posterior, ventral areas of the human and non-human primate brain. However, electrophysiological and fMRI studies have also identified face responses in the prefrontal cortex. Here we used fMRI to characterize these responses in the human prefrontal cortex and compare them with face selectivity in a posterior ventral region. We examined a region at the junction of the right inferior frontal sulcus and the precentral sulcus (the right inferior frontal junction, or rIFJ) that responds more to faces than to several other object categories. We find that the rIFJ and the right fusiform face area (rFFA) are broadly similar in their responses to whole faces, headless bodies, tools, and scenes. Strikingly, however, while the rFFA preferentially responds to the whole face, the rIFJ response to faces appears to be driven primarily by the eyes. This dissociation provides clues to the functional role of the rIFJ face response. We speculate on this role with reference to emotion perception, gaze perception, and behavioral relevance more generally.

    fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS

    Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of the neuronal populations underlying the BOLD response. Thus far, our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which contains individual neurons tuned to body actions, with a small population invariant to changes in the viewpoint, position, and size of the action being viewed. To test for homologous functional properties in the human STS, we used fMR-adaptation to investigate action, position, and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, locally scrambled, or differed in the action depicted, the viewing perspective, the position, or the size. While extrastriate hMT+ showed neural signals indicative of viewpoint specificity, the human STS adapted across all of these changes, as compared with viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis.
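
    The fMR-adaptation logic can be made concrete with a simple release-from-adaptation index; the function and the BOLD amplitudes below are illustrative assumptions, not the paper's definitions:

    ```python
    # 0 -> adaptation transfers fully across the change (invariance);
    # 1 -> response recovers as if two different actions were shown.
    def adaptation_index(resp_same, resp_changed, resp_different):
        return (resp_changed - resp_same) / (resp_different - resp_same)

    # Hypothetical amplitudes for pairs of point-light actions:
    print(adaptation_index(1.0, 1.1, 2.0))  # STS, viewpoint change  -> 0.1 (invariant)
    print(adaptation_index(1.0, 1.8, 2.0))  # hMT+, viewpoint change -> 0.8 (specific)
    ```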

    White-Matter Connectivity between Face-Responsive Regions in the Human Brain

    Face recognition is of major social importance and involves highly selective brain regions thought to be organized in a distributed functional network. However, the exact architecture of the interconnections between these regions remains unknown. We used functional magnetic resonance imaging to identify face-responsive regions in 22 participants and then employed diffusion tensor imaging with probabilistic tractography to establish the white-matter pathways between these functionally defined regions. We identified strong white-matter connections between the occipital face area (OFA) and the fusiform face area (FFA), with a significant right-hemisphere predominance. We found no evidence for direct anatomical connections between FFA and the superior temporal sulcus (STS), or between OFA and STS, contrary to predictions based on current cognitive models. Instead, our findings point to segregated processing along a ventral extrastriate visual pathway through OFA and FFA, and a more dorsal system connected to STS and frontoparietal areas. In addition, early occipital areas were found to have direct connections to the amygdala, which might underlie rapid recruitment of limbic areas by visual inputs that bypass more elaborate extrastriate cortical processing. These results unveil the structural neural architecture of the human face recognition system and provide new insights into how distributed face-responsive areas may work together.
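
    As a toy illustration of the final readout in such a pipeline (counting candidate streamlines whose endpoints link two functionally defined masks), here is a plain-NumPy sketch; real probabilistic tractography is far more involved, and the masks, names, and coordinates below are assumptions:

    ```python
    import numpy as np

    def connects(streamline, mask_a, mask_b):
        """True if one endpoint lies in mask_a and the other in mask_b."""
        start = tuple(streamline[0].astype(int))
        end   = tuple(streamline[-1].astype(int))
        return (mask_a[start] and mask_b[end]) or (mask_a[end] and mask_b[start])

    # Toy volume with an OFA-like and an FFA-like seed voxel:
    shape = (10, 10, 10)
    ofa_mask = np.zeros(shape, bool); ofa_mask[1, 1, 1] = True
    ffa_mask = np.zeros(shape, bool); ffa_mask[8, 8, 8] = True
    streamlines = [np.array([[1, 1, 1], [4, 4, 4], [8, 8, 8]])]
    print(sum(connects(s, ofa_mask, ffa_mask) for s in streamlines))  # -> 1
    ```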