
    The BOLD signal and neurovascular coupling in autism

    BOLD (blood oxygen level dependent) fMRI (functional magnetic resonance imaging) is commonly used to study differences in neuronal activity between human populations. Because the BOLD response is an indirect measure of neuronal activity, meaningful interpretation of differences in BOLD responses between groups relies on a stable relationship between neuronal activity and the BOLD response across those groups. However, this relationship can be altered by changes in neurovascular coupling or energy consumption, which would confound the identification of differences in neuronal activity. In this review, we focus on fMRI studies of people with autism and the comparisons made between their BOLD responses and those of control groups. We examine neurophysiological differences in autism that may alter neurovascular coupling or energy use, discuss recent studies that have used fMRI to identify differences between participants with autism and control participants, and explore experimental approaches that could help attribute between-group differences in BOLD signals to either neuronal or neurovascular factors.
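    As a concrete illustration of this interpretive problem, the following minimal simulation (not from the review; the HRF parameters and time courses are illustrative) convolves identical neuronal activity with two different hemodynamic response functions, standing in for typical and altered neurovascular coupling. The resulting BOLD responses differ even though the underlying neuronal activity is the same.

```python
# Minimal sketch (illustrative parameters, not from the review): how a
# change in neurovascular coupling alters the BOLD response to identical
# neuronal activity.
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """Canonical double-gamma hemodynamic response function."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

t = np.arange(0, 30, 0.1)                 # seconds
neural = np.zeros_like(t)
neural[(t >= 2) & (t < 4)] = 1.0          # identical 2 s burst of activity

bold_typical = np.convolve(neural, hrf(t), mode="full")[: t.size]
bold_altered = np.convolve(neural, hrf(t, peak=8.0), mode="full")[: t.size]

# The two BOLD responses differ even though neuronal activity is the same,
# so a group difference in BOLD need not reflect a neuronal difference.
print(f"peak BOLD, typical coupling: {bold_typical.max():.2f}")
print(f"peak BOLD, altered coupling: {bold_altered.max():.2f}")
```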

    Investigating the Neural Basis of Audiovisual Speech Perception with Intracranial Recordings in Humans

    Speech is inherently multisensory, containing auditory information from the voice and visual information from the mouth movements of the talker. Hearing the voice is usually sufficient to understand speech; however, in noisy environments, or when audition is impaired by aging or disability, seeing the talker's mouth movements greatly improves speech perception. Although behavioral studies have firmly established this perceptual benefit, it is still not clear how the brain processes visual information from mouth movements to improve speech perception. To clarify this issue, I studied neural activity recorded from the brain surfaces of human subjects using intracranial electrodes, a technique known as electrocorticography (ECoG).

    First, I studied responses to noisy speech in the auditory cortex, specifically in the superior temporal gyrus (STG). Previous studies identified the anterior parts of the STG as unisensory, responding only to auditory stimuli. The posterior parts of the STG, in contrast, are known to be multisensory, responding to both auditory and visual stimuli, which makes them a key region for audiovisual speech perception. I examined how these different parts of the STG respond to clear versus noisy speech. I found that noisy speech decreased the amplitude and increased the across-trial variability of the response in the anterior STG. However, possibly because of its multisensory composition, the posterior STG was less sensitive to auditory noise than the anterior STG and responded similarly to clear and noisy speech. I also found that these two response patterns in the STG were separated by a sharp boundary demarcated by the posterior-most portion of Heschl's gyrus.

    Second, I studied responses to silent speech in the visual cortex. Previous studies demonstrated that the visual cortex shows response enhancement when the auditory component of speech is noisy or absent; however, it was not clear which regions of the visual cortex show this enhancement, or whether it results from top-down modulation by a higher-order region. To test this, I first mapped the receptive fields of different regions in the visual cortex and then measured their responses to visual (silent) and audiovisual speech stimuli. I found that visual regions with central receptive fields show greater response enhancement to visual speech, possibly because these regions receive more visual information from mouth movements. I found similar response enhancement to visual speech in the frontal cortex, specifically in the inferior frontal gyrus and the premotor and dorsolateral prefrontal cortices, which have been implicated in speechreading by previous studies. I showed that these frontal regions display strong functional connectivity with visual regions that have central receptive fields during speech perception.
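    A sketch of the kind of across-trial analysis described above, under the assumption that the response measure is per-trial high-gamma power in a trials-by-time matrix; the data and numbers here are simulated, not the dissertation's:

```python
# Illustrative sketch (data and values are hypothetical): comparing the
# amplitude and across-trial variability of an ECoG response at one
# electrode for clear versus noisy speech.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 60, 200            # trials x time points (hypothetical)
clear = 1.0 + 0.2 * rng.standard_normal((n_trials, n_samples))
noisy = 0.6 + 0.4 * rng.standard_normal((n_trials, n_samples))

def summarize(trials):
    """Mean response amplitude and across-trial variability."""
    per_trial = trials.mean(axis=1)      # average response on each trial
    return per_trial.mean(), per_trial.std(ddof=1)

for label, data in [("clear", clear), ("noisy", noisy)]:
    amp, sd = summarize(data)
    print(f"{label}: amplitude={amp:.2f}, across-trial SD={sd:.2f}")
```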

    The mechanisms of tinnitus: perspectives from human functional neuroimaging

    In this review, we highlight the contribution of advances in human neuroimaging to the current understanding of the central mechanisms underpinning tinnitus and explain how interpretations of neuroimaging data have been guided by animal models. The primary motivation for studying the neural substrates of tinnitus in humans has been to demonstrate objectively its representation in the central auditory system and to develop a better understanding of its diverse pathophysiology and of the functional interplay between sensory, cognitive and affective systems. The ultimate goal of neuroimaging is to identify subtypes of tinnitus in order to better inform treatment strategies. The three neural mechanisms considered in this review may provide a basis for tinnitus classification. While human neuroimaging evidence strongly implicates the central auditory system and emotional centres in tinnitus, evidence for the precise contribution of the three mechanisms is unclear because the data are somewhat inconsistent. We consider a number of methodological issues limiting the field of human neuroimaging and recommend approaches to overcome potential inconsistency in results arising from poorly matched participants, lack of appropriate controls and low statistical power.

    Contributions of cortical feedback to sensory processing in primary visual cortex

    Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998), but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflect not only sensory processing but also internal brain states.

    Noninvasive fMRI investigation of interaural level difference processing in the rat auditory subcortex


    Functional ultrasound reveals effects of MRI acoustic noise on brain function

    Loud acoustic noise from the scanner during functional magnetic resonance imaging (fMRI) can affect the functional connectivity (FC) observed in the resting state, but the exact effect of MRI acoustic noise on resting-state FC is not well understood. Functional ultrasound (fUS) is a neuroimaging method that visualizes brain activity based on relative cerebral blood volume (rCBV), a neurovascular coupling response similar to that measured by fMRI, but without the audible acoustic noise. In this study, we investigated the effects of different acoustic noise levels (silent, 80 dB, and 110 dB) on FC by measuring resting-state fUS (rsfUS) in awake mice in an environment similar to that of fMRI measurement. We then compared the results to those of resting-state fMRI (rsfMRI) conducted using an 11.7 Tesla scanner. The rsfUS experiments revealed a significant reduction in FC between the retrosplenial dysgranular and auditory cortices (0.56 ± 0.07 at silence vs 0.05 ± 0.05 at 110 dB, p=.01) and a significant increase in FC anticorrelation between the infralimbic and motor cortices (−0.21 ± 0.08 at silence vs −0.47 ± 0.04 at 110 dB, p=.017) as acoustic noise increased from silence to 80 dB and 110 dB, with increased consistency of FC patterns between rsfUS and rsfMRI under the louder noise conditions. Event-related auditory stimulation experiments using fUS showed strong positive rCBV changes (16.5% ± 2.9% at 110 dB) in the auditory cortex and negative rCBV changes (−6.7% ± 0.8% at 110 dB) in the motor cortex, both regions being constituents of the brain network altered by the presence of acoustic noise in the resting-state experiments. Anticorrelation between constituent regions of the default mode network (such as the infralimbic cortex) and those of task-positive sensorimotor networks (such as the motor cortex) is known to be an important feature of brain network antagonism and has been studied as a biological marker of brain dysfunction and disease. This study suggests that attention should be paid to the acoustic noise level when using rsfMRI to evaluate the anticorrelation between the default mode network and task-positive sensorimotor networks.
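    For readers unfamiliar with how FC values such as 0.56 or −0.47 arise, a minimal sketch follows: seed-based FC is commonly computed as the Pearson correlation between two regional time series. The regions and signals below are simulated, not the study's data:

```python
# Minimal sketch (simulated signals): functional connectivity between two
# regions as the Pearson correlation of their resting-state time series.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 600
shared = rng.standard_normal(n_timepoints)           # common fluctuation
region_a = shared + 0.5 * rng.standard_normal(n_timepoints)
region_b = shared + 0.5 * rng.standard_normal(n_timepoints)

# Positive r indicates correlated activity; negative r, anticorrelation.
fc = np.corrcoef(region_a, region_b)[0, 1]
print(f"functional connectivity (Pearson r): {fc:.2f}")
```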

    Time Scales of Auditory Habituation in the Amygdala and Cerebral Cortex

    Habituation is a fundamental form of learning manifested by a decrement of neuronal responses to repeated sensory stimulation. Habituation also occurs at the behavioral level, manifested by reduced emotional reactions to repeatedly presented affective stimuli. It is, however, not clear which brain areas show a decline in activity during repeated sensory stimulation on the same time scale as the reduction in experienced valence and arousal, and whether these areas can be delineated from other brain areas with habituation effects on faster or slower time scales. These questions were addressed using functional magnetic resonance imaging acquired during repeated stimulation with piano melodies. The magnitude of functional responses in the laterobasal amygdala and in related cortical areas, and that of valence and arousal ratings given after each music presentation, declined in parallel over the experiment. In contrast to this long-term habituation (43 min), short-term decreases occurring within seconds were found in the primary auditory cortex. Sustained responses that remained throughout the whole investigated time period were detected in the ventrolateral prefrontal cortex, extending to the dorsal part of the anterior insular cortex. These findings identify an amygdalocortical network that forms the potential basis of affective habituation in humans.
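    One common way to quantify a habituation time scale like those contrasted above is to fit an exponential decay to response amplitudes across repeated presentations; the following sketch uses simulated data, not the study's, and the decay-to-a-floor model is an assumption:

```python
# Illustrative sketch (simulated data): estimating a habituation time
# constant by fitting an exponential decrement to repeated-presentation
# response amplitudes.
import numpy as np
from scipy.optimize import curve_fit

def decay(n, a, tau, baseline):
    """Response to the n-th presentation: exponential decrement to a floor."""
    return a * np.exp(-n / tau) + baseline

rng = np.random.default_rng(2)
presentations = np.arange(40)
responses = decay(presentations, a=1.0, tau=8.0, baseline=0.3)
responses += 0.05 * rng.standard_normal(presentations.size)

(a, tau, baseline), _ = curve_fit(decay, presentations, responses,
                                  p0=(1.0, 5.0, 0.0))
print(f"habituation time constant: {tau:.1f} presentations")
```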