1,760 research outputs found

    An information theoretic characterisation of auditory encoding.

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams such as speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses fewer computational resources when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while also showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands fewer computational resources to encode redundant signals than signals with high information content.
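    The entropy measure described above can be illustrated with a short sketch. This is a minimal, hypothetical example (the function name and symbolic pitch labels are assumptions, not materials from the study), computing Shannon entropy over a discrete pitch sequence:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(sequence):
        """Shannon entropy of a discrete sequence, in bits per symbol."""
        counts = Counter(sequence)
        n = len(sequence)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # A redundant (repetitive) pitch sequence carries no information...
    low = shannon_entropy(["C4"] * 8)                     # 0.0 bits
    # ...while a sequence drawn evenly from four pitches carries 2 bits/symbol.
    high = shannon_entropy(["C4", "D4", "E4", "F4"] * 2)  # 2.0 bits
    ```

    Sequences with higher entropy are, in this framing, the ones predicted to demand more computational resources from PT.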

    Heschl's gyrus is more sensitive to tone level than non-primary auditory cortex

    Previous neuroimaging studies generally demonstrate a growth in the cortical response with an increase in sound level. However, the details of the shape and topographic location of such growth remain largely unknown. One limiting methodological factor has been the relatively sparse sampling of sound intensities. Additionally, most studies have either analysed the entire auditory cortex without differentiating primary and non-primary regions or have limited their analyses to Heschl's gyrus (HG). Here, we characterise the pattern of responses to a 300-Hz tone, presented in 6-dB steps from 42 to 96 dB sound pressure level, as a function of its sound level within three anatomically defined auditory areas: the primary area, on HG, and two non-primary areas, consisting of a small area lateral to the axis of HG (the anterior lateral area, ALA) and the posterior part of auditory cortex (the planum temporale, PT). Extent and magnitude of auditory activation increased non-linearly with sound level. In HG, the extent and magnitude were more sensitive to increasing level than in ALA and PT. Thus, HG appears to have a larger involvement in sound-level processing than does ALA or PT.
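    The relationship between the 6-dB level steps used here and sound pressure can be sketched numerically. This is a hypothetical illustration (variable names are assumptions), not code from the study:

    ```python
    # Tone levels as described in the study: 42 to 96 dB SPL in 6-dB steps.
    levels_db = list(range(42, 97, 6))      # [42, 48, 54, ..., 96] -> 10 levels
    # Sound pressure scales as 10^(dB/20), so each 6-dB step nearly
    # doubles the sound pressure delivered to the listener.
    step_pressure_ratio = 10 ** (6 / 20)    # ~1.995
    ```

    In other words, the stimulus set spans a 54-dB range, roughly a 500-fold range in sound pressure, sampled more densely than in earlier studies.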

    Reduced responsiveness is an essential feature of chronic fatigue syndrome: An fMRI study

    BACKGROUND: Although the neural mechanism of chronic fatigue syndrome has been investigated by a number of researchers, it remains poorly understood. METHODS: Using functional magnetic resonance imaging, we studied brain responsiveness in 6 male chronic fatigue syndrome patients and in 7 age-matched male healthy volunteers. Responsiveness of auditory cortices to transient, short-lived noise reduction was measured while subjects performed a fatigue-inducing continual visual search task. RESULTS: Responsiveness of the task-dependent brain regions was decreased after the fatigue-inducing task in both the normal and chronic fatigue syndrome subjects, and the decrement of the responsiveness was equivalent between the 2 groups. In contrast, during the fatigue-inducing period, although responsiveness of auditory cortices remained constant in the normal subjects, it was attenuated in the chronic fatigue syndrome patients. In addition, the rate of this attenuation was positively correlated with the subjective sensation of fatigue as measured using a fatigue visual analogue scale immediately before the magnetic resonance imaging session. CONCLUSION: Chronic fatigue syndrome may be characterised by attenuation of the responsiveness to stimuli not directly related to the fatigue-inducing task.

    Auditory feedback control mechanisms do not contribute to cortical hyperactivity within the voice production network in adductor spasmodic dysphonia

    Adductor spasmodic dysphonia (ADSD), the most common form of spasmodic dysphonia, is a debilitating voice disorder characterized by hyperactivity and muscle spasms in the vocal folds during speech. Prior neuroimaging studies have noted excessive brain activity during speech in ADSD participants compared to controls. Speech involves an auditory feedback control mechanism that generates motor commands aimed at eliminating disparities between desired and actual auditory signals. Thus, excessive neural activity in ADSD during speech may reflect, at least in part, increased engagement of the auditory feedback control mechanism as it attempts to correct vocal production errors detected through audition. To test this possibility, functional magnetic resonance imaging was used to identify differences between ADSD participants and age-matched controls in (i) brain activity when producing speech under different auditory feedback conditions, and (ii) resting state functional connectivity within the cortical network responsible for vocalization. The ADSD group had significantly higher activity than the control group during speech (compared to a silent baseline task) in three left-hemisphere cortical regions: ventral Rolandic (sensorimotor) cortex, anterior planum temporale, and posterior superior temporal gyrus/planum temporale. This was true for speech while auditory feedback was masked with noise as well as for speech with normal auditory feedback, indicating that the excess activity was not the result of auditory feedback control mechanisms attempting to correct for perceived voicing errors in ADSD. 
Furthermore, the ADSD group had significantly higher resting state functional connectivity between sensorimotor and auditory cortical regions within the left hemisphere, as well as between the left and right hemispheres, consistent with the view that excessive motor activity frequently co-occurs with increased auditory cortical activity in individuals with ADSD.

    Early and Late Stage Mechanisms for Vocalization Processing in the Human Auditory System

    The human auditory system is able to rapidly process incoming acoustic information, actively filtering, categorizing, or suppressing different elements of the incoming acoustic stream. Vocalizations produced by other humans (conspecifics) likely represent the most ethologically relevant sounds encountered by hearing individuals. Subtle acoustic characteristics of these vocalizations aid in determining the identity, emotional state, health, intent, etc. of the producer. The ability to assess vocalizations is likely subserved by a specialized network of structures and functional connections that are optimized for this stimulus class. Early elements of this network would show sensitivity to the most basic acoustic features of these sounds; later elements may show categorically selective response patterns that represent high-level semantic organization of different classes of vocalizations. A combination of functional magnetic resonance imaging and electrophysiological studies was performed to investigate and describe some of the earlier- and later-stage mechanisms of conspecific vocalization processing in human auditory cortices. Using fMRI, cortical representations of harmonic signal content were found along the middle superior temporal gyri, between primary auditory cortices along Heschl's gyri and the superior temporal sulci, higher-order auditory regions. Additionally, electrophysiological findings demonstrated a parametric response profile to harmonic signal content. Utilizing a novel class of vocalizations, human-mimicked versions of animal vocalizations, we demonstrated the presence of a left-lateralized cortical vocalization processing hierarchy for conspecific vocalizations, contrary to previous findings describing similar bilateral networks. 
This hierarchy originated near primary auditory cortices and was further supported by auditory evoked potential data suggesting differential temporal processing dynamics for conspecific human vocalizations versus those produced by other species. Taken together, these results suggest that there are auditory cortical networks highly optimized for processing utterances produced by the human vocal tract. Understanding the function and structure of these networks will be critical for advancing the development of novel communicative therapies and the design of future assistive hearing devices.

    Brain activity underlying the recovery of meaning from degraded speech: a functional near-infrared spectroscopy (fNIRS) study

    The purpose of this study was to establish whether functional near-infrared spectroscopy (fNIRS), an emerging brain-imaging technique based on optical principles, is suitable for studying the brain activity that underlies effortful listening. In an event-related fNIRS experiment, normally-hearing adults listened to sentences that were either clear or degraded (noise vocoded). These sentences were presented simultaneously with a non-speech distractor, and on each trial participants were instructed to attend either to the speech or to the distractor. The primary region of interest for the fNIRS measurements was the left inferior frontal gyrus (LIFG), a cortical region involved in higher-order language processing. The fNIRS results confirmed findings previously reported in the functional magnetic resonance imaging (fMRI) literature. Firstly, the LIFG exhibited an elevated response to degraded versus clear speech, but only when attention was directed towards the speech. This attention-dependent increase in frontal brain activation may be a neural marker for effortful listening. Secondly, during attentive listening to degraded speech, the haemodynamic response peaked significantly later in the LIFG than in superior temporal cortex, possibly reflecting the engagement of working memory to help reconstruct the meaning of degraded sentences. The homologous region in the right hemisphere may play an equivalent role to the LIFG in some left-handed individuals. In conclusion, fNIRS holds promise as a flexible tool to examine the neural signature of effortful listening.

    Human brain mechanisms of auditory and audiovisual selective attention

    Selective attention refers to the process in which certain information is actively selected for conscious processing, while other information is ignored. The aim of the present studies was to investigate the human brain mechanisms of auditory and audiovisual selective attention with functional magnetic resonance imaging (fMRI), electroencephalography (EEG) and magnetoencephalography (MEG). The main focus was on attention-related processing in the auditory cortex. It was found that selective attention to sounds strongly enhances auditory cortex activity associated with processing the sounds. In addition, the amplitude of this attention-related modulation was shown to increase with the presentation rate of attended sounds. Attention to the pitch of sounds and to their location appeared to enhance activity in overlapping auditory-cortex regions. However, attention to location produced stronger activity than attention to pitch in the temporo-parietal junction and frontal cortical regions. In addition, a study on bimodal attentional selection found stronger audiovisual than auditory or visual attention-related modulations in the auditory cortex. These results are discussed in light of Näätänen's attentional-trace theory and other research concerning the brain mechanisms of selective attention.

    Effects of cardiac gating on fMRI of the human auditory system

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (leaves 20-22).
    Guimaraes et al. (1998) showed that sound-evoked fMRI activation in the auditory midbrain was significantly improved by a method that reduces image signal variability associated with cardiac-related brainstem motion. The method, cardiac gating, synchronizes image acquisition to a constant phase of the cardiac cycle. Since that study, several improvements to auditory fMRI have been made, and it is unclear whether cardiac gating still yields worthwhile benefits. The present study re-evaluated the effects of cardiac gating for detecting fMRI activation under current auditory fMRI standards. In 11 experiments, we directly compared fMRI activation for images acquired with a fixed repetition time (ungated) vs. those acquired by triggering image acquisition (gated) on the oxygen saturation at the fingertip (SpO2), an indirect measure of cardiac activity. Three of these experiments compared the effects of gating with the SpO2 signal vs. gating with the R-wave of the electrocardiogram (ECG). fMRI activation was routinely detected at all levels of the auditory pathway from the cochlear nucleus to the auditory cortex. Compared to ungated acquisitions, cardiac gating with the SpO2 signal reduced image signal variability in all centers of the auditory system and increased the magnitude of activation in the inferior colliculus (p < 0.01) and medial geniculate body (p < 0.1). Simultaneous measurements of the SpO2 and ECG indicated that the peak of the SpO2 signal followed the ECG R-wave by approximately 400 ms, placing early images in a motion-stable phase of the cardiac cycle during SpO2-gated experiments. This may account for the fact that image signal variability with SpO2-gated acquisitions was always lower than with ECG-gated acquisitions. 
    That sound-evoked activation could be regularly detected without cardiac gating indicates that gating may not be worth the minimal experimental complexity it entails. However, in experiments attempting to measure responses to sounds that evoke small changes in fMRI signal, especially in the auditory midbrain or thalamus, or when one is interested in individual variability rather than group averages, gating may prove extremely beneficial.
    by Andrew R. Dykstra. S.M.
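    The gating principle described above (triggering each acquisition on a fixed feature of the pulse-oximeter waveform) can be sketched roughly as follows. This is a hypothetical illustration, assuming the SpO2 waveform is available as a sampled array; the function name and simple peak-detection rule are assumptions, not the scanner's actual trigger implementation:

    ```python
    def spo2_trigger_times(signal, fs):
        """Return times (s) of local maxima in a sampled SpO2 waveform.
        In a gated acquisition, each image would be triggered at such a
        peak, which the study found lags the ECG R-wave by ~400 ms."""
        peaks = [i for i in range(1, len(signal) - 1)
                 if signal[i - 1] < signal[i] >= signal[i + 1]]
        return [i / fs for i in peaks]

    # Usage with a synthetic 1-Hz "pulse" sampled at 100 Hz for 3 s:
    import math
    fs = 100
    wave = [math.sin(2 * math.pi * t / fs) for t in range(3 * fs)]
    triggers = spo2_trigger_times(wave, fs)  # [0.25, 1.25, 2.25]
    ```

    Triggering at a consistent waveform landmark is what places each image at a constant, motion-stable phase of the cardiac cycle.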