
    Predictive Modeling for Diagnostic Tests with High Specificity, but Low Sensitivity: A Study of the Glycerol Test in Patients with Suspected Menière’s Disease

    A high specificity does not ensure that the expected benefit of a diagnostic test outweighs its cost. Problems arise, in particular, when the investigation is expensive, the prevalence of a positive test result is relatively small for the candidate patients, and the sensitivity of the test is low so that the information provided by a negative result is virtually negligible. The consequence may be that a potentially useful test does not gain broader acceptance. Here we show how predictive modeling can help to identify patients for whom the ratio of expected benefit to cost reaches an acceptable level, so that testing these patients is reasonable even though testing all patients might be considered wasteful. Our application example is based on a retrospective study of the glycerol test, which is used to corroborate a suspected diagnosis of Menière’s disease. Using the pretest hearing thresholds at up to 10 frequencies, predictions were made by K-nearest neighbor classification or logistic regression. Both methods estimate, based on results from previous patients, the posterior probability that performing the considered test in a new patient will have a positive outcome. The quality of the prediction was evaluated using leave-one-out cross-validation, making various assumptions about the costs and benefits of testing. With reference to all 356 cases, the probability of a positive test result was almost 0.4. For subpopulations selected by K-nearest neighbor classification, which was clearly superior to logistic regression, this probability could be increased to about 0.6. Thus, the odds of a positive test result were more than doubled.
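    The selection procedure described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the study's actual model or cohort: the feature dimensionality, k, threshold, and the logistic link generating the outcomes are all hypothetical; the study used up to 10 pretest hearing thresholds and 356 real cases.

    ```python
    import math
    import random

    def knn_posterior(train, query, k):
        """Estimate P(positive test outcome) for `query` as the fraction of
        positive outcomes among its k nearest neighbours in `train`."""
        nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
        return sum(label for _, label in nearest) / k

    def loocv_select(data, k, threshold):
        """Leave-one-out cross-validation: predict each patient's posterior
        from all *other* patients, and keep the outcomes of those whose
        predicted probability of a positive test reaches `threshold`."""
        kept = []
        for i, (features, label) in enumerate(data):
            rest = data[:i] + data[i + 1:]
            if knn_posterior(rest, features, k) >= threshold:
                kept.append(label)
        return kept

    # Synthetic cohort: 3 standardised pretest thresholds per patient; the
    # true probability of a positive test depends on the first feature.
    random.seed(0)
    data = []
    for _ in range(200):
        x = [random.gauss(0.0, 1.0) for _ in range(3)]
        p_pos = 1.0 / (1.0 + math.exp(-2.0 * x[0]))
        data.append((x, 1 if random.random() < p_pos else 0))

    base_rate = sum(label for _, label in data) / len(data)
    selected = loocv_select(data, k=15, threshold=0.6)
    ```

    Restricting testing to patients whose predicted posterior clears the threshold enriches the positive rate of the tested subpopulation above the base rate, which is the mechanism behind the reported improvement from roughly 0.4 to 0.6.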

    Auditory temporal processing in healthy aging: a magnetoencephalographic study

    Background: Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are thought to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20–78 years. Results: The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion: The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation) of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms.

    The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    BACKGROUND: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypotheses that (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. RESULTS: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. CONCLUSION: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

    Hands help hearing: Facilitatory audiotactile interaction at low sound-intensity levels

    Auditory and vibrotactile stimuli share similar temporal patterns. A psychophysical experiment was performed to test whether this similarity would lead to an intermodal bias in the perception of sound intensity. Nine normal-hearing subjects performed a loudness-matching task with faint tones, adjusting a probe tone to sound equally loud as a reference tone. The task was performed both when the subjects were touching and when they were not touching a tube that vibrated simultaneously with the probe tone. The subjects chose on average 12% lower intensities (p<0.01) for the probe tone when they touched the tube, suggesting facilitatory interaction between the auditory and tactile senses in normal-hearing subjects.

    On chirp stimuli and neural synchrony in the suprathreshold auditory brainstem response

    The chirp-evoked ABR has been regarded as a more synchronous response than the click-evoked ABR, referring to the belief that the chirp stimulates lower-, mid-, and higher-frequency regions of the cochlea simultaneously. In this study a variety of tools were used to analyze the synchronicity of ABRs evoked by chirp and click stimuli at 40 dB HL in 32 normal-hearing subjects aged 18 to 55 years (mean=24.8 years, SD=7.1 years). Compared to the click-evoked ABRs, the chirp-evoked ABRs showed larger wave V amplitudes, but an absence of earlier waves in the grand averages, larger wave V latency variance, smaller FFT magnitudes at the higher component frequencies, and larger phase variance at the higher component frequencies. These results strongly suggest that the chirp-evoked ABRs exhibited less synchrony than the click-evoked ABRs in this study. It is proposed that the temporal compensation offered by chirp stimuli is sufficient to increase neural recruitment (as measured by wave V amplitude), but that destructive phase interactions still exist along the cochlear partition, particularly in the low-frequency portions of the cochlea where more latency jitter is expected. The clinical implications of these findings are discussed. (C) 2010 Acoustical Society of America. [DOI: 10.1121/1.3436527]
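    The phase-variance measure used to compare synchrony between the two stimulus types can be illustrated with circular statistics: the variance of a set of phase angles is one minus the length of their mean unit vector. This is a generic sketch with made-up phase values, not the study's data or exact analysis pipeline.

    ```python
    import cmath

    def phase_variance(phases):
        """Circular variance of phase angles (radians) across recordings:
        0 = perfectly phase-locked (synchronous), 1 = uniformly scattered."""
        mean_vector = sum(cmath.exp(1j * p) for p in phases) / len(phases)
        return 1.0 - abs(mean_vector)

    # Hypothetical phases of one FFT component frequency across subjects.
    click_like = [0.10, 0.15, 0.05, 0.12, 0.08]   # tightly clustered: low latency jitter
    chirp_like = [0.10, 2.00, 4.00, 5.50, 1.20]   # scattered: high latency jitter
    ```

    Larger phase variance at the higher component frequencies, as reported for the chirp-evoked ABRs, corresponds to the scattered case: the unit phasors cancel, shrinking the mean vector and also the FFT magnitude of the grand average.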

    Localising the auditory N1m with event-related beamformers: localisation accuracy following bilateral and unilateral stimulation

    The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed as well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in the localisation of auditory evoked responses.

    Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    The perception of speech is usually an effortless and reliable process, even in highly adverse listening conditions. In addition to external sound sources, the intelligibility of speech can be reduced by degradation of the structure of the speech signal itself, for example by digital compression of sound. This kind of distortion may be even more detrimental to speech intelligibility than external distortion, given that the auditory system will not be able to utilize sound source-specific acoustic features, such as spatial location, to separate the distortion from the speech signal. The perceptual consequences of acoustic distortions on speech intelligibility have been extensively studied. However, the cortical mechanisms of speech perception in adverse listening conditions are not well known at present, particularly in situations where the speech signal itself is distorted. The aim of this thesis was to investigate the cortical mechanisms underlying speech perception in conditions where speech is less intelligible due to external distortion or as a result of digital compression. In the studies of this thesis, the intelligibility of speech was varied either by digital compression or by addition of stochastic noise. Cortical activity related to the speech stimuli was measured using magnetoencephalography (MEG). The results indicated that degradation of speech sounds by digital compression enhanced the evoked responses originating from the auditory cortex, whereas addition of stochastic noise did not modulate the cortical responses. Furthermore, it was shown that if the distortion was presented continuously in the background, the transient activity of the auditory cortex was delayed. On the perceptual level, digital compression reduced the comprehensibility of speech more than additive stochastic noise.
In addition, it was demonstrated that prior knowledge of speech content substantially enhanced the intelligibility of distorted speech, and this perceptual change was associated with an increase in cortical activity within several regions adjacent to the auditory cortex. In conclusion, the results of this thesis show that the auditory cortex is highly sensitive to the acoustic features of the distortion, while at later processing stages several cortical areas reflect the intelligibility of speech. These findings suggest that the auditory system rapidly adapts to the variability of the auditory environment and can efficiently utilize previous knowledge of speech content in deciphering acoustically degraded speech signals.

    Selective Attention Increases Both Gain and Feature Selectivity of the Human Auditory Cortex

    Background. An experienced car mechanic can often deduce what’s wrong with a car by carefully listening to the sound of the ailing engine, despite the presence of multiple sources of noise. Indeed, the ability to select task-relevant sounds for awareness, whilst ignoring irrelevant ones, constitutes one of the most fundamental of human faculties, but the underlying neural mechanisms have remained elusive. While most of the literature explains the neural basis of selective attention by means of an increase in neural gain, a number of papers propose enhancement in neural selectivity as an alternative or a complementary mechanism. Methodology/Principal Findings. Here, to address the question of whether a pure gain increase alone can explain auditory selective attention in humans, we quantified auditory cortex frequency selectivity in 20 healthy subjects by masking 1000-Hz tones with a continuous noise masker with parametrically varying frequency notches around the tone frequency (i.e., a notched-noise masker). The task of the subjects was, in different conditions, to selectively attend to either occasionally occurring slight increments in tone frequency (1020 Hz) or tones of slightly longer duration, or to ignore the sounds. In line with previous studies, in the ignore condition, the global field power (GFP) of event-related brain responses at 100 ms from stimulus onset to the 1000-Hz tones was suppressed as a function of the narrowing of the notch width. During the selective attention conditions, the suppressant effect of the noise notch width on GFP was decreased, but as a function significantly different from the multiplicative one expected on the basis of a simple gain model of selective attention. Conclusions/Significance. Our results suggest that auditory selective attention in humans cannot be explained by a gain increase alone.

    Insights on the Neuromagnetic Representation of Temporal Asymmetry in Human Auditory Cortex.

    Communication sounds are typically asymmetric in time, and human listeners are highly sensitive to this short-term temporal asymmetry. Nevertheless, causal neurophysiological correlates of auditory perceptual asymmetry remain largely elusive to our current analyses and models. Auditory modelling and animal electrophysiological recordings suggest that perceptual asymmetry results from the presence of multiple time scales of temporal integration, from the auditory periphery to central stages. To test this hypothesis we recorded auditory evoked fields (AEF) elicited by asymmetric sounds in humans. We found a strong correlation between the perceived tonal salience of ramped and damped sinusoids and the AEFs, as quantified by the amplitude of the N100m. The N100m amplitude increased with stimulus half-life time, showing a maximum difference between the ramped and damped stimuli at a modulation half-life time of 4 ms, a difference that was greatly reduced at 0.5 ms and 32 ms. This behaviour of the N100m closely parallels psychophysical data in that (i) longer half-life times are associated with a stronger tonal percept, and (ii) perceptual differences between damped and ramped sounds are maximal at a 4 ms half-life time. Interestingly, differences in evoked fields were significantly stronger in the right hemisphere, indicating some degree of hemispheric specialisation. Furthermore, the N100m magnitude was successfully explained by a pitch perception model using multiple scales of temporal integration of auditory nerve activity patterns. This striking correlation between AEFs, perception, and model predictions suggests that the physiological mechanisms involved in the processing of pitch evoked by temporally asymmetric sounds are reflected in the N100m.
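    The damped and ramped stimuli discussed above are a time-reversed pair: a carrier under an exponentially decaying envelope whose level halves every half-life time, and the same waveform played backwards. A minimal generator, with hypothetical carrier frequency, duration, and sampling rate (the abstract specifies only the half-life times):

    ```python
    import math

    def damped_sinusoid(freq_hz, half_life_ms, dur_ms, fs=16000):
        """Sinusoidal carrier under an exponentially decaying envelope that
        halves every `half_life_ms`; the 'ramped' stimulus is its reversal."""
        n = int(fs * dur_ms / 1000)
        samples = []
        for i in range(n):
            t_ms = 1000.0 * i / fs
            envelope = 0.5 ** (t_ms / half_life_ms)
            samples.append(envelope * math.sin(2 * math.pi * freq_hz * i / fs))
        return samples

    damped = damped_sinusoid(1000, half_life_ms=4, dur_ms=32)
    ramped = damped[::-1]
    ```

    Because the two stimuli share an identical long-term magnitude spectrum and differ only in the direction of the envelope, any difference in percept or in the N100m isolates the effect of temporal asymmetry itself.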