76 research outputs found

    Vestibular receptors contribute to the cortical auditory evoked potentials

    Abstract: Acoustic sensitivity of the vestibular apparatus is well-established, but the contribution of vestibular receptors to the late auditory evoked potentials of cortical origin is unknown. Evoked potentials from 500 Hz tone pips were recorded using 70-channel EEG at several intensities below and above the vestibular acoustic threshold, as determined by vestibular evoked myogenic potentials (VEMPs). In healthy subjects, both mid- and long-latency auditory evoked potentials (AEPs), consisting of Na, Pa, N1 and P2 waves, were observed in the sub-threshold conditions. However, on passing through the vestibular threshold, systematic changes were observed in the morphology of the potentials and in the intensity dependence of their amplitude and latency. These changes were absent in a patient without functioning vestibular receptors. In particular, for the healthy subjects there was a fronto-central negativity, which appeared at about 42 ms, referred to as an N42, prior to the AEP N1. Source analysis of both the N42 and N1 indicated involvement of cingulate cortex, as well as bilateral superior temporal cortex. Our findings are best explained by vestibular receptors contributing to what were hitherto considered purely auditory evoked potentials; in addition, they tentatively identify a new component that appears to be primarily of vestibular origin.

    Source analysis of short and long latency vestibular-evoked potentials (VsEPs) produced by left versus right ear air-conducted 500 Hz pips

    Todd et al. (2014) recently demonstrated vestibular-dependent changes both in the morphology and in the intensity dependence of auditory evoked potentials (AEPs) when passing through the vestibular threshold as determined by vestibular evoked myogenic potentials (VEMPs). In this paper we extend this work by comparing left vs. right ear stimulation and by conducting a source analysis of the resulting evoked potentials of short and long latency. Ten healthy, right-handed subjects were recruited, and evoked potentials were recorded to both left- and right-ear sound stimulation, above and below vestibular threshold. Below VEMP threshold, typical AEPs were recorded, consisting of mid-latency (MLR) waves Na and Pa followed by long-latency AEPs (LAEPs) N1 and P2. In the supra-threshold condition, the expected changes in morphology were observed, consisting of: (1) short-latency vestibular evoked potentials (VsEPs) which have no auditory correlate, i.e. the ocular VEMP (OVEMP) and inion response related potentials; (2) a later deflection, labelled N42/P52, followed by the LAEPs N1 and P2. Statistical analysis of the vestibular-dependent responses indicated a contralateral effect for inion-related short-latency responses and a left-ear/right-hemisphere advantage for the long-latency responses. Source analysis indicated that the short-latency effects may be mediated by a contralateral projection to left cerebellum, while the long-latency effects were mediated by a contralateral projection to right cingulate cortex. In addition, we found evidence of a possible vestibular contribution to the auditory T-complex in radial temporal lobe sources. These last results raise the possibility that acoustic activation of the otolith organs could contribute to auditory processing.

    Vestibular evoked potentials (VsEPs) of cortical origin produced by impulsive acceleration applied at the nasion

    Abstract: We report the results of a study to record vestibular evoked potentials (VsEPs) of cortical origin produced by impulsive acceleration (IA). In a sample of 12 healthy participants, evoked potentials recorded by 70-channel electroencephalography were obtained by IA stimulation at the nasion and compared with evoked potentials from the same stimulus applied to the forefingers. The nasion stimulation gave rise to a series of positive and negative deflections in the latency range of 26–72 ms, which were dependent on the polarity of the applied IA. In contrast, evoked potentials from the fingers were characterised by a single N50/P50 deflection at about 50 ms and were polarity invariant. Source analysis confirmed that the finger evoked potentials were somatosensory in origin, i.e. were somatosensory evoked potentials, and suggested that the nasion evoked potentials plausibly included vestibular midline and frontal sources, as well as contributions from the eyes, and thus were likely VsEPs. These results suggest a promising new method for assessing the central vestibular system by means of VsEPs produced by IA applied to the head.

    Weak Vestibular Response in Persistent Developmental Stuttering

    From Frontiers via Jisc Publications Router. History: collection 2021; received 2021-01-31; accepted 2021-06-14; epub 2021-09-01. Publication status: Published. Vibrational energy created at the larynx during speech deflects vestibular mechanoreceptors in humans (Todd et al., 2008; Curthoys, 2017; Curthoys et al., 2019). Vestibular-evoked myogenic potential (VEMP), an indirect measure of vestibular function, was assessed in 15 participants who stutter and a control group of 15 non-stuttering participants matched for age and sex. VEMP amplitude was 8.5 dB smaller in the stutter group than in the non-stutter group (p = 0.035, 95% CI [−0.9, −16.1], t = −2.1, d = −0.8, conditional R² = 0.88). The finding is subclinical as regards gravitoinertial function and is interpreted with regard to speech-motor function in stuttering. There is overlap between brain areas receiving vestibular innervation and brain areas identified as important in studies of persistent developmental stuttering, including the auditory brainstem, cerebellar vermis, and the temporo-parietal junction. The finding supports the disruptive rhythm hypothesis (Howell et al., 1983; Howell, 2004), in which sensory inputs additional to the speaker's own speech audition are fluency-enhancing when they coordinate with ongoing speech.
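The 8.5 dB group difference can be translated into a linear amplitude ratio using the standard 20·log10 convention for amplitude (as opposed to power) quantities; a quick sketch (only the 8.5 dB figure is from the abstract, the rest is the generic conversion):

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a decibel difference to a linear amplitude ratio,
    using the 20 * log10 convention for amplitude quantities."""
    return 10 ** (db / 20)

# The stutter group's VEMP amplitude was 8.5 dB below the control
# group's, i.e. roughly 38% of the control amplitude.
ratio = db_to_amplitude_ratio(-8.5)
print(round(ratio, 3))  # → 0.376
```

In other words, an 8.5 dB deficit corresponds to the stuttering group's VEMPs being a bit over a third of the control amplitude on a linear scale.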

    On prediction of aided behavioural measures using speech auditory brainstem responses and decision trees

    From PLOS via Jisc Publications Router. History: collection 2021; received 2021-06-03; accepted 2021-11-03; epub 2021-11-16. Publication status: Published. Funders: Manchester Biomedical Research Centre (GB; funder-id: http://dx.doi.org/10.13039/100014653); Engineering and Physical Sciences Research Council (GB; funder-id: http://dx.doi.org/10.13039/501100000266; Grant EP/M026728/1). Current clinical strategies to assess benefit from hearing aids (HAs) are based on self-reported questionnaires and speech-in-noise (SIN) tests, both of which require behavioural cooperation. Objective measures based on auditory brainstem responses (ABRs) to speech stimuli would not require the individual's cooperation. Here, we re-analysed an existing dataset to predict behavioural measures from speech-ABRs using regression trees. Ninety-two HA users completed a self-reported questionnaire (SSQ-Speech) and performed two aided SIN tests: sentences in noise (BKB-SIN) and vowel-consonant-vowels (VCV) in noise. Speech-ABRs were evoked by a 40 ms [da] and recorded in 2 × 2 conditions: aided vs. unaided and quiet vs. background noise. For each recording condition, two sets of features were extracted: (1) amplitudes and latencies of speech-ABR peaks, and (2) amplitudes and latencies of speech-ABR F0 encoding. Two regression trees were fitted for each of the three behavioural measures, one per feature set, with age, digit-span forward and backward, and pure tone average (PTA) as additional candidate predictors. The PTA was the only predictor in the SSQ-Speech trees. In the BKB-SIN trees, performance was predicted by the aided latency of peak F in quiet for participants with PTAs between 43 and 61 dB HL. In the VCV trees, performance was predicted by the aided F0-encoding latency and the aided amplitude of peak VA in quiet for participants with PTAs ≤ 47 dB HL. These findings indicate that PTA was more informative than any speech-ABR measure, as the latter were relevant only for a subset of the participants. Speech-ABRs evoked by a 40 ms [da] are therefore not a clinical predictor of behavioural measures in HA users.
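The core of the regression-tree approach is that the root node picks whichever candidate predictor, split at its best threshold, explains the outcome best; this is why PTA dominating the trees means PTA is the most informative variable. A minimal depth-1 sketch in plain Python, with entirely synthetic data and illustrative names (`pta`, `abr_latency`, `sin_score` are not the study's actual values):

```python
def sse(ys):
    """Sum of squared errors of a list of outcomes around their mean."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def best_split(xs, ys):
    """Find the threshold on one predictor that minimises the total
    SSE of the two resulting leaves (a depth-1 regression tree)."""
    best = (float("inf"), None)
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        cost = sse(left) + sse(right)
        if cost < best[0]:
            best = (cost, t)
    return best  # (cost, threshold)

def pick_predictor(features, ys):
    """Among candidate predictors, return the one whose best split
    explains the outcome best -- the root-node choice of a CART tree."""
    return min(features.items(), key=lambda kv: best_split(kv[1], ys)[0])[0]

# Synthetic example: the SIN score tracks PTA far more closely than the
# noisy speech-ABR latency, so PTA is chosen at the root.
pta = [20, 30, 40, 50, 60, 70]
abr_latency = [7.1, 6.8, 7.3, 6.9, 7.2, 7.0]
sin_score = [90, 85, 70, 65, 50, 45]
print(pick_predictor({"pta": pta, "abr_latency": abr_latency}, sin_score))  # → pta
```

The study itself presumably used a full CART implementation with pruning and cross-validation; this sketch only shows the splitting criterion that makes one predictor win over another.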

    Effects of Age and Noise Exposure on Proxy Measures of Cochlear Synaptopathy

    Although there is strong histological evidence for age-related synaptopathy in humans, evidence for the existence of noise-induced cochlear synaptopathy in humans is inconclusive. Here, we sought to evaluate the relative contributions of age and noise exposure to cochlear synaptopathy using a series of electrophysiological and behavioral measures. We extended an existing cohort by including 33 adults in the age range 37 to 60, resulting in a total of 156 participants; the additional older participants weakened the correlation between lifetime noise exposure and age. We used six independent regression models (corrected for multiple comparisons), in which age, lifetime noise exposure, and high-frequency audiometric thresholds were used to predict measures of synaptopathy, with a focus on differential measures. The models for auditory brainstem responses, envelope-following responses, interaural phase discrimination, and the coordinate response measure of speech perception were not statistically significant. However, both age and noise exposure were significant predictors of performance on the digit triplet test of speech perception in noise, with greater noise exposure (unexpectedly) predicting better performance in the 80 dB sound pressure level (SPL) condition and greater age predicting better performance in the 40 dB SPL condition. Amplitude modulation detection thresholds were also significantly predicted by age, with older listeners performing better than younger listeners at 80 dB SPL. Overall, the results are inconsistent with the predicted effects of synaptopathy.

    Supra-threshold auditory brainstem response amplitudes in humans: Test-retest reliability, electrode montage and noise exposure

    The auditory brainstem response (ABR) is a sub-cortical evoked potential in which a series of well-defined waves occur in the first 10 ms after the onset of an auditory stimulus. Wave V of the ABR, particularly wave V latency, has been shown to be remarkably stable over time in individual listeners. However, little attention has been paid to the reliability of wave I, which reflects auditory nerve activity. This ABR component has attracted interest recently, as wave I amplitude has been identified as a possible non-invasive measure of noise-induced cochlear synaptopathy. The current study aimed to determine whether ABR wave I amplitude has sufficient test-retest reliability to detect impaired auditory nerve function in an otherwise normal-hearing listener. Thirty normal-hearing females were tested, divided into equal low- and high-noise-exposure groups. The stimulus was an 80 dB nHL click. ABR recordings were made from the ipsilateral mastoid and from the ear canal (using a tiptrode). Although there was some variability between listeners, wave I amplitude had high test-retest reliability, with an intraclass correlation coefficient (ICC) comparable to that for wave V amplitude. There were slight gains in reliability for wave I amplitude when recording from the ear canal (ICC of 0.88) compared to the mastoid (ICC of 0.85). The summating potential (SP) and the ratio of SP to wave I were also quantified and found to be much less reliable than measures of wave I and wave V amplitude. Finally, we found no significant differences in the amplitude of any wave components between the low- and high-noise-exposure groups. We conclude that, if the other sources of between-subject variability can be controlled, wave I amplitude is sufficiently reliable to accurately characterize individual differences in auditory nerve function.
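Test-retest reliability of this kind is typically quantified with a two-way intraclass correlation such as ICC(2,1): subjects as rows, sessions as columns, with between-subject variance compared against session and error variance. A minimal plain-Python sketch, using made-up wave I amplitudes (the function and the data are illustrative; the abstract does not state which ICC variant was used):

```python
def icc_2_1(data):
    """Two-way random-effects, absolute-agreement, single-measure
    ICC(2,1) for test-retest data: one row per subject, one column
    per session."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for rows (subjects), columns (sessions) and error.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    mse = sum(
        (data[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n) for j in range(k)
    ) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical wave I amplitudes (µV) for five listeners, two sessions:
amplitudes = [
    [0.31, 0.29],
    [0.45, 0.47],
    [0.22, 0.25],
    [0.38, 0.36],
    [0.51, 0.50],
]
print(round(icc_2_1(amplitudes), 2))  # → 0.98
```

An ICC near 1 means between-listener differences dwarf session-to-session noise, which is exactly the property needed for wave I amplitude to flag an individual with an abnormally small response.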

    Hallucinations in Hearing Impairment: How Informed Are Clinicians?

    Background and Hypothesis: Patients with hearing impairment (HI) may experience hearing sounds without external sources, ranging from random meaningless noises (tinnitus) to music and other auditory hallucinations (AHs) with meaningful qualities. To ensure appropriate assessment and management, clinicians need to be aware of these phenomena. However, sensory impairment studies have shown that such clinical awareness is low. Study Design: An online survey was conducted investigating awareness of AHs among clinicians and their opinions about these hallucinations. Study Results: In total, 125 clinicians (68.8% audiologists; 18.4% Ear-Nose-Throat [ENT] specialists) across 10 countries participated in the survey. The majority (96.8%) were at least slightly aware of AHs in HI. About 69.6% of participants reported encountering patients with AHs less than once every 6 months in their clinic. Awareness was significantly associated with clinicians' belief that patients feel anxious about their hallucinations (β = .018, t(118) = 2.47, P < .01), their belief that clinicians should be more aware of these hallucinations (β = .018, t(118) = 2.60, P < .01), and with clinicians' confidence in their skills to assess them (β = .017, t(118) = 2.63, P < .01). Clinicians felt underequipped to treat AHs (Median = 31; U = 1838; P (FDR-adjusted) < .01). Conclusions: Awareness of AHs among the surveyed clinicians was high. Yet, the low frequency of encounters with hallucinating patients and the clinicians' belief in music as the most commonly perceived sound suggest unreported cases. Clinicians in this study expressed a lack of confidence regarding the assessment and treatment of AHs and welcome more information.