
    Benefit of temporal fine structure to speech perception in noise measured with controlled temporal envelopes

    Previous studies have assessed the importance of temporal fine structure (TFS) for speech perception in noise by comparing the performance of normal-hearing listeners in two conditions. In one condition, the stimuli have useful information in both their temporal envelopes and their TFS. In the other condition, the stimuli are vocoded and contain useful information only in their temporal envelopes. However, these studies have confounded differences in TFS with differences in the temporal envelope. The present study manipulated the analytic signal of the stimuli to preserve the temporal envelope across conditions with different TFS. The inclusion of informative TFS improved speech reception thresholds for sentences presented in steady and modulated noise, demonstrating that informative TFS provides significant benefits even when the temporal envelope is controlled. It is likely that the results of previous studies largely reflect the benefits of TFS, rather than uncontrolled effects of changes in the temporal envelope.
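    To illustrate the kind of manipulation involved, the sketch below uses the analytic signal (via the Hilbert transform) to decompose a signal into its temporal envelope and TFS, then imposes that envelope on a different carrier. This is a minimal, hypothetical illustration assuming NumPy/SciPy; the study's actual processing chain (e.g. per-band filtering and its specific TFS manipulation) is not reproduced here.

        import numpy as np
        from scipy.signal import hilbert

        def impose_envelope(signal, carrier):
            """Keep `signal`'s temporal envelope but take the temporal fine
            structure (TFS) from `carrier`. Envelope/TFS decomposition via the
            analytic signal is only well defined for narrowband signals, so in
            practice this would be applied within each filter band."""
            envelope = np.abs(hilbert(signal))        # temporal envelope
            tfs = np.cos(np.angle(hilbert(carrier)))  # carrier fine structure
            return envelope * tfs

        # Hypothetical example: impose a 4-Hz modulated envelope on a noise carrier
        fs = 16000
        t = np.arange(fs) / fs
        modulated_tone = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
        noise = np.random.default_rng(0).standard_normal(fs)
        hybrid = impose_envelope(modulated_tone, noise)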

    Preoperative brain imaging using functional near-infrared spectroscopy helps predict cochlear implant outcome in deaf adults

    Currently, it is not possible to accurately predict how well a deaf individual will be able to understand speech when hearing is (re)introduced via a cochlear implant (CI). Differences in brain organisation following deafness are thought to contribute to variability in speech understanding with a CI and may offer unique insights that could help to predict outcomes more reliably. An emerging optical neuroimaging technique, functional near-infrared spectroscopy (fNIRS), was used to determine whether a preoperative measure of brain activation could explain variability in CI outcomes and offer additional prognostic value beyond that provided by known clinical characteristics. Cross-modal activation to visual speech was measured in bilateral superior temporal cortex of profoundly deaf adults before cochlear implantation. Behavioural measures of auditory speech understanding were obtained in the same individuals following six months of CI use. The results showed that stronger preoperative cross-modal activation of auditory brain regions by visual speech was predictive of poorer auditory speech understanding after implantation. Further investigation suggested that this relationship may have been driven primarily by group differences between pre- and post-lingually deaf individuals. Nonetheless, preoperative cortical imaging provided additional prognostic value beyond that of influential clinical characteristics, including the age at onset and duration of auditory deprivation, suggesting that objectively assessing the physiological status of the brain using fNIRS imaging preoperatively may support more accurate prediction of individual CI outcomes. Whilst activation of auditory brain regions by visual speech prior to implantation was related to the CI user’s clinical history of deafness, activation to visual speech did not relate to the future ability of these brain regions to respond to auditory speech stimulation with a CI. Greater preoperative activation of left superior temporal cortex by visual speech was associated with enhanced speechreading abilities, suggesting that visual-speech processing may help to maintain left temporal-lobe specialisation for language processing during periods of profound deafness.
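    As a rough illustration of how a preoperative brain measure can be tested for prognostic value beyond clinical covariates, the sketch below compares the variance in a simulated outcome explained by clinical predictors alone against clinical predictors plus an fNIRS measure. All variable names and data here are hypothetical, and this simple hierarchical-regression comparison stands in for whichever analysis the study actually used.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 30
        age_at_onset = rng.uniform(0, 40, n)    # clinical covariate (years)
        deprivation = rng.uniform(1, 30, n)     # duration of deprivation (years)
        fnirs_crossmodal = rng.normal(0, 1, n)  # preoperative cross-modal activation
        # Simulated outcome: poorer speech scores with stronger cross-modal activation
        speech_score = 60 - 5 * fnirs_crossmodal + 0.3 * age_at_onset + rng.normal(0, 8, n)

        clinical = sm.add_constant(np.column_stack([age_at_onset, deprivation]))
        full = sm.add_constant(np.column_stack([age_at_onset, deprivation, fnirs_crossmodal]))

        r2_clinical = sm.OLS(speech_score, clinical).fit().rsquared
        r2_full = sm.OLS(speech_score, full).fit().rsquared
        print(f"Variance explained beyond clinical covariates: {r2_full - r2_clinical:.3f}")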

    Exploring listening-related fatigue in children with and without hearing loss using self-report and parent-proxy measures

    Children with hearing loss appear to experience greater fatigue than children with normal hearing (CNH). Listening-related fatigue is often associated with increased listening effort or difficulty in listening situations. This has been observed in children with bilateral hearing loss (CBHL) and, more recently, in children with unilateral hearing loss (CUHL). Available tools for measuring fatigue in children include generic fatigue questionnaires, such as the child self-report and parent-proxy versions of the PedsQL™ Multidimensional Fatigue Scale (MFS) and the PROMIS Fatigue Scale. Recently, the Vanderbilt Fatigue Scale (VFS-C: child self-report; VFS-P: parent-proxy report) was introduced with a specific focus on listening-related fatigue. The aims of this study were to compare fatigue levels experienced by CNH, CUHL and CBHL using both generic and listening-specific fatigue measures, and to compare outcomes from the child self-report and parent-proxy reports. Eighty children aged 6–16 years (32 CNH, 19 CUHL, 29 CBHL) and ninety-nine parents/guardians (39 parents of CNH, 23 parents of CUHL, 37 parents of CBHL) completed the above fatigue questionnaires online. Kruskal-Wallis H tests were performed to compare fatigue levels between the CNH, CUHL and CBHL groups. To determine the agreement between parent-proxy and child self-report measures, Bland-Altman 95% limits of agreement were calculated. All child self-report fatigue measures indicated that CBHL experience greater fatigue than CNH. Only the listening-specific tool (VFS-C) was sufficiently sensitive to show greater fatigue in CUHL than in CNH. Similarly, all parent-proxy measures of fatigue indicated that CBHL experience significantly greater fatigue than CNH. The VFS-P and the PROMIS Fatigue Parent-Proxy also showed greater fatigue in CUHL than in CNH. Agreement between the parent-proxy and child self-report measures was found for the PedsQL-MFS and the PROMIS Fatigue Scale. Our results suggest that CBHL experience greater levels of daily-life fatigue than CNH. CUHL also appear to experience more fatigue than CNH, and listening-specific measures of fatigue may be better able to detect this effect. Further research is needed to understand the bases of fatigue in these populations and to clarify whether the fatigue experienced by CBHL and CUHL is comparable in nature and degree.
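    Both statistical procedures named above are standard. A minimal sketch with simulated scores (all numbers hypothetical) might use scipy.stats.kruskal for the group comparison and compute the Bland-Altman 95% limits of agreement as the mean paired difference plus or minus 1.96 standard deviations:

        import numpy as np
        from scipy.stats import kruskal

        rng = np.random.default_rng(1)
        cnh = rng.normal(75, 10, 32)   # children with normal hearing
        cuhl = rng.normal(70, 10, 19)  # children with unilateral hearing loss
        cbhl = rng.normal(65, 10, 29)  # children with bilateral hearing loss

        # Kruskal-Wallis H test across the three groups
        h_stat, p_value = kruskal(cnh, cuhl, cbhl)
        print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

        # Bland-Altman agreement between paired child and parent-proxy scores
        child = rng.normal(70, 10, 25)
        parent = child + rng.normal(-2, 6, 25)
        diff = child - parent
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        print(f"bias = {bias:.1f}, 95% limits of agreement = "
              f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")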

    EEG activity evoked in preparation for multi-talker listening by adults and children

    Selective attention is critical for successful speech perception because speech is often encountered in the presence of other sounds, including the voices of competing talkers. Faced with the need to attend selectively, listeners perceive speech more accurately when they know characteristics of upcoming talkers before they begin to speak. However, the neural processes that underlie the preparation of selective attention for voices are not fully understood. The current experiments used electroencephalography (EEG) to investigate the time course of brain activity during preparation for an upcoming talker in young adults aged 18–27 years with normal hearing (Experiments 1 and 2) and in typically-developing children aged 7–13 years (Experiment 3). Participants reported key words spoken by a target talker while an opposite-gender distractor talker spoke simultaneously. The two talkers were presented from different spatial locations (±30° azimuth). Before the talkers began to speak, a visual cue indicated either the location (left/right) or the gender (male/female) of the target talker. In adults, the visual cue evoked preparatory EEG activity that started shortly (<50 ms) after the cue was presented and was sustained until the talkers began to speak. The location cue evoked similar preparatory activity in Experiments 1 and 2 with different samples of participants. The gender cue did not evoke preparatory activity when it predicted gender only (Experiment 1) but did evoke preparatory activity when it predicted the identity of a specific talker with greater certainty (Experiment 2). Location cues evoked significant preparatory EEG activity in children, but gender cues did not. The results provide converging evidence that listeners generate consistent preparatory brain activity when selecting a talker by their location (regardless of their gender or identity), but not by their gender alone.

    Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2000 ms

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time, or about the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1000, and 2000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker’s spatial location or their gender. Participants directed attention to location and gender simultaneously (‘objects’) at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2000 ms than when it was 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those reported previously in either the visual or the auditory modality.

    The Bases of Difficulties in Spatial Hearing for Speech: Investigations using Psychoacoustic Techniques and Magneto-encephalography

    The experiments reported in this thesis investigated the bases of the difficulties that older adults report when trying to listen to what one person is saying while many other people are speaking at the same time. Experiments 1–4 examined the roles of voluntary and involuntary attention in a spatial listening task for speech among young normally-hearing listeners. When talkers started speaking one at a time, listeners could hear out a target phrase that was less intense than overlapping masker phrases. When talkers started speaking in pairs, listeners could attend to a less intense target phrase only when told in advance who to listen for, where they would speak from, or when they would speak. The distracting effect of the onset of a competing talker operated over a broad time window. Experiment 5 investigated the relationships between performance on the spatial listening task and several predictors of performance among young and older normally-hearing adults. Poorer performance was related to self-reported difficulties with listening in everyday situations, poorer hearing sensitivity, and poorer performance on visual and auditory tasks of attention requiring fast processing speed. Experiment 6 used magneto-encephalography to examine brain activity associated with successful performance on the spatial listening task. Differences in cortical activity were identified at moments when attention had to be sustained on the target phrase, or when listeners had to resist distraction from the onset of a new masker phrase. The amplitudes and/or latencies of differences in brain activity arising in regions associated with attentional processes were related to performance. The results suggest that skills in attention contribute to the ability to listen successfully in multi-talker environments. Age-related difficulties with listening in those environments may arise from a specific reduction in the ability to resist distraction or a general reduction in the speed at which information can be processed.

    Spatiotemporal reconstruction of the auditory steady-state response to frequency modulation using magnetoencephalography

    The aim of this study was to investigate the mechanisms involved in the perception of perceptually salient frequency modulation (FM) using auditory steady-state responses (ASSRs) measured with magnetoencephalography (MEG). Previous MEG studies using frequency-modulated amplitude modulation as stimuli (Luo et al., 2006, 2007) suggested that a phase-modulation encoding mechanism exists for low (<5 Hz) FM modulation frequencies, but that additional amplitude-modulation encoding is required for faster FM modulation frequencies. In this study, single-cycle sinusoidal FM stimuli were used to generate the ASSR. The stimulus was either an unmodulated 1-kHz sinusoid or a 1-kHz sinusoid that was frequency-modulated at a repetition rate of 4, 8, or 12 Hz. The fast Fourier transform (FFT) of each MEG channel was calculated to obtain the phase and magnitude of the ASSR in sensor space, and multivariate Hotelling's T² statistics were used to determine the statistical significance of the ASSRs. MEG beamformer analyses were used to localise the ASSR sources, and virtual electrode analyses were used to reconstruct the time series at each source. FFTs of the virtual electrode time series were calculated to obtain the amplitude and phase characteristics of each source identified in the beamforming analyses, and multivariate Hotelling's T² statistics were again used to determine the statistical significance of these reconstructed ASSRs. The results suggest that the ability of auditory cortex to phase-lock to FM depends on the FM pulse rate and that the ASSR to FM is lateralised to the right hemisphere.
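    A common way to assess the statistical significance of an ASSR, consistent with the description above, is to treat the real and imaginary parts of the FFT bin at the modulation frequency as a bivariate sample across trials and apply a one-sample Hotelling's T² test against a zero mean (i.e. no phase-locked response). The sketch below assumes single-channel epoched data and standard NumPy/SciPy; the study's multivariate sensor-space statistics and beamformer source analyses are not reproduced here.

        import numpy as np
        from scipy.stats import f

        def assr_hotelling_t2(trials, fs, f_mod):
            """Hotelling's T^2 test for an ASSR at `f_mod` Hz.

            trials : array of shape (n_trials, n_samples), single-channel epochs.
            Returns (T^2, p), testing the bivariate (real, imaginary) FFT
            coefficients at f_mod against a zero mean."""
            n_trials, n_samples = trials.shape
            bin_idx = int(round(f_mod * n_samples / fs))  # FFT bin at f_mod
            coeffs = np.fft.rfft(trials, axis=1)[:, bin_idx]
            x = np.column_stack([coeffs.real, coeffs.imag])
            mean = x.mean(axis=0)
            cov = np.cov(x, rowvar=False)
            t2 = n_trials * mean @ np.linalg.solve(cov, mean)
            # Convert T^2 to an F statistic with (2, n_trials - 2) df
            f_stat = (n_trials - 2) / (2 * (n_trials - 1)) * t2
            return t2, f.sf(f_stat, 2, n_trials - 2)

        # Hypothetical example: a weak 4-Hz phase-locked response in noise
        rng = np.random.default_rng(2)
        fs, n_samples, n_trials = 600, 1200, 40
        t = np.arange(n_samples) / fs
        trials = 0.5 * np.sin(2 * np.pi * 4 * t) + rng.normal(0, 1, (n_trials, n_samples))
        print(assr_hotelling_t2(trials, fs, f_mod=4))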