
    Data_Sheet_1_Selective attention decoding in bimodal cochlear implant users.pdf

    The growing group of cochlear implant (CI) users includes subjects with preserved acoustic hearing on the side opposite the CI. Using both listening sides results in improved speech perception compared to listening with one side alone. However, large variability in the measured benefit is observed. This variability may be associated with the integration of speech across the electric and acoustic stimulation modalities. However, there is a lack of established methods to assess speech integration between electric and acoustic stimulation and, consequently, to adequately program the devices. Moreover, existing methods either provide no information about the underlying physiological mechanisms of this integration or are based on simple stimuli that are difficult to relate to speech integration. Electroencephalography (EEG) recorded in response to continuous speech is promising as an objective measure of speech perception; however, its application in CI users is challenging because it is influenced by the electrical artifact introduced by these devices. For this reason, the main goal of this work was to investigate a possible electrophysiological measure of speech integration between electric and acoustic stimulation in bimodal CI users. To this end, a selective attention decoding paradigm was designed and validated in bimodal CI users. The study included behavioral and electrophysiological measures. The behavioral measure consisted of a speech understanding test in which subjects repeated words from a target speaker in the presence of a competing voice, listening with the CI side (CIS) only, the acoustic side (AS) only, or both listening sides (CIS+AS). Electrophysiological measures included cortical auditory evoked potentials (CAEPs) and selective attention decoding through EEG. CAEPs were recorded to broadband stimuli to confirm the feasibility of recording cortical responses in the CIS only, AS only, and CIS+AS listening modes.
In the selective attention decoding paradigm, a co-located target and a competing speech stream were presented to the subjects in the three listening modes (CIS only, AS only, and CIS+AS). The main hypothesis of the current study is that selective attention can be decoded in CI users despite the presence of the CI electrical artifact. The hypothesis is confirmed if selective attention decoding improves when combining electric and acoustic stimulation relative to electric stimulation alone. No significant difference in behavioral speech understanding performance was found between the CIS+AS and AS only listening modes, mainly due to the ceiling effect observed with these two modes. The main finding of the current study is that selective attention can be decoded in CI users even in the presence of continuous CI artifact. Moreover, an amplitude reduction of the forward temporal response function (TRF) obtained from selective attention decoding was observed when listening with CIS+AS compared to AS only. Further studies are required to validate selective attention decoding as an electrophysiological measure of electric-acoustic speech integration.
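The backward (stimulus-reconstruction) approach commonly used for this kind of selective attention decoding can be sketched as follows. All data below are synthetic and purely illustrative: the channel count, mixing weights, noise level, and ridge parameter are assumptions, not the study's actual recording setup or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, k=15):
    # Crude low-pass filtering to mimic a slowly varying speech envelope.
    return np.convolve(x, np.ones(k) / k, mode="same")

# Two competing speech envelopes; the EEG weakly tracks the attended one.
n_samples, n_channels = 2000, 64
env_a = smooth(rng.standard_normal(n_samples))  # attended stream
env_b = smooth(rng.standard_normal(n_samples))  # ignored stream
weights = rng.standard_normal(n_channels)       # per-channel mixing weights
eeg = np.outer(env_a, weights) + 2.0 * rng.standard_normal((n_samples, n_channels))

def ridge_fit(X, y, lam=1e2):
    # Backward model: ridge regression from EEG channels to the envelope.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Train on the first half of the data, decode on the held-out second half.
half = n_samples // 2
g = ridge_fit(eeg[:half], env_a[:half])
recon = eeg[half:] @ g

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# The stream whose envelope correlates better with the reconstruction
# is classified as the attended one.
r_a = corr(recon, env_a[half:])
r_b = corr(recon, env_b[half:])
attended = "A" if r_a > r_b else "B"
```

With real bimodal-CI data the reconstruction would additionally have to contend with the device artifact mentioned above; this sketch only shows the decoding logic itself.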

    Correlation scatter plots of the raw experimental data of all 14 CI users (with subject IDs) versus measured SRTs.

    Panel (a): average FWHM of the electrical field spatial spread; (b): “auditory performance” determined from anamnesis data using the phenomenological model of [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0193842#pone.0193842.ref016" target="_blank">16</a>] and [<a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0193842#pone.0193842.ref015" target="_blank">15</a>]; (c): text-reception-threshold data (% text coverage); (d): SRT predictions of the generalized linear model (GLM).

    The effects of electrical field spatial spread and some cognitive factors on speech-in-noise performance of individual cochlear implant users—A computer model study

    The relation of individual speech-in-noise performance differences in cochlear implant (CI) users to underlying physiological factors is currently poorly understood. This study approached the question through a step-wise individualization of a computer model of speech intelligibility that mimics the details of CI signal processing and some of the physiology present in CI users. Two factors were incorporated: the electrical field spatial spread and internal noise (as a coarse model of individual cognitive performance). Internal representations of speech-in-noise mixtures calculated by the model were classified using an automatic speech recognizer backend employing Hidden Markov Models with Gaussian probability distributions. One-dimensional electric field spatial spread functions were inferred from electrical field imaging data of 14 CI users. The model made the simplifying assumptions of homogeneously distributed auditory nerve fibers along the cochlear array and an equal distance between the electrode array and the nerve tissue. Internal noise, whose standard deviation was adjusted based on anamnesis data, text-reception-threshold data, or a combination thereof, was applied to the internal representations before classification. A systematic model evaluation showed that predicted speech-reception thresholds (SRTs) in stationary noise improved (decreased) with decreasing internal noise standard deviation and with narrower electric field spatial spreads. The model version individualized to actual listeners using internal noise alone (with average spatial spread) showed significant correlations with measured SRTs, reflecting the high correlation of the text-reception-threshold data with SRTs. However, neither individualization to spatial spread functions alone nor a combined individualization based on spatial spread functions and internal noise standard deviation produced significant correlations with measured SRTs.
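As a toy illustration of how an SRT is read off such a model, one can bisect a psychometric function for its 50%-correct SNR. The logistic shape and the shift coefficients for the internal noise standard deviation and the spatial spread constant below are invented for illustration only; they are not the study's fitted model.

```python
import numpy as np

def intelligibility(snr_db, sigma_int, lam_mm):
    # Hypothetical psychometric function: larger internal noise or wider
    # spatial spread shifts the 50% point toward higher SNRs.
    # The coefficients -4.0, 20.0, and 0.5 are made-up illustration values.
    midpoint = -4.0 + 20.0 * sigma_int + 0.5 * lam_mm
    return 1.0 / (1.0 + np.exp(-(snr_db - midpoint)))

def srt(sigma_int, lam_mm, lo=-20.0, hi=20.0):
    # Bisection for the SNR at which intelligibility crosses 50% (the SRT).
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if intelligibility(mid, sigma_int, lam_mm) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Under these assumed coefficients, the sketch reproduces the qualitative trend reported above: the SRT decreases (improves) with a smaller internal noise standard deviation and with a narrower spatial spread.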

    Prediction of SRTs with the speech intelligibility model (a) as a function of electrical field spatial spread with constant internal noise standard deviation σ<sub>int</sub> = 0.19 and (b) as a function of σ<sub>int</sub> with constant electrical field spatial spread (λ = 9 mm).


    Sketch of the physiologically-inspired computer model used for the speech intelligibility predictions.

    The FADE speech recognizer serves as the backend, whereas the other blocks up to “internal representation” serve as the model front-end. “Internal noise” is multiplied independently onto each place-time bin of the internal representation prior to entering the FADE speech recognizer.
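One simple reading of this multiplicative internal-noise step is sketched below. The place-by-time dimensions and the log-normal noise distribution are assumptions made purely for illustration; the excerpt does not specify the actual distribution or representation size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical internal representation: place (electrode) x time bins.
n_places, n_frames = 22, 100
internal_rep = np.abs(rng.standard_normal((n_places, n_frames)))

# Internal noise drawn independently for each place-time bin and applied
# multiplicatively; sigma_int plays the role of the model's free
# cognitive-performance parameter (0.19 is one value quoted in a caption).
sigma_int = 0.19
noise = rng.lognormal(mean=0.0, sigma=sigma_int, size=internal_rep.shape)
noisy_rep = internal_rep * noise
```

The degraded representation `noisy_rep` would then be passed to the recognizer backend in place of the clean one.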

    Demographic information about the participants of this study.


    Difference in HSM performance between the first and the second session.

    A repeated-measures ANOVA with the within-subject factors strategy (F120, Phantom) and time (Session 1, Session 2) revealed a significant time × strategy interaction [F(1.00) = 6.476; p = 0.029].
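For a 2 × 2 within-subjects design like this one, the interaction F with one numerator degree of freedom equals the squared paired t-statistic on the difference of differences, which gives a compact way to check such a result. The scores below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical sentence-test scores (% correct) for 10 subjects,
# 2 strategies (F120, Phantom) x 2 sessions; all numbers are synthetic.
n = 10
f120_s1 = rng.normal(60.0, 8.0, n)
f120_s2 = f120_s1 + rng.normal(2.0, 3.0, n)
phantom_s1 = rng.normal(58.0, 8.0, n)
phantom_s2 = phantom_s1 + rng.normal(8.0, 3.0, n)

# Strategy x time interaction as a one-sample t-test on the
# per-subject difference of differences; F(1, n-1) = t**2.
dod = (phantom_s2 - phantom_s1) - (f120_s2 - f120_s1)
t, p = stats.ttest_1samp(dod, 0.0)
F = t ** 2
```

This equivalence only holds for the 1-df interaction of a 2 × 2 design; larger designs need a full repeated-measures ANOVA with sphericity handling.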