390 research outputs found

    Auditory stream segregation of amplitude-modulated narrowband noise in cochlear implant users and individuals with normal hearing

    Voluntary stream segregation was investigated in cochlear implant (CI) users and normal-hearing (NH) listeners using an objective, segregation-promoting approach that evaluated the role of spectral and amplitude-modulation (AM) rate separations in stream segregation and its build-up. Sequences of 9 or 3 pairs of A and B narrowband noise (NBN) bursts were presented, differing in the center frequency of the noise band, the AM rate, or both. In some sequences (delayed sequences), the last B burst was delayed by 35 ms from its otherwise-regular temporal position; in the others (no-delay sequences), the last B burst was temporally advanced by 0 to 10 ms. A single-interval yes/no procedure was used to measure participants’ sensitivity (d′) in distinguishing delayed from no-delay sequences, with higher d′ values indicating a greater ability to segregate the A and B subsequences. For NH listeners, performance improved with increasing spectral separation, whereas for CI users performance was significantly better only in the condition with the largest spectral separation. For both groups, performance was significantly poorer with the largest AM-rate separation than with no AM-rate separation. A significant effect of sequence duration in both groups indicated that listeners improved more with longer stimulus sequences, supporting the build-up effect. These results suggest that CI users are less able than NH listeners to segregate NBN bursts into different auditory streams when the bursts are moderately separated in the spectral domain. Contrary to our hypothesis, AM-rate separation may interfere with the segregation of NBN streams. Our results also add evidence that CI users build up stream segregation at a rate comparable to NH listeners when the inter-stream spectral separations are sufficiently large.
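The sensitivity index from such a single-interval yes/no task follows the standard signal-detection formula d′ = z(hit rate) − z(false-alarm rate). A minimal sketch (illustrative only; the function name and the log-linear correction are our own choices, not the study's analysis code):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the z-scores
    finite when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

With equal hit and false-alarm rates this yields d′ = 0; better discrimination of delayed from no-delay sequences pushes d′ upward.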

    Neural Correlates of Binaural Interaction Using Aggregate-System Stimulation in Cochlear Implantees

    The importance of binaural cues for auditory stream formation and sound source differentiation is widely accepted. When one ear is treated with a cochlear implant (CI), the peripheral auditory system is partially replaced and processing delays are potentially added, altering important interaural time encoding. This is a crucial problem because factors such as the interaural time delay between the receiving ears are known to underlie binaural cues, e.g., for sound source localization and separation. However, these effects are not fully understood, and systematic binaural fitting strategies aimed at optimal binaural fusion are still lacking. To gain new insights into such alterations, we propose a novel method for analyzing free-field evoked auditory brainstem responses (ABRs) in CI users. This method does not bypass the intrinsic delays introduced by the hearing device and leaves the complete electrode array active, thus providing the most natural mode of stimulation and enabling comparable testing with real-world stimuli. Unfortunately, ABRs acquired in CI users are additionally contaminated by the prominent artifact caused by the electrical stimulation, which severely distorts the desired neural response and challenges its analysis. To circumvent this problem, we further introduce a novel narrowband-filtering CI artifact removal technique capable of recovering neural correlates of ABRs in CI users. With this approach, we compared brainstem-level responses collected from 12 CI users and 12 normal-hearing listeners using two different stimuli (chirp and click) at four intensities each, an adaptation of the well-established brainstem evoked response audiometry that served as an additional evaluation criterion.
We analyzed the responses using the average of 2,000 trials in combination with synchronized regularities across them, and found consistent results in their deflections and latencies, as well as in single-trial relationships, in both groups. This method provides a novel and unique perspective on CI users’ natural brainstem-level responses and may prove useful in future research on binaural interaction and fusion. Furthermore, the binaural interaction component (BIC), i.e., the arithmetic difference between the sum of the two monaurally evoked ABRs and the binaurally evoked ABR, has previously been shown to be an objective indicator of binaural interaction. This component is unfortunately rather fragile, and as a result a reliable, objective measure of binaural interaction in CI users does not exist to date. Implantees would most likely benefit from a reliable analysis of brainstem-level and subsequent higher-level binaural interaction, since this could objectively support fitting strategies aimed at maximizing interaural integration. We therefore introduce a novel method for obtaining neural correlates of binaural interaction in bimodal CI users by combining recent advances in fast, deconvolution-based ABR acquisition with the narrowband filtering technique introduced above. The proposed method shows a significant improvement in the magnitude of the resulting BICs in 10 bimodal CI users and a control group of 10 normal-hearing subjects when the interaural latency difference caused by the technical devices is compensated. Together, the two studies objectively demonstrate technically driven interaural latency mismatches, and they strongly emphasize the potential benefit of balancing these interaural delays to improve binaural processing, as reflected in significant increases in the associated neural correlates of binaural interaction.
These results, together with the estimated latency differences, should be investigated in larger groups to further consolidate the findings, but they confirm the need for binaural treatment approaches rather than treating hearing loss in an isolated, monaural manner.
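The BIC defined above is a simple arithmetic difference of trial-averaged waveforms. A hedged sketch of that computation (the array names and the device-delay compensation parameter are illustrative, and the sign convention for the BIC varies across studies):

```python
import numpy as np

def bic(abr_left, abr_right, abr_binaural, delay_samples=0):
    """Binaural interaction component of averaged ABR waveforms.

    Computed sample-by-sample on a common time axis as
    binaural ABR - (left + right monaural ABRs). `delay_samples`
    shifts the left-ear response to compensate a device-induced
    interaural latency difference before summation (illustrative
    only, not a fitting rule).
    """
    left = np.roll(np.asarray(abr_left, dtype=float), delay_samples)
    right = np.asarray(abr_right, dtype=float)
    return np.asarray(abr_binaural, dtype=float) - (left + right)
```

When the binaural response is exactly the sum of the monaural responses, the BIC is zero everywhere; any residual deflection is taken as evidence of binaural interaction.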

    Selective attention and speech processing in the cortex

    In noisy and complex environments, human listeners must segregate the mixture of sound sources arriving at their ears and selectively attend to a single source, thereby solving a computationally difficult problem known as the cocktail party problem. However, the neural mechanisms underlying these computations remain largely a mystery. Oscillatory synchronization of neuronal activity between cortical areas is thought to play a crucial role in facilitating information transmission between spatially separated populations of neurons, enabling the formation of functional networks. In this thesis, we analyze and model the functional neuronal networks underlying attention to speech stimuli and find that the Frontal Eye Fields play a central 'hub' role in the auditory spatial attention network in a cocktail party experiment. We use magnetoencephalography (MEG) to measure neural signals with high temporal precision while sampling from the whole cortex. However, several methodological issues arise when undertaking functional connectivity analysis with MEG data; in particular, volume conduction of electrical and magnetic fields in the brain complicates the interpretation of results. We compare several approaches through simulations, analyze the trade-offs among various measures of neural phase-locking in the presence of volume conduction, and use these insights to study functional networks in a cocktail party experiment. We then construct a linear dynamical system model of neural responses to ongoing speech. Using this model, we are able to correctly predict which of two speakers a listener is attending to. We then apply this model to data from a task in which people attended to stories accompanied by synchronous or scrambled videos of the speakers' faces, to explore how the presence of visual information modifies the underlying neuronal mechanisms of speech perception.
This model allows us to probe neural processes as subjects listen to long stimuli, without the need for a trial-based experimental design. We model the neural activity with latent states, capturing the neural noise spectrum and functional connectivity with multivariate autoregressive dynamics, along with impulse responses for external stimulus processing. We also develop a new regularized Expectation-Maximization (EM) algorithm to fit this model to electroencephalography (EEG) data.
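Predicting the attended speaker from ongoing neural data is commonly done with a linear forward model mapping each speaker's speech envelope to the recorded response, then comparing prediction correlations. The sketch below is a generic ridge-regression version of that idea, not the thesis's latent-state EM model; all names are illustrative:

```python
import numpy as np

def lagged(x, n_lags):
    """Design matrix of time-lagged copies of a stimulus envelope."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

def fit_trf(envelope, response, n_lags=16, ridge=1.0):
    """Ridge-regularized linear forward model (envelope -> neural response)."""
    X = lagged(envelope, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

def attended_speaker(w, env_a, env_b, response, n_lags=16):
    """Pick the speaker whose predicted response correlates best with the data."""
    r = [np.corrcoef(lagged(e, n_lags) @ w, response)[0, 1]
         for e in (env_a, env_b)]
    return "A" if r[0] > r[1] else "B"
```

Fitting on data generated from speaker A's envelope and then scoring both envelopes should select "A"; the latent-state model in the thesis plays an analogous role but additionally captures neural dynamics and noise structure.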

    Listening in the mix: lead vocals robustly attract auditory attention in popular music

    Listeners can attend to and track instruments or singing voices in complex musical mixtures, even though the acoustic energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by sound mixtures from a variety of instruments, such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, detection performance generally depended on the target’s instrument category, and listeners were more accurate when the target was presented before the mixture rather than after it. Lead vocals were nearly unaffected by this change in presentation order and achieved the highest accuracy of all instruments, suggesting a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of instrument-specific differences between presentation orders was observed.
In Experiment 3, adjusting the sound level differences between the targets reduced the effect of presentation order but did not affect the differences between instruments. While both acoustic manipulations facilitated the detection of targets, vocal signals remained particularly salient, which suggests that the manipulated features did not contribute to vocal salience. These findings demonstrate that lead vocals serve as robust attractors of auditory attention, regardless of the manipulation of low-level acoustic cues.

    Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing
