
    The Effect of Frequency Resolution on Sentence Intelligibility and its Relevance to Cochlear Implant Design

    The purpose of this study is to understand how electrical stimulation (as opposed to acoustical stimulation) of the auditory nerve is used in cochlear implants. Speech is a complex signal that changes rapidly in the time and frequency domains. Since phonemes (the smallest units of speech that distinguish words) depend on nuanced differences in frequency patterns, it would be expected that a signal with drastically reduced frequency information would be of limited value for conveying speech. Such a frequency-poor signal is the object of investigation in the present work. It is also the basis of the way speech is represented in cochlear implants. How could sound in which most frequency information has been discarded be successfully used by so many thousands of individuals? There must be additional information in the signal, such as timing and amplitude cues, that is important for conveying speech. In addition, semantic context and visual information play a significant role in speech intelligibility. The goal of this thesis is to examine how this information aggregates into the perception of speech signals limited by poor frequency resolution, as in cochlear implants. To accomplish this goal, sentence lists were created with systematically varying levels of frequency resolution. Normal-hearing listeners were asked to identify the last word of each sentence presented to them at the different levels of frequency resolution. To examine the effect of context, half of the sentences ended with predictable words and half ended with unpredictable words. The intelligibility of predictable and unpredictable words was compared at six different frequency resolutions. For this study, we used the standard R-SPIN sentences because each list was constructed to be equally intelligible with each of the other lists. The overall pattern of results showed large effects of both predictability and frequency resolution. 
There was an interaction between these two main effects that will be discussed below. Advisors: Thomas Carrell and Ashok Sama
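The reduction of frequency resolution described above is commonly implemented as noise-band vocoding: the speech spectrum is split into a small number of bands, each band's amplitude envelope is extracted, and the envelopes modulate band-limited noise carriers. A minimal sketch follows; the function name, band edges, filter order, and envelope method are illustrative assumptions, not the thesis's exact processing chain.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt


def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Noise-band vocoder sketch: n_channels log-spaced bands between
    f_lo and f_hi (fs must exceed 2 * f_hi). Each band's amplitude
    envelope modulates noise filtered into the same band, discarding
    within-band spectral detail while preserving the envelope."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))      # amplitude envelope of this band
        carrier = sosfilt(sos, noise)    # noise restricted to the same band
        out += env * carrier
    return out
```

Varying `n_channels` (e.g. from 1 to 32) gives the systematically varying frequency resolutions the study describes.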

    Temporal fine structure processing, pitch and speech perception in cochlear implant recipients

    Cochlear implant (CI) recipients commonly report poor speech understanding in the presence of noise. Indeed, they generally show ceiling effects for understanding sentences presented in quiet, but their scores decrease drastically when they are tested in the presence of competing noise. One important aspect that contributes to speech perception skills, especially when listening in a fluctuating background, has been described as Temporal Fine Structure (TFS) processing. TFS cues are dominant in conveying Low Frequency (LF) information, linked in particular to the Fundamental Frequency (F0), which is crucial for linguistic and musical perception. A§E Harmonic Intonation (HI) and Disharmonic Intonation (DI) are tests of pitch perception in the LF domain, and their outcomes are believed to depend on the availability of TFS cues. Previous findings indicated that the DI test provided more differential LF pitch perception outcomes in that it reflected the phase locking and TFS processing capacities of the ear, whereas the HI test provided information on its place coding capacity as well. Previous HI/DI studies were mainly conducted in adult populations, showing abnormal pitch perception outcomes in CI recipients; there were little or no data in paediatric populations, and none relating HI/DI outcomes to speech perception in the presence of noise. One of the primary objectives of this thesis was to investigate LF pitch perception skills in a group of paediatric CI recipients in comparison to normal hearing (NH) children. Another objective was to introduce a new assessment tool, the Italian STARR test, which measures speech perception using a roving-level adaptive method in which the presentation level of both the speech and noise signals varies across sentences. 
The STARR test aims to better represent real-world listening conditions, in which background noise is usually present and speech intensity varies with the speaker's vocal effort and distance. The Italian STARR outcomes in NH adults were studied to produce normative data and to evaluate interlist variability and learning effects. Finally, LF pitch perception outcomes linked to the availability of TFS were investigated in a group of adult CI recipients, including bimodal users, in relation to speech perception, in particular Italian STARR outcomes. Although the majority of paediatric CI recipients showed abnormal outcomes for A§E, their scores were considerably better than those of the adult CI users. Age had a statistically significant effect on performance in both children and adults; younger children and older adults tended to show poorer performance. Similarly, adult CI recipients (even the better performers) showed abnormal STARR outcomes in comparison to NH subjects, and the group differences were statistically significant. The duration of profound deafness before implantation had a significant effect on STARR performance. On the other hand, the significant effect of CI thresholds re-emphasized the test's sensitivity to lower-level speech, which CI users frequently encounter in everyday life. Analysis revealed statistically significant correlations between HI/DI and STARR performance. Moreover, contralateral hearing aid users showed significant bimodal benefit on both the HI/DI and STARR tests. The overall findings confirmed the usefulness of evaluating both LF pitch and speech perception in order to track changes in TFS sensitivity for CI recipients over time and across different listening conditions, which might be provided by future technological advances, as well as to study individual differences.
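A roving-level adaptive track of the kind described can be sketched as a simple 1-up/1-down staircase whose overall presentation level roves across sentences while the signal-to-noise ratio adapts toward the 50%-correct point. The function names, step size, trial count, and the set of roved levels below are illustrative assumptions, not the STARR test's actual parameters.

```python
import random


def roving_adaptive_track(respond, n_trials=20, start_snr=10.0, step=2.0,
                          levels=(50.0, 65.0, 80.0)):
    """Sketch of a roving-level adaptive speech-in-noise track.
    respond(snr, level) -> True when the (simulated) listener reports
    the sentence correctly. The overall level roves randomly per trial;
    the SNR steps down after a correct response and up after an error,
    converging on the 50% point of a symmetric psychometric function."""
    snr, history = start_snr, []
    for _ in range(n_trials):
        level = random.choice(levels)          # rove the presentation level
        correct = respond(snr, level)
        history.append((level, snr, correct))
        snr += -step if correct else step      # harder if correct, easier if not
    # Crude threshold estimate: mean SNR over the second half of the track
    tail = [s for _, s, _ in history[n_trials // 2:]]
    return sum(tail) / len(tail), history
```

Real adaptive procedures typically estimate the speech reception threshold from reversal points rather than a raw tail average; the simpler estimator here keeps the sketch short.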

    Neuromorphic hardware for somatosensory neuroprostheses

    In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that interface directly with the nervous system. However, restoring the natural tactile experience through electrical neural stimulation requires complex encoding strategies, and current strategies are limited by bandwidth constraints in effectively conveying or restoring tactile sensations. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch, potentially informing neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies.

    The temporal pattern of impulses in primary afferents analogously encodes touch and hearing information

    An open question in neuroscience is the contribution of temporal relations between individual impulses in primary afferents to conveying sensory information. We investigated this question in touch and hearing, while looking for any shared coding scheme. In both systems, we artificially induced temporally diverse afferent impulse trains and probed the evoked perceptions in human subjects using psychophysical techniques. First, we investigated whether the temporal structure of a fixed number of impulses conveys information about the magnitude of tactile intensity. We found that clustering the impulses into periodic bursts elicited graded increases of intensity as a function of burst impulse count, even though fewer afferents were recruited throughout the longer bursts. The interval between successive bursts of peripheral neural activity (the burst-gap) has been demonstrated in our lab to be the most prominent temporal feature for coding skin vibration frequency, as opposed to either spike rate or periodicity. Second, given the similarities between the tactile and auditory systems, we explored the auditory system for an equivalent neural coding strategy. By using brief acoustic pulses, we showed that the burst-gap is a shared temporal code for pitch perception between the modalities. Following this evidence of parallels in temporal frequency processing, we next assessed the perceptual frequency equivalence between the two modalities using auditory and tactile pulse stimuli of simple and complex temporal features in cross-sensory frequency discrimination experiments. Identical temporal stimulation patterns in tactile and auditory afferents produced equivalent perceived frequencies, suggesting an analogous temporal frequency computation mechanism. 
The new insights into encoding tactile intensity through clustering of fixed-charge electric pulses into bursts suggest a novel approach to conveying varying contact forces to neural interface users, requiring no modulation of either stimulation current or base pulse frequency. Increasing control of the temporal patterning of pulses in cochlear implant users might improve pitch perception and speech comprehension. The perceptual correspondence between touch and hearing not only suggests the possibility of establishing cross-modal comparison standards for robust psychophysical investigations, but also supports the plausibility of cross-sensory substitution devices.
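The burst-clustered stimulation patterns described above can be sketched as a pulse-train generator in which bursts repeat periodically and the burst-gap (the silent interval between the last pulse of one burst and the first pulse of the next) falls out of the burst period, the within-burst pulse spacing, and the pulse count. The function name and parameter choices are illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np


def burst_train(fs, duration, burst_rate, pulses_per_burst, pulse_gap):
    """Sketch: a pulse train clustered into periodic bursts.
    Bursts repeat at burst_rate (Hz); each contains pulses_per_burst
    pulses spaced pulse_gap seconds apart. The burst-gap is then
    1/burst_rate - (pulses_per_burst - 1) * pulse_gap seconds."""
    n = int(fs * duration)
    train = np.zeros(n)
    for b in range(int(duration * burst_rate)):
        t0 = b / burst_rate                               # burst onset
        for k in range(pulses_per_burst):
            idx = int(round((t0 + k * pulse_gap) * fs))   # k-th pulse sample
            if idx < n:
                train[idx] = 1.0
    return train
```

Holding the burst-gap fixed while varying `pulses_per_burst` is one way to probe intensity coding with a constant perceived frequency, per the coding scheme the abstract describes.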

    Communication Biophysics

    Contains reports on six research projects.
    National Institutes of Health (Grant 5 PO1 NS13126)
    National Institutes of Health (Grant 5 RO1 NS18682)
    National Institutes of Health (Grant 5 RO1 NS20322)
    National Institutes of Health (Grant 5 R01 NS20269)
    National Institutes of Health (Grant 5 T32NS 07047)
    Symbion, Inc.
    National Science Foundation (Grant BNS 83-19874)
    National Science Foundation (Grant BNS 83-19887)
    National Institutes of Health (Grant 6 RO1 NS 12846)
    National Institutes of Health (Grant 1 RO1 NS 21322)

    Neurocomputing systems for auditory processing

    This thesis studies neural computation models and neuromorphic implementations of the auditory pathway, with applications to cochlear implants and artificial auditory sensory and processing systems. Very low power analogue computation is addressed through the design of micropower analogue building blocks and an auditory preprocessing module targeted at cochlear implants. The analogue building blocks have been fabricated and tested in a standard Complementary Metal Oxide Semiconductor (CMOS) process. The auditory preprocessing module design is based on the signal processing mechanisms of the cochlea and low power microelectronic design methodologies. Compared to existing preprocessing techniques used in cochlear implants, the proposed design has a wider dynamic range and lower power consumption. Furthermore, it provides the phase coding as well as the place coding information that is necessary for enhanced functionality in future cochlear implants. The thesis presents neural computation based approaches to a number of signal-processing problems encountered in cochlear implants. Techniques that can improve the performance of existing devices are also presented. Neural network based models for loudness mapping and pattern recognition based channel selection strategies are described. Compared with state-of-the-art commercial cochlear implants, the thesis results show that the proposed channel selection model produces superior speech sound qualities, and the proposed loudness mapping model consumes substantially smaller amounts of memory. Aside from the applications in cochlear implants, this thesis describes a biologically plausible computational model of the auditory pathways to the superior colliculus based on current neurophysiological findings. The model encapsulates interaural time difference, interaural spectral difference, the monaural pathway, and auditory space map tuning in the inferior colliculus. 
A biologically plausible Hebbian-like learning rule is proposed for auditory space neural map tuning, and a reinforcement learning method is used for map alignment with other sensory space maps through activity independent cues. The validity of the proposed auditory pathway model has been verified by simulation using synthetic data. Further, a complete biologically inspired auditory simulation system is implemented in software. The system incorporates models of the external ear, the cochlea, as well as the proposed auditory pathway model. The proposed implementation can mimic the biological auditory sensory system to generate an auditory space map from 3-D sounds. A large set of real 3-D sound signals, including broadband white noise, clicks, and speech, is used in the simulation experiments. The effect of auditory space map developmental plasticity is examined by simulating early auditory space map formation and auditory space map alignment with a distorted visual sensory map. Detailed simulation methods, procedures and results are presented.
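A Hebbian-like map-tuning rule of the general kind mentioned above can be sketched as a correlation-driven weight update with multiplicative normalization to keep each unit's incoming weights bounded. The normalization step and learning rate here are common stabilizing assumptions, not necessarily the thesis's exact rule.

```python
import numpy as np


def hebbian_map_update(weights, pre, post, lr=0.01):
    """Sketch of a Hebbian-like update for sensory map tuning:
    weights[i, j] connects input unit j to map unit i and grows in
    proportion to the correlation of pre- and post-synaptic activity.
    Each row is renormalized to sum to 1 so that repeated updates
    sharpen tuning instead of growing without bound (an assumption)."""
    weights = weights + lr * np.outer(post, pre)    # Hebbian correlation term
    weights /= weights.sum(axis=1, keepdims=True)   # multiplicative normalization
    return weights
```

Repeated co-activation of a particular input and map unit thus shifts that map unit's tuning toward the input, at the expense of its other inputs.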

    Benefits to speech perception in noise from the binaural integration of electric and acoustic signals in simulated unilateral deafness

    Objectives: This study used vocoder simulations with normal-hearing (NH) listeners to (a) measure their ability to integrate speech information from a NH ear and a simulated cochlear implant (CI); and (b) investigate whether binaural integration is disrupted by a mismatch in the delivery of spectral information between the ears arising from a misalignment in the mapping of frequency to place. Design: Eight NH volunteers participated in the study and listened to sentences embedded in background noise via headphones. Stimuli presented to the left ear were unprocessed. Stimuli presented to the right ear (referred to as the CI-simulation ear) were processed using an 8-channel noise vocoder with one of three processing strategies. An Ideal strategy simulated a frequency-to-place map across all channels that matched the delivery of spectral information between the ears. A Realistic strategy created a misalignment in the mapping of frequency to place in the CI-simulation ear where the size of the mismatch between the ears varied across channels. Finally, a Shifted strategy imposed a similar degree of misalignment in all channels resulting in consistent mismatch between the ears across frequency. The ability to report key words in sentences was assessed under monaural and binaural listening conditions and at signal-to-noise ratios (SNRs) established by estimating speech-reception thresholds in each ear alone. The SNRs ensured that the monaural performance of the left ear never exceeded that of the CI-simulation ear. Binaural integration advantages were calculated by comparing binaural performance with monaural performance using the CI-simulation ear alone. Thus, these advantages reflected the additional use of the experimentally-constrained left ear and were not attributable to better-ear listening. Results: Binaural performance was as accurate as, or more accurate than, monaural performance with the CI-simulation ear alone. 
When both ears supported a similar level of monaural performance (50%), binaural integration advantages were found regardless of whether a mismatch was simulated or not. When the CI-simulation ear supported a superior level of monaural performance (71%), evidence of binaural integration was absent when a mismatch was simulated using both the Realistic and Ideal processing strategies. This absence of integration could not be accounted for by ceiling effects or by changes in SNR. Conclusions: If generalizable to unilaterally-deaf CI users, the results of the current simulation study would suggest that benefits to speech perception in noise can be obtained by integrating information from an implanted ear and a normal-hearing ear. A mismatch in the delivery of spectral information between the ears due to a misalignment in the mapping of frequency to place may disrupt binaural integration in situations where both ears cannot support a similar level of monaural speech understanding. Previous studies that measured the speech perception of unilaterally-deaf individuals after cochlear implantation but with non-individualized frequency-to-electrode allocations may therefore have underestimated the potential benefits of providing binaural hearing. However, it remains unclear whether the size and nature of the potential incremental benefits from individualized allocations are sufficient to justify the time and resources required to derive them based on cochlear imaging or pitch-matching tasks.
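The frequency-to-place misalignments manipulated above are conventionally modeled with Greenwood's cochlear place-frequency function: analysis-band edges are converted to place along the cochlea, displaced by a fixed distance, and converted back to frequency for the synthesis (carrier) bands. This is a sketch of that convention, not the study's exact implementation; the Greenwood constants (A = 165.4, a = 2.1, k = 0.88, length 35 mm) are standard published values for the human cochlea.

```python
import numpy as np


def shifted_carrier_bands(analysis_edges_hz, shift_mm, length_mm=35.0):
    """Sketch of a 'Shifted'-style manipulation: map frequency band
    edges to cochlear place with Greenwood's function
    f = A * (10 ** (a * x / L) - k), displace every band by a constant
    basal distance shift_mm, and map back to frequency."""
    A, a, k = 165.4, 2.1, 0.88
    f = np.asarray(analysis_edges_hz, dtype=float)
    x = length_mm / a * np.log10(f / A + k)     # frequency -> place (mm from apex)
    x_shifted = x + shift_mm                    # constant shift toward the base
    return A * (10 ** (a * x_shifted / length_mm) - k)
```

A shift of 0 mm reproduces the matched ("Ideal") condition, while a positive shift raises every carrier band in frequency, creating the consistent between-ear mismatch of the shifted condition.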

    Connecting the Brain to Itself through an Emulation.

    Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue, as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early-stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causally alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.