
    Application of Digital Signal Processor for the Nucleus 22 Channel Cochlear Implant System

    The Nucleus 22 channel cochlear implant system extracts features with an analog electric circuit. We replaced the analog processing with digital processing and devised an acoustic simulator to evaluate the system. Our system consists of three parts: a DSP (Digital Signal Processor) board, a BCG (Burst Code Generator) and an acoustic simulator. The DSP board not only replaces the analog circuit with a TMS32010 digital signal processor but also opens up many possibilities for more advanced processing algorithms. The BCG realizes a fully compatible interface with the conventional implant system, so the implanted receiver-electrode units can be controlled arbitrarily from the DSP. The acoustic simulator represents the percept experienced by a subject wearing the implant system by exciting a resonator at each channel's characteristic frequency with that channel's stimulus pulses. The design of our system is described in this paper.
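
    To make the acoustic-simulator idea concrete, here is a minimal sketch in which each stimulation channel drives a two-pole resonator tuned to an assumed characteristic frequency and the outputs are summed into an audible waveform. The sampling rate, channel frequencies, pulse timing and resonator bandwidth are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

FS = 16000  # assumed sampling rate in Hz

def resonator(pulses, f0, fs=FS, r=0.995):
    """Two-pole resonator at centre frequency f0, excited by a pulse train."""
    w0 = 2 * np.pi * f0 / fs
    a1, a2 = 2 * r * np.cos(w0), -r * r
    y1 = y2 = 0.0
    out = np.zeros(len(pulses))
    for n, x in enumerate(pulses):
        y = x + a1 * y1 + a2 * y2   # ringing response to each stimulus pulse
        out[n] = y
        y1, y2 = y, y1
    return out

def acoustic_simulation(channel_pulses, centre_freqs):
    """Sum the per-channel resonator outputs into one audible waveform."""
    return sum(resonator(p, f) for p, f in zip(channel_pulses, centre_freqs))

# Toy example: four channels, each driven by a sparse pulse train whose
# amplitude would, in the real system, follow the per-channel stimulus level.
freqs = [500, 1000, 2000, 4000]
pulses = [np.zeros(FS) for _ in freqs]
for k, p in enumerate(pulses):
    p[::200] = 0.5 + 0.1 * k        # arbitrary pulse timing and level
audio = acoustic_simulation(pulses, freqs)
```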

    Feedback Analysis in Percutaneous Bone-Conduction Device and Bone-Conduction Implant on a Dry Cranium

    Hypothesis: The bone-conduction implant (BCI) can use a higher gain setting without feedback problems compared with a percutaneous bone-conduction device (PBCD). Background: The conventional PBCD is today a common treatment for patients with conductive hearing loss and single-sided deafness. However, minor drawbacks have been reported related to the percutaneous implant, specifically poor high-frequency gain. The BCI system is designed as an alternative to the percutaneous system because it leaves the skin intact and is less prone to feedback oscillation, thus allowing more high-frequency gain. Methods: Loop gains of the Baha Classic 300 and the BCI were measured in the frequency range of 100 to 10,000 Hz attached to a skull simulator and a dry cranium. Both the Baha and the BCI positions were investigated. The devices were adjusted to full-on gain. Results: The gain headroom using the BCI was generally 0 to 10 dB better at higher frequencies than using the Baha for a given mechanical output. More specifically, if the mechanical outputs of the devices were normalized at the cochlear level, the improvement in gain headroom with the BCI versus the Baha was in the range of 10 to 30 dB. Conclusion: Using a BCI, a significantly higher gain setting can be used without feedback problems compared with using a PBCD.
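
    As a rough illustration of the gain-headroom comparison, the sketch below treats headroom at each frequency as the extra gain (in dB) that can be added before the measured loop gain reaches 0 dB, the point at which feedback oscillation starts. The frequencies and loop-gain values are made-up placeholders, not the measured Baha or BCI data.

```python
import numpy as np

freqs_hz = np.array([250, 500, 1000, 2000, 4000, 8000])
loop_gain_db = np.array([-25.0, -20.0, -15.0, -12.0, -8.0, -10.0])  # hypothetical

headroom_db = -loop_gain_db      # extra gain until |loop gain| reaches 0 dB
worst_case = headroom_db.min()   # oscillation starts at the weakest frequency

print(dict(zip(freqs_hz.tolist(), headroom_db.tolist())))
print(f"usable extra gain before feedback: {worst_case:.1f} dB")
```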

    Improvement of Speech Perception for Hearing-Impaired Listeners

    Hearing impairment is becoming a prevalent health problem, affecting 5% of the world's adult population. Hearing aids and cochlear implants have played an essential role in helping patients for decades, but several open problems still prevent them from providing the maximum benefit. For financial and comfort reasons, only one in four patients chooses to use hearing aids, and cochlear implant users still have trouble understanding speech in noisy environments. In this dissertation, we addressed the limitations of hearing aids by proposing a new hearing aid signal processing system named the Open-source Self-fitting Hearing Aids System (OS SF hearing aids). The proposed system adopts state-of-the-art digital signal processing technologies, combined with accurate hearing assessment and a machine-learning-based self-fitting algorithm, to further improve speech perception and comfort for hearing aid users. Informal testing with hearing-impaired listeners showed that results from the proposed system differed by less than 10 dB on average from those obtained with a clinical audiometer. In addition, a sixteen-channel filter bank with an adaptive differential microphone array provides up to 6 dB of SNR improvement in noisy environments, and the machine-learning-based self-fitting algorithm provides more suitable hearing aid settings. To maximize cochlear implant users' speech understanding in noise, sequential (S) and parallel (P) coding strategies were proposed by integrating high-rate desynchronized pulse trains (DPT) into the continuous interleaved sampling (CIS) strategy. Ten participants with severe hearing loss took part in two rounds of cochlear implant testing. The results showed that the CIS-DPT-S strategy significantly improved speech perception in background noise (by 11%), while the CIS-DPT-P strategy yielded significant improvements in both quiet (7%) and noisy (9%) environments.
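
    The CIS front end referred to above can be sketched as a bandpass filter bank followed by envelope extraction. The band edges, filter orders and envelope cutoff below are assumptions for illustration, not the dissertation's actual sixteen-channel settings.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000                                          # assumed sampling rate in Hz
N_CHANNELS = 16
edges = np.geomspace(200, 7000, N_CHANNELS + 1)     # assumed analysis band edges

def cis_envelopes(x, fs=FS):
    """Return one slowly varying envelope per analysis band."""
    lp = butter(2, 400, btype="low", fs=fs, output="sos")   # envelope smoothing
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(bp, x)
        env = sosfilt(lp, np.abs(band))                     # rectify, then low-pass
        envelopes.append(np.maximum(env, 0.0))
    return np.stack(envelopes)                              # shape: (channels, samples)

# One second of noise as a stand-in input; in a CIS strategy the envelopes would
# then be compressed and sampled at interleaved instants to set the amplitudes of
# non-overlapping biphasic pulses on the corresponding electrodes.
envs = cis_envelopes(np.random.randn(FS))
```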

    Adaptation by normal listeners to upward spectral shifts of speech: Implications for cochlear implants

    Multi-channel cochlear implants typically present spectral information to the wrong "place" in the auditory nerve array, because electrodes can only be inserted partway into the cochlea. Although such spectral shifts are known to cause large immediate decrements in performance in simulations, the extent to which listeners can adapt to such shifts has yet to be investigated. Here, the effects of a four-channel implant in normal listeners have been simulated, and performance tested with unshifted spectral information and with the equivalent of a 6.5-mm basalward shift on the basilar membrane (1.3-2.9 octaves, depending on frequency). As expected, the unshifted simulation led to relatively high levels of mean performance (e.g., 64% of words in sentences correctly identified), whereas the shifted simulation led to very poor results (e.g., 1% of words). However, after just nine 20-min sessions of connected discourse tracking with the shifted simulation, performance improved significantly for the identification of intervocalic consonants, medial vowels in monosyllables, and words in sentences (30% of words). Also, listeners were able to track connected discourse of shifted signals without lipreading at rates up to 40 words per minute. Although we do not know whether complete adaptation to the shifted signals is possible, it is clear that short-term experiments seriously exaggerate the long-term consequences of such spectral shifts. (C) 1999 Acoustical Society of America. [S0001-4966(99)02012-3]
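
    The relationship between a fixed basalward shift in millimetres and its octave equivalent can be illustrated with the Greenwood place-frequency function. The sketch below uses the common human-cochlea constants (A = 165.4, a = 0.06 per mm, k = 0.88) rather than the study's exact analysis bands, so the octave values are only indicative of why the same 6.5-mm shift is larger in octaves at low frequencies than at high frequencies.

```python
import numpy as np

A, a, k = 165.4, 0.06, 0.88   # Greenwood constants for the human cochlea

def place_from_freq(f_hz):
    """Distance from the apex in mm for a given characteristic frequency."""
    return np.log10(f_hz / A + k) / a

def freq_from_place(x_mm):
    """Characteristic frequency for a given distance from the apex in mm."""
    return A * (10 ** (a * x_mm) - k)

def shift_basalward(f_hz, shift_mm=6.5):
    """Frequency reached after moving the place of stimulation toward the base."""
    return freq_from_place(place_from_freq(f_hz) + shift_mm)

for f in (250, 1000, 4000):
    f_shifted = shift_basalward(f)
    print(f"{f} Hz -> {f_shifted:.0f} Hz ({np.log2(f_shifted / f):.2f} octaves)")
```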

    A Low-Power DSP Architecture for a Fully Implantable Cochlear Implant System-on-a-Chip.

    The National Science Foundation Wireless Integrated Microsystems (WIMS) Engineering Research Center at the University of Michigan developed Systems-on-a-Chip to achieve biomedical implant and environmental monitoring functionality within low-milliwatt power consumption and a 1-2 cm³ volume. The focus of this work is implantable electronics for cochlear implants (CIs), surgically implanted devices that utilize existing nerve connections between the brain and inner ear in cases where degradation of the sensory hair cells in the cochlea has occurred. In the absence of functioning hair cells, a CI processes sound information and stimulates the underlying nerve cells with currents from implanted electrodes, enabling the patient to understand speech. As the brain of the WIMS CI, the WIMS microcontroller unit (MCU) delivers the communication, signal processing, and storage capabilities required to satisfy the aggressive goals set forth. The 16-bit MCU implements a custom instruction set architecture focused on power-efficient execution, providing separate data and address register windows, multi-word arithmetic, eight addressing modes, and interrupt and subroutine support. Along with 32 KB of on-chip SRAM, a low-power 512-byte scratchpad memory is used by the WIMS custom compiler to obtain an average of 18% energy savings across benchmarks. A synthesizable dynamic frequency scaling circuit allows the chip to select a precision on-chip LC or ring oscillator and perform clock scaling to minimize power dissipation; it provides glitch-free, software-controlled frequency shifting in 100 ns and dissipates only 480 μW. A highly flexible and expandable 16-channel Continuous Interleaved Sampling Digital Signal Processor (DSP) is included as an MCU peripheral component. Modes are included to process data, stimulate through electrodes, and allow experimental stimulation or processing. The entire WIMS MCU occupies 9.18 mm² and consumes only 1.79 mW from 1.2 V in DSP mode, the lowest reported consumption for a cochlear DSP. Design methodologies were analyzed, and a new top-down design flow is presented that encourages hardware/software co-design as well as cross-domain verification early in the design process. An O(n) technique for energy-per-instruction estimation, both pre- and post-silicon, is presented that achieves less than 4% error across benchmarks. This dissertation advances low-power system design while providing an improvement in hearing recovery devices.
    Ph.D., Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/91488/1/emarsman_1.pd
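
    The O(n) energy-per-instruction idea can be sketched as a single pass over an executed-instruction trace that accumulates a per-class energy cost. The instruction classes and picojoule values below are hypothetical placeholders, not the WIMS MCU's characterised numbers.

```python
from collections import Counter

ENERGY_PJ = {        # hypothetical energy per instruction class, in picojoules
    "alu": 8.0,
    "load": 14.0,
    "store": 13.0,
    "branch": 9.0,
    "mul": 18.0,
}

def estimate_energy(trace):
    """One pass over the executed-instruction trace, so O(n) in trace length."""
    counts = Counter(trace)
    return sum(counts[cls] * ENERGY_PJ.get(cls, 10.0) for cls in counts)

# Example: a toy trace standing in for one benchmark run.
trace = ["load", "alu", "alu", "mul", "store", "branch"] * 1000
print(f"estimated energy: {estimate_energy(trace) / 1e6:.3f} microjoules")
```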

    Technology for Hearing Evaluation


    Hearing Aids

    This chapter presents an overview of the current state of hearing aids, tracing back through their history. The hearing aid, which was just a sound collector in the sixteenth century, has developed into today's digital hearing aid through miniaturization and digital signal processing, and we are now in the age of implanted hearing devices. However, currently popular implanted hearing devices place a fairly large burden on people soon after they become aware of their hearing loss, even though early auditory stimulation of the nerve can avoid accelerated cognitive decline and an increased risk of incident all-cause dementia. For this reason, we tend to stick to wearable hearing aids that are easy to put on and take off. Although the digital hearing aid has already reached its technical ceiling, noninvasive hearing aids still have severe problems that are yet to be resolved. In the second half of this chapter, we discuss scientific and technical solutions to broaden the range of people who can benefit from hearing aids.

    Close Copy Speech Synthesis for Speech Perception Testing

    The present study is concerned with developing a speech synthesis subcomponent for perception testing in the context of evaluating cochlear implants in children. We provide a detailed requirements analysis and develop a strategy for maximally high quality speech synthesis using Close Copy Speech synthesis techniques with a diphone-based speech synthesiser, MBROLA. The close copy concept used in this work defines close copy as a function from a pair consisting of a speech signal recording and a phonemic annotation aligned with the recording to the pronunciation specification interface of the speech synthesiser. The design procedure has three phases: Manual Close Copy Speech (MCCS) synthesis as a "best case gold standard", in which the function is implemented manually as a preliminary step; Automatic Close Copy Speech (ACCS) synthesis, in which the steps taken in the manual transformation are emulated by software; and finally, Parametric Close Copy Speech (PCCS) synthesis, in which prosodic parameters are modifiable while the diphones are retained. This contribution reports on the MCCS and ACCS synthesis phases.
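
    The ACCS step can be sketched as a conversion from a time-aligned phonemic annotation and an F0 track into MBROLA's .pho pronunciation-specification format, where each line carries a phoneme symbol, a duration in milliseconds and optional position/pitch pairs. The annotation, SAMPA symbols and flat pitch below are illustrative, not the project's actual pipeline.

```python
def to_pho(segments, f0_at):
    """segments: list of (phoneme, start_s, end_s); f0_at(t) -> Hz or None."""
    lines = []
    for ph, start, end in segments:
        dur_ms = round((end - start) * 1000)
        mid_f0 = f0_at((start + end) / 2)
        if mid_f0:   # voiced: one pitch target at the segment midpoint (50%)
            lines.append(f"{ph} {dur_ms} 50 {round(mid_f0)}")
        else:        # unvoiced: duration only
            lines.append(f"{ph} {dur_ms}")
    return "\n".join(lines)

# Toy annotation of "hello" in SAMPA, with a flat 120 Hz pitch for voiced segments.
segments = [("h", 0.00, 0.06), ("@", 0.06, 0.14), ("l", 0.14, 0.20),
            ("@U", 0.20, 0.38)]
print(to_pho(segments, lambda t: None if t < 0.06 else 120.0))
```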