10 research outputs found

    Coding Strategies for Cochlear Implants Under Adverse Environments

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception in adverse environments such as background noise, reverberation, and band-limited channels. We propose strategies that improve the intelligibility of speech transmitted over telephone networks, reverberant speech, and speech in the presence of background noise.

    For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to the band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information; this study therefore supports the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated an added benefit from hearing aids for bimodal listeners in all four listening conditions.

    Speech understanding in acoustically reverberant environments is a difficult task for hearing-impaired listeners. Reverberant sound consists of the direct sound, early reflections, and late reflections, and the late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction (SS) to suppress the reverberant energy contributed by late reflections. Results from listening tests in two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. The proposed strategy operates with little to no prior information on the signal or the room characteristics and can therefore potentially be implemented in real-time CI speech processors.

    For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to regenerate the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work on algorithms that regenerate the harmonics of voiced segments in the presence of noise.
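    The spectral subtraction step can be illustrated with a minimal Python sketch. The delayed-power model of late reflections and all constants below are illustrative assumptions for exposition, not the implementation evaluated in the listening tests:

        # Minimal sketch: late-reverberation suppression via spectral subtraction.
        # The late-reverberant power is modelled as a delayed, scaled copy of the
        # signal power; delay_frames, scale and floor are assumed values.
        import numpy as np
        from scipy.signal import stft, istft

        def suppress_late_reverb(x, fs, frame=512, delay_frames=6, scale=0.4, floor=0.1):
            _, _, X = stft(x, fs, nperseg=frame)
            power = np.abs(X) ** 2 + 1e-12
            late = np.zeros_like(power)
            late[:, delay_frames:] = scale * power[:, :-delay_frames]
            # Subtract the late-reverb estimate and floor the result to limit
            # musical-noise artifacts from over-subtraction.
            gain = np.sqrt(np.maximum(power - late, floor * power) / power)
            _, y = istft(gain * X, fs, nperseg=frame)
            return y

    Because the gain depends only on the observed spectrogram, the sketch needs no prior knowledge of the room, which matches the abstract's claim of suitability for real-time CI processors.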

    Single-Microphone Speech Enhancement and Separation Using Deep Learning

    The cocktail party problem comprises the challenging task of understanding a speech signal in a complex acoustic environment, where multiple speakers and background noise simultaneously interfere with the speech signal of interest. A signal processing algorithm that can effectively increase the intelligibility and quality of speech signals in such complicated acoustic situations is highly desirable, especially for applications involving mobile communication devices and hearing assistive devices. With the re-emergence of machine learning techniques, today known as deep learning, the challenges involved in designing such algorithms might be overcome.

    In this PhD thesis, we study and develop deep learning-based techniques for two sub-disciplines of the cocktail party problem: single-microphone speech enhancement and single-microphone multi-talker speech separation. Specifically, we conduct an in-depth empirical analysis of the generalization capability of modern deep learning-based single-microphone speech enhancement algorithms. We show that the performance of such algorithms is closely linked to the training data, and that good generalization can be achieved with carefully designed training data. Furthermore, we propose uPIT, a deep learning-based algorithm for single-microphone speech separation, and report state-of-the-art results on a speaker-independent multi-talker speech separation task. Additionally, we show that uPIT works well for joint speech separation and enhancement without explicit prior knowledge about the noise type or the number of speakers. Finally, we show that deep learning-based speech enhancement algorithms designed to minimize the classical short-time spectral amplitude mean squared error lead to enhanced speech signals that are essentially optimal in terms of STOI, a state-of-the-art speech intelligibility estimator. (PhD thesis, 233 pages.)
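    The core of uPIT is an utterance-level permutation-invariant loss: the assignment of network outputs to reference speakers is chosen once per utterance as the permutation with the lowest total error. A minimal sketch, with assumed array shapes and a plain MSE criterion:

        # Minimal sketch of the uPIT loss. estimates and references are arrays of
        # shape (num_speakers, time, freq); the best output-to-speaker pairing is
        # selected over the whole utterance, not per frame.
        import numpy as np
        from itertools import permutations

        def upit_mse(estimates, references):
            n_spk = estimates.shape[0]
            return min(
                np.mean((estimates[list(perm)] - references) ** 2)
                for perm in permutations(range(n_spk))
            )

    Choosing the permutation over the whole utterance, rather than per frame, is what keeps each output stream locked to a single speaker.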

    Noise processing in the auditory system with applications in speech enhancement

    Abstract: The auditory system is extremely efficient at extracting auditory information in the presence of background noise. However, speech enhancement algorithms, which aim to remove the background noise from a degraded speech signal, do not achieve results near the efficacy of the auditory system. The purpose of this study is thus first to investigate how noise affects the spiking activity of neurons in the auditory system, and then to use brain activity in the presence of noise to design better speech enhancement algorithms.

    To investigate how noise affects the spiking activity of neurons, we first design a generalized linear model that relates the spiking activity of neurons to intrinsic and extrinsic covariates that can affect their activity, such as noise. From this model, we extract two metrics: one that shows the effects of noise on the spiking activity, and another that shows the relative effects of vocalization compared to noise. We use these metrics to analyze neural data, recorded from a structure of the auditory system named the inferior colliculus (IC), during the presentation of noisy vocalizations. We studied the effect of different kinds of noise (non-stationary, white, and natural stationary), different vocalizations, different input sound levels, and different signal-to-noise ratios (SNRs). We found that the presence of non-stationary noise increases the spiking activity of neurons, regardless of the SNR, input level, or vocalization type. The presence of white or natural stationary noise, however, causes a great diversity of responses, where the activity of recording sites could increase, decrease, or remain unchanged. This shows that the noise invariance previously reported in the IC depends on the noise conditions, which had not been observed before.

    We then address the problem of speech enhancement using information from the brain's processing in the presence of noise. It has been shown that the brain waves of a listener correlate strongly with the speaker to whom the listener attends. Given this, we design two speech enhancement algorithms with a denoising autoencoder structure, namely the Brain Enhanced Speech Denoiser (BESD) and the U-shaped Brain Enhanced Speech Denoiser (U-BESD). These algorithms take advantage of the attended auditory information present in the brain activity of the listener to denoise multi-talker speech. The U-BESD builds upon the BESD with the addition of skip connections and dilated convolutions. Compared to previously proposed approaches, BESD and U-BESD are each trained as a single neural architecture, lowering the complexity of the algorithm. We investigate two experimental settings: in the first, the attended speaker is known (the speaker-specific setting), and in the second, no prior information is available about the attended speaker (the speaker-independent setting). In the speaker-specific setting, we show that both BESD and U-BESD surpass a similar denoising autoencoder. Moreover, we show that in the speaker-independent setting, U-BESD surpasses the performance of the only known approach that also uses the brain's activity.
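    The generalized linear model described above can be sketched as a Poisson regression of binned spike counts on stimulus and spike-history covariates. The specific covariates, the history length, and the use of scikit-learn's PoissonRegressor are illustrative assumptions, not the study's exact model:

        # Minimal sketch: Poisson GLM relating binned spike counts to a
        # vocalization envelope (extrinsic), a noise envelope (extrinsic) and
        # recent spiking history (intrinsic). Covariate choices are assumptions.
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        def fit_spiking_glm(spike_counts, vocal_env, noise_env, history_bins=5):
            T = len(spike_counts)
            history = np.column_stack(
                [np.r_[np.zeros(k), spike_counts[:T - k]]
                 for k in range(1, history_bins + 1)]
            )
            X = np.column_stack([vocal_env, noise_env, history])
            model = PoissonRegressor(alpha=1e-3).fit(X, spike_counts)
            # The fitted weights on vocal_env and noise_env play the role of the
            # two metrics in the abstract: the effect of noise on spiking, and
            # the relative effect of vocalization compared to noise.
            return model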

    Electroacoustical simulation of listening room acoustics for project ARCHIMEDES


    Modelling, Simulation and Data Analysis in Acoustical Problems

    Modelling and simulation in acoustics is currently gaining importance. With the development and improvement of innovative computational techniques, and with the growing need for predictive models, an impressive boost has been observed in several research and application areas, such as noise control, indoor acoustics, and industrial applications. This led us to propose a Special Issue on “Modelling, Simulation and Data Analysis in Acoustical Problems”, as we believe in the importance of these topics in modern acoustics research. In total, 81 papers were submitted and 33 were published, with an acceptance rate of 37.5%. The number of submissions confirms that this is a trending topic in the scientific and academic community, and this Special Issue aims to provide a reference for the research that will be developed in the coming years.

    Preface


    Medical-Data-Models.org: A collection of freely available forms (September 2016)

    MDM-Portal (Medical Data-Models) is a meta-data repository for creating, analysing, sharing, and reusing medical forms, developed by the Institute of Medical Informatics, University of Muenster in Germany. Electronic forms for the documentation of patient data are an integral part of physicians' workflow. A huge amount of data is collected, either through routine documentation forms in electronic health records (EHRs) or as case report forms (CRFs) for clinical trials. This raises major scientific challenges for health care, since different health information systems are not necessarily compatible with each other, which hampers the exchange of structured data. Software vendors provide a variety of individual documentation forms according to their standard contracts, and these function as isolated applications. Furthermore, such forms are rarely freely available; currently, less than 5% of medical forms are freely accessible. Given this lack of transparency, harmonization of data models in health care is extremely cumbersome, and the work and know-how from completed clinical trials and routine documentation in hospitals are hard to reuse. The MDM-Portal serves as an infrastructure for academic (non-commercial) medical research and contributes a solution to this problem. It already contains more than 4,000 system-independent forms (CDISC ODM format, www.cdisc.org, Operational Data Model) with more than 380,000 data elements. This enables researchers to view, discuss, download, and export forms in the most common technical formats, such as PDF, CSV, Excel, SQL, SPSS, and R. A growing user community will lead to a growing database of medical forms; in this spirit, we would like to encourage all medical researchers to register, add forms, and discuss existing forms.
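    Since the portal exports forms in the CDISC ODM format, the data elements of a downloaded form can be listed with a few lines of standard-library Python. The namespace URI below assumes ODM v1.3, and the file name is a placeholder:

        # Minimal sketch: list the ItemDefs (data elements) of a form exported
        # in CDISC ODM format. Assumes the ODM v1.3 namespace; the file name
        # "example_form.xml" is a placeholder.
        import xml.etree.ElementTree as ET

        ODM = "{http://www.cdisc.org/ns/odm/v1.3}"

        def list_data_elements(odm_path):
            root = ET.parse(odm_path).getroot()
            return [
                (item.get("OID"), item.get("Name"), item.get("DataType"))
                for item in root.iter(ODM + "ItemDef")
            ]

        for oid, name, dtype in list_data_elements("example_form.xml"):
            print(oid, name, dtype)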