116 research outputs found

    User-Symbiotic Speech Enhancement for Hearing Aids


    Sensory Communication

    Contains a table of contents for Section 2, an introduction, reports on eleven research projects, and a list of publications.

    National Institutes of Health Grant 5 R01 DC00117; National Institutes of Health Grant 5 R01 DC00270; National Institutes of Health Contract 2 P01 DC00361; National Institutes of Health Grant 5 R01 DC00100; National Institutes of Health Contract 7 R29 DC00428; National Institutes of Health Grant 2 R01 DC00126; U.S. Air Force - Office of Scientific Research Grant AFOSR 90-0200; U.S. Navy - Office of Naval Research Grant N00014-90-J-1935; National Institutes of Health Grant 5 R29 DC00625; U.S. Navy - Office of Naval Research Grant N00014-91-J-1454; U.S. Navy - Office of Naval Research Grant N00014-92-J-181

    Proceedings of the EAA Spatial Audio Signal Processing symposium: SASP 2019

    International audience

    Effects of errorless learning on the acquisition of velopharyngeal movement control

    Session 1pSC - Speech Communication: Cross-Linguistic Studies of Speech Sound Learning of the Languages of Hong Kong (Poster Session)

    The implicit motor learning literature suggests a benefit for learning if errors are minimized during practice. This study investigated whether the same principle holds for learning velopharyngeal movement control. Normal-speaking participants learned to produce hypernasal speech in either an errorless learning condition (in which the possibility for errors was limited) or an errorful learning condition (in which the possibility for errors was not limited). The nasality level of the participants' speech was measured by nasometer and reflected by nasalance scores (in %). Errorless learners practiced producing hypernasal speech with a threshold nasalance score of 10% at the beginning, which gradually increased to a threshold of 50% at the end. The same set of threshold targets was presented to errorful learners, but in reversed order. Errors were defined as the proportion of speech with a nasalance score below the threshold. The results showed that, relative to errorful learners (50.7% errors; mean nasalance score 31.3%), errorless learners displayed fewer errors (17.7%) and a higher mean nasalance score (46.7%) during the acquisition phase. Furthermore, errorless learners outperformed errorful learners in both retention and novel transfer tests. Acknowledgment: Supported by The University of Hong Kong Strategic Research Theme for Sciences of Learning. © 2012 Acoustical Society of America
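    As an illustration of the error measure described above, here is a minimal Python sketch of how the proportion of sub-threshold speech could be computed under the two threshold schedules. The function name, the ten-block schedule, and the toy nasalance scores are assumptions made for the example, not details taken from the study.

```python
import numpy as np

def error_proportion(nasalance_frames, threshold):
    """Proportion of speech frames whose nasalance score (%) falls below the threshold."""
    frames = np.asarray(nasalance_frames, dtype=float)
    return float(np.mean(frames < threshold))

# Hypothetical ten-block threshold schedules: errorless practice ramps the
# target nasalance up from 10% to 50%; errorful practice presents the same
# targets in reversed order, mirroring the study design.
errorless_schedule = np.linspace(10, 50, 10)
errorful_schedule = errorless_schedule[::-1]

# Toy nasalance scores for one practice block (made up for the example).
block_scores = np.random.default_rng(0).normal(loc=30, scale=10, size=200)

for label, schedule in (("errorless", errorless_schedule),
                        ("errorful", errorful_schedule)):
    print(label, "first-block error rate:",
          error_proportion(block_scores, schedule[0]))
```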

    Sensory Communication

    Contains a table of contents for Section 2 and reports on five research projects.

    National Institutes of Health Contract 2 R01 DC00117; National Institutes of Health Contract 1 R01 DC02032; National Institutes of Health Contract 2 P01 DC00361; National Institutes of Health Contract N01 DC22402; National Institutes of Health Grant R01-DC001001; National Institutes of Health Grant R01-DC00270; National Institutes of Health Grant 5 R01 DC00126; National Institutes of Health Grant R29-DC00625; U.S. Navy - Office of Naval Research Grant N00014-88-K-0604; U.S. Navy - Office of Naval Research Grant N00014-91-J-1454; U.S. Navy - Office of Naval Research Grant N00014-92-J-1814; U.S. Navy - Naval Air Warfare Center Training Systems Division Contract N61339-94-C-0087; U.S. Navy - Naval Air Warfare Center Training System Division Contract N61339-93-C-0055; U.S. Navy - Office of Naval Research Grant N00014-93-1-1198; National Aeronautics and Space Administration/Ames Research Center Grant NCC 2-77

    Data-driven Speech Intelligibility Enhancement and Prediction for Hearing Aids

    Hearing impairment is a widespread problem around the world: an estimated one in six people lives with some degree of hearing loss. Moderate and severe hearing impairment is recognised as a major cause of disability and is associated with declines in quality of life, mental illness and dementia. However, studies suggest that only 10-20% of older people with significant hearing impairment wear hearing aids. One of the main factors behind this low uptake is that current devices struggle to help hearing aid users understand speech in noisy environments. To compensate for the elevated hearing thresholds and the impaired source-separation processing of the damaged auditory system, amplification and denoising have been the main focuses of current hearing aid studies aiming to improve the intelligibility of speech in noise. It is also important to derive a metric that can fairly predict speech intelligibility, so that hearing aid techniques can be developed and compared reliably. This thesis aims to enhance the speech intelligibility of hearing impaired listeners.

    Motivated by the success of data-driven approaches in many speech processing applications, this work proposes the differentiable hearing aid speech processing (DHASP) framework to optimise both the amplification and denoising modules within a hearing aid processor. This is accomplished by setting an intelligibility-based optimisation objective and taking advantage of large-scale speech databases to train the hearing aid processor to maximise intelligibility for the listeners. The first set of experiments is conducted on both clean and noisy speech databases, and the results from objective evaluation suggest that the amplification fittings optimised within the DHASP framework can outperform a widely used and well-recognised fitting. The second set of experiments is conducted on a large-scale database with simulated domestic noisy scenes. The results from both objective and subjective evaluations show that the DHASP-optimised hearing aid processor, incorporating a deep neural network-based denoising module, can achieve competitive performance in terms of intelligibility enhancement.

    A precise intelligibility predictor can provide reliable evaluation results and save the cost of expensive, time-consuming subjective evaluation. Inspired by findings that automatic speech recognition (ASR) models show recognition results similar to those of humans in some experiments, this work exploits ASR models for intelligibility prediction. An intrusive approach using ASR hidden representations and a non-intrusive approach using ASR uncertainty are proposed and explained in the third and fourth experimental chapters. Experiments are conducted on two databases: one with monaural speech in speech-spectrum-shaped noise presented to normal hearing listeners, and one with processed binaural speech in domestic noise presented to hearing impaired listeners. Results suggest that both the intrusive and non-intrusive approaches can achieve top performance and outperform a number of widely used intelligibility prediction approaches.

    In conclusion, this thesis covers both the enhancement and the prediction of speech intelligibility for hearing aids. The hearing aid processor optimised within the proposed DHASP framework can significantly improve the intelligibility of speech in noise for hearing impaired listeners, and the proposed ASR-based intelligibility prediction approaches can achieve state-of-the-art performance against a number of widely used intelligibility predictors.
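    To make the idea of the DHASP framework concrete, below is a minimal PyTorch-style sketch of an intelligibility-driven optimisation loop: learnable per-band gains stand in for the amplification module, and a full-band cosine-similarity proxy stands in for a differentiable intelligibility objective. All names, shapes, and the proxy loss are illustrative assumptions; the thesis's actual processor and objective are more sophisticated.

```python
import torch

def intelligibility_proxy(processed, clean):
    # Negative cosine similarity between the summed full-band signals;
    # higher correlation with the clean reference is treated here as a
    # stand-in for higher predicted intelligibility.
    return -torch.nn.functional.cosine_similarity(
        processed.sum(dim=0), clean.sum(dim=0), dim=0)

torch.manual_seed(0)
n_bands, n_samples = 8, 1024
clean = torch.randn(n_bands, n_samples)                  # clean speech bands
noisy = clean + 0.5 * torch.randn(n_bands, n_samples)    # simulated noisy input

log_gains = torch.zeros(n_bands, 1, requires_grad=True)  # learnable amplification
optimiser = torch.optim.Adam([log_gains], lr=1e-2)

for step in range(200):
    optimiser.zero_grad()
    processed = noisy * torch.exp(log_gains)   # apply per-band gains
    loss = intelligibility_proxy(processed, clean)
    loss.backward()    # gradients flow end-to-end through the "processor"
    optimiser.step()

print("learned per-band gains:", torch.exp(log_gains).squeeze().tolist())
```

    The key point the sketch illustrates is that, because every stage from input to objective is differentiable, the fitting parameters can be trained by gradient descent over large speech databases rather than tuned by hand.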

    The role of spatial cues for processing speech in noise

    How can we understand speech in difficult listening conditions? This question, centered on the 'cocktail party problem', has been studied for decades with psychophysical, physiological and modelling studies, but the answer remains elusive. In the cochlea, sounds are processed through a filter bank that separates them into frequency bands, which are then sensed by different sensory neurons. All the sounds coming from a single source must be combined again in the brain to create a unified speech percept. One strategy for achieving this grouping is to use a common sound source location. In the azimuthal plane, the location of sound sources in the frequency range of human speech is perceived mainly through interaural time differences (ITDs). We studied the mechanisms of ITD processing by comparing vowel discrimination performance in noise with coherent or incoherent ITDs across auditory filters. We showed that coherent ITD cues within one auditory filter were necessary for human subjects to take advantage of spatial unmasking, but that one sound source could have different ITDs across auditory filters. We also showed that these psychophysical results are best represented in the gerbil inferior colliculus (IC) when using large neuronal populations optimized for natural spatial unmasking to discriminate the vowels in all the spatial conditions. Our results establish a parallel between human behavior and neuronal computations in the IC, highlighting the potential importance of the IC for discriminating sounds in complex spatial environments.
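    As a concrete illustration of the ITD cue discussed above, the following sketch estimates the ITD within a single frequency band by cross-correlating the two ear signals. The sample rate, band edges, and the simulated 0.3 ms delay are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100  # sample rate (Hz), assumed for the example

# Simulate a broadband source arriving 0.3 ms earlier at the left ear.
rng = np.random.default_rng(0)
source = rng.standard_normal(fs)
itd_samples = int(0.0003 * fs)          # 0.3 ms interaural time difference
left = source
right = np.roll(source, itd_samples)    # right ear lags the left ear

# Band-pass both ear signals in a single auditory-like band (500-700 Hz).
sos = butter(4, [500, 700], btype="bandpass", fs=fs, output="sos")
left_band = sosfiltfilt(sos, left)
right_band = sosfiltfilt(sos, right)

# Estimate the ITD as the lag that maximises the interaural cross-correlation,
# searching within +/- 1 ms (roughly the physiological range for humans).
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [np.dot(left_band, np.roll(right_band, -lag)) for lag in lags]
estimated_itd = lags[int(np.argmax(xcorr))] / fs
print(f"estimated ITD: {estimated_itd * 1e3:.2f} ms")  # positive = left leads
```

    Repeating this per auditory filter is one simple way to probe whether ITDs are coherent or incoherent across bands, which is the manipulation at the heart of the experiments described above.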