Improving speech intelligibility in hearing aids. Part I: Signal processing algorithms
The improvement of speech intelligibility in hearing aids is a traditional problem that still remains open and unsolved. Modern devices may include signal processing algorithms
to improve intelligibility: automatic gain control, automatic environmental classification, or speech enhancement. However, the design of such algorithms is strongly restricted by engineering constraints caused by the reduced dimensions of hearing aid devices. In this paper, we discuss the application of state-of-the-art signal processing algorithms to improve speech intelligibility in digital hearing aids, with particular emphasis on speech enhancement algorithms. Different alternatives for both monaural and binaural speech enhancement are considered, arguing whether or not they are
suitable for implementation in a commercial hearing aid. This work has been funded by the Spanish Ministry of Science and Innovation, under project TEC2012-38142-C04-02. Ayllón, D.; Gil Pita, R.; Rosa Zurera, M.; Padilla, L.; Piñero Sipán, MG.; Diego Antón, MD.; Ferrer Contreras, M.... (2014). Improving speech intelligibility in hearing aids. Part I: Signal processing algorithms. Waves. 6:61-71. http://hdl.handle.net/10251/57901
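One of the algorithms the abstract names, automatic gain control, can be sketched as a simple feed-forward dynamic range compressor. The following is a minimal illustrative sketch, not the paper's implementation; the function name and parameter values are assumptions:

```python
import numpy as np

def automatic_gain_control(x, fs, threshold_db=-30.0, ratio=4.0,
                           attack_ms=5.0, release_ms=50.0):
    """Feed-forward compressor: track the signal envelope and reduce
    gain when the level exceeds a threshold (one simple form of AGC)."""
    attack = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    release = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    gains = np.empty_like(x)
    for n, sample in enumerate(np.abs(x)):
        # Smooth the rectified signal: fast when rising, slow when falling.
        coeff = attack if sample > env else release
        env = coeff * env + (1.0 - coeff) * sample
        level_db = 20.0 * np.log10(max(env, 1e-9))
        # Above threshold, attenuate according to the compression ratio.
        over = max(level_db - threshold_db, 0.0)
        gains[n] = 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)
    return x * gains
```

A real hearing aid would run this per frequency band and under strict latency and power budgets, which is exactly the engineering constraint the paper discusses.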
Exploiting Hidden Representations from a DNN-based Speech Recogniser for Speech Intelligibility Prediction in Hearing-impaired Listeners
Accurate objective speech intelligibility prediction algorithms are of
great interest for many applications such as speech enhancement for hearing
aids. Most algorithms measure the signal-to-noise ratios or correlations
between the acoustic features of clean reference signals and degraded signals.
However, these hand-picked acoustic features are usually not explicitly
correlated with recognition. Meanwhile, deep neural network (DNN) based
automatic speech recogniser (ASR) is approaching human performance in some
speech recognition tasks. This work leverages the hidden representations from
DNN-based ASR as features for speech intelligibility prediction in
hearing-impaired listeners. The experiments based on a hearing aid
intelligibility database show that the proposed method makes better
predictions than a widely used short-time objective intelligibility (STOI)
based binaural measure. Comment: Submitted to INTERSPEECH202
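The idea of using hidden ASR representations as intelligibility features can be sketched as follows: pool a DNN's hidden activations over time into an utterance embedding, then fit a regressor from embeddings to measured intelligibility scores. Everything here (the toy network, the ridge regressor, the data) is a stand-in assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_representation(frames, weights, biases):
    """Forward pass through the hidden layers of a (stand-in) DNN acoustic
    model; the final hidden activations serve as intelligibility features."""
    h = frames
    for W, b in zip(weights, biases):
        h = np.tanh(h @ W + b)
    return h.mean(axis=0)  # average over time frames -> utterance embedding

# Toy "pre-trained" network: 40-dim filterbank frames -> two hidden layers.
weights = [rng.standard_normal((40, 64)) * 0.1,
           rng.standard_normal((64, 32)) * 0.1]
biases = [np.zeros(64), np.zeros(32)]

# Hypothetical training set: utterance embeddings with listener scores.
X = np.stack([hidden_representation(rng.standard_normal((100, 40)),
                                    weights, biases) for _ in range(50)])
y = rng.uniform(0.0, 1.0, size=50)  # e.g. measured word-correct rates

# Map embeddings to intelligibility with ridge regression.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
predicted = X @ w
```

The appeal over hand-picked acoustic features is that the ASR's hidden layers are already trained to be discriminative for recognition, which the abstract argues correlates better with intelligibility.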
Deep Learning-based Speech Enhancement for Real-life Applications
Speech enhancement is the process of improving speech quality and intelligibility by suppressing noise. Inspired by the outstanding performance of the deep learning approach for speech enhancement, this thesis aims to add to this research area through the following contributions. The thesis presents an experimental analysis of different deep neural networks for speech enhancement, to compare their performance and investigate factors and approaches that improve the performance. The outcomes of this analysis facilitate the development of better speech enhancement networks in this work.
Moreover, this thesis proposes a new deep convolutional denoising autoencoder-based speech enhancement architecture, in which strided and dilated convolutions are applied to improve performance while keeping network complexity to a minimum. Furthermore, a two-stage speech enhancement approach is proposed that reduces distortion by performing a speech denoising first stage in the frequency domain, followed by a second speech reconstruction stage in the time domain. This approach was shown to reduce speech distortion, leading to better overall quality of the processed speech in comparison to state-of-the-art speech enhancement models.
Finally, the work presents two deep neural network speech enhancement architectures for hearing aids and automatic speech recognition, as two real-world speech enhancement applications. A smart speech enhancement architecture was proposed for hearing aids, which is an integrated hearing aid and alert system. This architecture enhances both speech and important emergency noise, and only eliminates undesired noise. The results show that this idea is applicable to improving the performance of hearing aids. On the other hand, the architecture proposed for automatic speech recognition solves the mismatch issue between speech enhancement and automatic speech recognition systems, leading to a significant reduction in the word error rate of a baseline automatic speech recognition system, provided by Intelligent Voice for research purposes. In conclusion, the results presented in this thesis show promising performance for the proposed architectures for real-time speech enhancement applications.
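The motivation for the dilated convolutions mentioned above is that stacking layers with growing dilation expands the receptive field exponentially while the kernel stays small. A minimal numeric sketch (this naive loop is for illustration only, not the thesis architecture):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1, stride=1):
    """Naive 1-D convolution with dilation and stride (no padding)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    n_out = (len(x) - span) // stride + 1
    out = np.empty(n_out)
    for i in range(n_out):
        window = x[i * stride : i * stride + span : dilation]
        out[i] = window @ kernel
    return out

# Stacking kernel-size-3 layers with dilations 1, 2, 4 gives each output a
# receptive field of 15 input samples, with only 9 weights in total.
x = np.arange(32, dtype=float)
h = dilated_conv1d(x, np.ones(3) / 3, dilation=1)
h = dilated_conv1d(h, np.ones(3) / 3, dilation=2)
h = dilated_conv1d(h, np.ones(3) / 3, dilation=4)
```

Strided convolutions play the complementary role of downsampling in the encoder, so the decoder can reconstruct from a compact representation.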
Incorporating Ultrasound Tongue Images for Audio-Visual Speech Enhancement through Knowledge Distillation
Audio-visual speech enhancement (AV-SE) aims to enhance degraded speech along
with extra visual information such as lip videos, and has been shown to be more
effective than audio-only speech enhancement. This paper proposes further
incorporating ultrasound tongue images to improve lip-based AV-SE systems'
performance. Knowledge distillation is employed at the training stage to
address the challenge of acquiring ultrasound tongue images during inference,
enabling an audio-lip speech enhancement student model to learn from a
pre-trained audio-lip-tongue speech enhancement teacher model. Experimental
results demonstrate significant improvements in the quality and intelligibility
of the speech enhanced by the proposed method compared to the traditional
audio-lip speech enhancement baselines. Further analysis using phone error
rates (PER) of automatic speech recognition (ASR) shows that palatal and velar
consonants benefit most from the introduction of ultrasound tongue images. Comment: To be published in InterSpeech 202
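The knowledge distillation setup described above can be sketched as a combined training loss: a supervised enhancement term plus a term pulling the audio-lip student's output toward the audio-lip-tongue teacher's. This formulation and the weighting are assumptions for illustration; the paper's exact losses may differ:

```python
import numpy as np

def distillation_loss(student_mask, teacher_mask, clean_mag, noisy_mag,
                      alpha=0.5):
    """Student loss = supervised enhancement error on the clean target,
    plus a distillation term matching the teacher's predicted mask.
    At inference only the student runs, so no tongue images are needed."""
    supervised = np.mean((student_mask * noisy_mag - clean_mag) ** 2)
    distill = np.mean((student_mask - teacher_mask) ** 2)
    return alpha * supervised + (1.0 - alpha) * distill
```

The key property is that the tongue modality is consumed only by the pre-trained teacher during training, matching the abstract's point about ultrasound being hard to acquire at inference time.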
The listening talker: A review of human and algorithmic context-induced modifications of speech
Speech output technology is finding widespread application, including in scenarios where intelligibility might be compromised - at least for some listeners - by adverse conditions. Unlike most current algorithms, talkers continually adapt their speech patterns as a response to the immediate context of spoken communication, where the type of interlocutor and the environment are the dominant situational factors influencing speech production. Observations of talker behaviour can motivate the design of more robust speech output algorithms. Starting with a listener-oriented categorisation of possible goals for speech modification, this review article summarises the extensive set of behavioural findings related to human speech modification, identifies which factors appear to be beneficial, and goes on to examine previous computational attempts to improve intelligibility in noise. The review concludes by tabulating 46 speech modifications, many of which have yet to be perceptually or algorithmically evaluated. Consequently, the review provides a roadmap for future work in improving the robustness of speech output.
Effects of Lombard Reflex on the Performance of Deep-Learning-Based Audio-Visual Speech Enhancement Systems
Humans tend to change their way of speaking when they are immersed in a noisy
environment, a reflex known as the Lombard effect. Current speech enhancement
systems based on deep learning do not usually take into account this change in
the speaking style, because they are trained with neutral (non-Lombard) speech
utterances recorded under quiet conditions to which noise is artificially
added. In this paper, we investigate the effects that the Lombard reflex has on
the performance of audio-visual speech enhancement systems based on deep
learning. The results show a performance gap of as much as approximately 5 dB
between systems trained on neutral speech and those trained on Lombard speech.
This indicates the benefit of taking into account the mismatch between neutral
and Lombard speech in the design of audio-visual speech enhancement systems.
SE-Bridge: Speech Enhancement with Consistent Brownian Bridge
We propose SE-Bridge, a novel method for speech enhancement (SE). Following
recent applications of diffusion models to speech enhancement, SE can be
achieved by solving a stochastic differential equation (SDE). Each SDE
corresponds to a probability flow ordinary differential equation (PF-ODE),
and the trajectory of the PF-ODE solution consists of the speech states at
different moments. Our approach is based on a consistency model that ensures
that any speech states on the same PF-ODE trajectory correspond to the same
initial state. By integrating the Brownian bridge process, the model is able
to generate high-intelligibility speech samples without adversarial training.
This is the first attempt to apply consistency models to the SE task,
achieving state-of-the-art results in several metrics while requiring 15x
less sampling time than the diffusion-based baseline. Our experiments on
multiple datasets demonstrate the effectiveness of SE-Bridge in SE.
Furthermore, we show through extensive experiments on downstream tasks,
including Automatic Speech Recognition (ASR) and Speaker Verification (SV),
that SE-Bridge can effectively support multiple downstream tasks.
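As a rough sketch of the ingredients named in the abstract (notation assumed here, not taken from the paper): a standard Brownian bridge from a clean-speech state $x_0$ to a noisy observation $y$ over $t \in [0, 1)$ follows

```latex
\mathrm{d}x_t = \frac{y - x_t}{1 - t}\,\mathrm{d}t + \sigma\,\mathrm{d}W_t,
\qquad x_0 = x_{\mathrm{clean}},
```

which is pinned to $y$ as $t \to 1$. The consistency property then asks that a learned function map every state on one trajectory back to the same starting point,

```latex
f_\theta(x_t, t) \approx f_\theta(x_{t'}, t') \approx x_0
\quad \text{for all } t, t' \text{ on the same PF-ODE trajectory},
```

which is what allows one-step (or few-step) sampling and explains the reported reduction in sampling time relative to iterative diffusion baselines.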