A Study into Speech Enhancement Techniques in Adverse Environment
This dissertation developed speech enhancement techniques that improve speech quality in applications such as mobile communications, teleconferencing and smart loudspeakers. For these applications it is necessary to suppress both noise and reverberation. The contribution of this dissertation is therefore twofold: a single-channel speech enhancement system that exploits the temporal and spectral diversity of the received microphone signal for noise suppression, and a multi-channel speech enhancement method that employs spatial diversity to reduce reverberation.
A Multi-Channel Noise Estimator Based on Improved Minima Controlled Recursive Averaging for Speech Enhancement
This article introduces an extension of the improved minima-controlled recursive averaging noise estimator from single-channel to multi-channel speech enhancement systems. By fully exploiting the spatial information in the microphone array signals, a more accurate estimate of the noise spectrum can be obtained than with the single-channel counterpart. Computer simulations demonstrate the superior performance of the proposed noise estimator in terms of noise tracking and noise estimation error. Furthermore, using the proposed technique with the multi-channel Wiener filter yields improved signal-to-noise ratio and reduced speech distortion.
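The single-channel estimator that this work extends tracks spectral minima to decide when it is safe to update the noise estimate. A simplified single-channel sketch is shown below; the function name, parameter values, and the crude one-threshold speech-presence flag are illustrative simplifications, not the paper's method:

```python
import numpy as np

def mcra_noise_psd(power_spec, alpha=0.95, alpha_d=0.85, win=50, delta=5.0):
    """Estimate the noise PSD from a (frames, bins) array of
    magnitude-squared STFT values via minima-controlled recursive
    averaging. Returns a (frames, bins) array of per-frame estimates."""
    smoothed = power_spec[0].copy()  # recursively smoothed periodogram
    noise = power_spec[0].copy()     # running noise PSD estimate
    s_min = smoothed.copy()          # tracked spectral minimum
    s_tmp = smoothed.copy()          # minimum within the current window
    est = np.empty_like(power_spec, dtype=float)
    for t in range(power_spec.shape[0]):
        smoothed = alpha * smoothed + (1.0 - alpha) * power_spec[t]
        if t > 0 and t % win == 0:   # restart the minimum search window
            s_min = np.minimum(s_tmp, smoothed)
            s_tmp = smoothed.copy()
        else:
            s_min = np.minimum(s_min, smoothed)
            s_tmp = np.minimum(s_tmp, smoothed)
        # Crude speech-presence flag: energy well above the tracked minimum.
        presence = (smoothed > delta * s_min).astype(float)
        # Freeze the noise update while speech is flagged as present.
        a_d = alpha_d + (1.0 - alpha_d) * presence
        noise = a_d * noise + (1.0 - a_d) * power_spec[t]
        est[t] = noise
    return est
```

On stationary input the estimate converges to the true noise level; the multi-channel extension described in the article additionally pools spatial information across microphones, which this single-channel sketch does not attempt.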
Speech enhancement in binaural hearing protection devices
The ability of people to operate safely and effectively under extreme noise conditions depends on their access to adequate voice communication while using hearing protection. This thesis develops speech enhancement algorithms that can be implemented in binaural hearing protection devices to improve communication and situational awareness in the workplace. The developed algorithms, which emphasize low computational complexity, are capable of suppressing noise while enhancing speech.
DNN-based mask estimation for distributed speech enhancement in spatially unconstrained microphone arrays
Deep neural network (DNN)-based speech enhancement algorithms in microphone arrays have proven to be efficient solutions for speech understanding and speech recognition in noisy environments. However, in the context of ad-hoc microphone arrays, many challenges remain and raise the need for distributed processing. In this paper, we propose to extend a previously introduced distributed DNN-based time-frequency mask estimation scheme that can efficiently use spatial information in the form of so-called compressed signals, which are pre-filtered target estimates. We study the performance of this algorithm under realistic acoustic conditions and investigate practical aspects of its optimal application. We show that the nodes in the microphone array cooperate by taking advantage of their spatial coverage of the room. We also propose to use the compressed signals to convey not only the target estimate but also the noise estimate, in order to exploit the acoustic diversity recorded throughout the microphone array.
Comment: Submitted to TASL
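However the time-frequency mask is estimated, applying it is a simple element-wise product in the STFT domain followed by resynthesis. A minimal sketch, assuming SciPy is available (function name and parameter values are illustrative):

```python
import numpy as np
from scipy.signal import stft, istft

def apply_tf_mask(noisy, mask, fs=16000, nperseg=512, noverlap=384):
    """Multiply the STFT of a noisy waveform by a real-valued
    time-frequency mask (values in [0, 1]) and resynthesise via the
    inverse STFT. `mask` must match the STFT shape (freq_bins, frames)."""
    _, _, Z = stft(noisy, fs=fs, nperseg=nperseg, noverlap=noverlap)
    if mask.shape != Z.shape:
        raise ValueError(f"mask shape {mask.shape} != STFT shape {Z.shape}")
    _, enhanced = istft(Z * mask, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return enhanced
```

With 75% overlap and SciPy's default Hann window, an all-ones mask reconstructs the input waveform up to floating-point precision, which is a useful sanity check before plugging in an estimated mask.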
Objective Assessment of Machine Learning Algorithms for Speech Enhancement in Hearing Aids
Speech enhancement in assistive hearing devices has been an area of research for many decades. Noise reduction is particularly challenging because of the wide variety of noise sources and the non-stationarity of speech and noise. Digital signal processing (DSP) algorithms deployed in modern hearing aids for noise reduction rely on certain assumptions about the statistical properties of undesired signals. This can hinder accurate estimation of different noise types, which in turn leads to suboptimal noise reduction. In this research, a relatively unexplored technique based on deep learning, namely a Recurrent Neural Network (RNN), is used to perform noise reduction and dereverberation for hearing-impaired listeners. For noise reduction, the performance of the deep learning model was evaluated objectively and compared with that of open Master Hearing Aid (openMHA), a conventional signal-processing-based framework, and of a Deep Neural Network (DNN)-based model. The RNN model was found to suppress noise and improve speech understanding better than both the conventional hearing aid noise reduction algorithm and the DNN model. The same RNN model was shown to reduce reverberation components given proper training. A real-time implementation of the deep learning model is also discussed.
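The conventional statistical-model-based noise reduction that such models are compared against typically applies a per-bin spectral suppression gain derived from noisy and noise power estimates. A minimal Wiener-style gain rule illustrates the idea (the function name and floor value are illustrative, not taken from any specific hearing aid):

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, gain_floor=0.1):
    """Per-bin Wiener-style suppression gain G = max(1 - noise/noisy, floor).
    A gain floor limits musical-noise artifacts at the cost of some
    residual noise."""
    eps = 1e-12  # guard against division by zero in silent bins
    gain = 1.0 - noise_psd / np.maximum(noisy_psd, eps)
    return np.maximum(gain, gain_floor)
```

The gain is multiplied onto the noisy STFT before resynthesis; bins dominated by noise are attenuated toward the floor, while high-SNR bins pass nearly unchanged. The accuracy of `noise_psd` is precisely where the statistical assumptions criticized above enter.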
Towards An Intelligent Fuzzy Based Multimodal Two Stage Speech Enhancement System
This thesis presents a novel two stage multimodal speech enhancement system, making use of both visual and audio information to filter speech, and explores the extension of this system with the use of fuzzy logic to demonstrate proof of concept for an envisaged autonomous, adaptive, and context aware multimodal system. The design of the proposed cognitively inspired framework is scalable: the techniques used in individual parts of the system can be upgraded, and there is scope for the initial framework presented here to be expanded.
In the proposed system, the concept of single-modality two-stage filtering is extended to include the visual modality. Noisy speech received by a microphone array is first pre-processed by visually derived Wiener filtering, employing the novel use of the Gaussian Mixture Regression (GMR) technique and making use of associated visual speech information extracted with a state-of-the-art Semi Adaptive Appearance Models (SAAM) based lip tracking approach. This pre-processed speech is then enhanced further by audio-only beamforming using a state-of-the-art Transfer Function Generalised Sidelobe Canceller (TFGSC) approach. The resulting system is designed to function in challenging noisy speech environments (using speech sentences with different speakers from the GRID corpus and a range of noise recordings). Both objective and subjective test results (employing the widely used Perceptual Evaluation of Speech Quality (PESQ) measure, a composite objective measure, and subjective listening tests) show that this initial system is capable of delivering very encouraging results when filtering speech mixtures in difficult reverberant environments.
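The second, audio-only stage relies on spatial filtering across the microphone array. The TFGSC itself is involved, but the underlying principle can be illustrated with a much simpler delay-and-sum beamformer (a stand-in for illustration only, not the TFGSC used in the thesis):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Align each channel by its integer-sample delay and average.
    `channels`: list of 1-D arrays; `delays`: per-channel delay in
    samples (how many samples later the source reaches that microphone)."""
    length = min(len(c) - d for c, d in zip(channels, delays))
    aligned = np.stack([c[d:d + length] for c, d in zip(channels, delays)])
    # Coherent speech adds constructively; uncorrelated noise averages out.
    return aligned.mean(axis=0)
```

A GSC-style structure builds on this: a fixed beamformer like the above provides the target reference, while an adaptive sidelobe-cancelling path subtracts whatever leaks through a blocking matrix.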
Some limitations of this initial framework are identified, and the extension of the multimodal system is explored through the development of a fuzzy logic based framework and a proof of concept demonstration. Results show that the proposed autonomous, adaptive, and context aware multimodal framework is capable of delivering very positive results in difficult noisy speech environments, with cognitively inspired use of audio and visual information depending on environmental conditions. Finally, some concluding remarks are made along with proposals for future work.