22 research outputs found

    Analysis of very low quality speech for mask-based enhancement

    The complexity of the speech enhancement problem has motivated many different solutions. However, most techniques address situations in which the target speech is fully intelligible and the background noise energy is low in comparison with that of the speech. Thus, while current enhancement algorithms can improve the perceived quality, the intelligibility of the speech is not increased significantly and may even be reduced. Recent research shows that the intelligibility of very noisy speech can be improved by the use of a binary mask, in which a binary weight is applied to each time-frequency bin of the input spectrogram. There are several alternative goals for the binary mask estimator, based either on the Signal-to-Noise Ratio (SNR) of each time-frequency bin or on the speech signal characteristics alone. Our approach to the binary mask estimation problem aims to preserve the important speech cues independently of the noise present by identifying time-frequency regions that contain significant speech energy. The speech power spectrum varies greatly for different types of speech sound: the energy of voiced speech sounds is concentrated in the harmonics of the fundamental frequency, while that of unvoiced sounds is distributed across a broad range of frequencies. To identify the presence of speech energy in a noisy speech signal we have therefore developed two detection algorithms. The first is a robust algorithm that identifies voiced speech segments and estimates their fundamental frequency. The second detects the presence of sibilants and estimates their energy distribution. In addition, we have developed a robust algorithm to estimate the active level of the speech. The outputs of these algorithms are combined with other features estimated from the noisy speech to form the input to a classifier which estimates a mask that accurately reflects the time-frequency distribution of speech energy even at low SNR levels. We evaluate a mask-based speech enhancer on a range of speech and noise signals and demonstrate a consistent increase in an objective intelligibility measure with respect to the noisy speech.
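    As a rough illustration of the SNR-based masking idea mentioned above (not the classifier-based estimator the abstract describes), the sketch below computes an ideal binary mask from known speech and noise spectrograms and applies it to the noisy mixture; the 0 dB local-SNR threshold and the 512-sample STFT window are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, threshold_db=0.0):
    """Keep the time-frequency bins whose local SNR exceeds threshold_db."""
    _, _, S = stft(speech, fs=fs, nperseg=512)  # speech spectrogram
    _, _, N = stft(noise, fs=fs, nperseg=512)   # noise spectrogram
    local_snr_db = 10.0 * np.log10((np.abs(S) ** 2 + 1e-12) /
                                   (np.abs(N) ** 2 + 1e-12))
    return (local_snr_db > threshold_db).astype(float)

def apply_mask(noisy, mask, fs):
    """Weight each bin of the noisy spectrogram and resynthesise."""
    _, _, Y = stft(noisy, fs=fs, nperseg=512)
    _, enhanced = istft(Y * mask, fs=fs, nperseg=512)
    return enhanced
```

    In practice the clean speech and noise are not available separately, which is why the estimator described in the abstract must predict the mask from features of the noisy signal alone.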

    A Wavelet Transform Module for a Speech Recognition Virtual Machine

    This work explores the trade-offs between time and frequency information during the feature extraction process of an automatic speech recognition (ASR) system using wavelet transform (WT) features instead of Mel-frequency cepstral coefficients (MFCCs), and the benefits of combining the WTs and the MFCCs as inputs to an ASR system. A virtual machine from the Speech Recognition Virtual Kitchen resource (www.speechkitchen.org) is used as the context for implementing a wavelet signal processing module in a speech recognition system. Contributions include a comparison of MFCCs and WT features on small and large vocabulary tasks, application of combined MFCC and WT features on a noisy environment task, and the implementation of an expanded signal processing module in an existing recognition system. The updated virtual machine, which allows straightforward comparisons of signal processing approaches, is available for research and education purposes.
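    A minimal sketch of the feature-combination idea, not the virtual machine's actual module: wavelet sub-band log-energies are stacked under the MFCCs frame by frame. The 'db4' wavelet, five decomposition levels, and the 25 ms / 10 ms framing are assumptions for illustration.

```python
import numpy as np
import librosa  # MFCC extraction
import pywt     # discrete wavelet transform

def wavelet_features(frame, wavelet="db4", level=5):
    """Log-energy of each wavelet decomposition sub-band for one frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def combined_features(y, sr, frame_len=400, hop=160, n_mfcc=13):
    """Concatenate MFCCs with wavelet sub-band energies, frame by frame."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=frame_len, hop_length=hop)
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop)
    wt = np.stack([wavelet_features(frames[:, i])
                   for i in range(frames.shape[1])], axis=1)
    n = min(mfcc.shape[1], wt.shape[1])         # align frame counts
    return np.vstack([mfcc[:, :n], wt[:, :n]])  # (n_mfcc + level + 1, n)
```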

    Models and analysis of vocal emissions for biomedical applications

    This book of Proceedings collects the papers presented at the 3rd International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications, MAVEBA 2003, held 10-12 December 2003 in Firenze, Italy. The workshop is organised every two years and aims to stimulate contact between specialists active in research and industrial development in the area of voice analysis for biomedical applications. The scope of the workshop includes all aspects of voice modelling and analysis, ranging from fundamental research to all kinds of biomedical applications and related established and advanced technologies.

    Development of algorithms for smart hearing protection devices

    In industrial environments, wearing hearing protection devices is required to protect wearers from high noise levels and prevent hearing loss. In addition to blocking excessive noise, however, hearing protectors block other signals, even those that are useful and convenient. Therefore, if people want to communicate and exchange information, they must remove their hearing protectors, which is inconvenient and can even be dangerous. To overcome the problems encountered with traditional passive hearing protection devices, this thesis outlines the steps and the process followed in developing signal processing algorithms for a hearing protector that provides protection against external noise while allowing oral communication between wearers. This hearing protector is called the "smart hearing protection device". The smart hearing protection device is a traditional hearing protector in which a miniature digital signal processor is embedded to process the incoming signals, together with a miniature microphone to pick up external signals and a miniature internal loudspeaker to transmit the processed signals to the protected ear. To enable oral communication without removing the smart hearing protectors, signal processing algorithms must be developed. The objective of this thesis is therefore to develop a noise-robust voice activity detection algorithm and a noise reduction algorithm that improve the quality and intelligibility of the speech signal. The methodology followed for the development of the algorithms is divided into three steps: first, the speech detection and noise reduction algorithms are developed; second, these algorithms are evaluated and validated in software; and third, they are implemented in the digital signal processor to validate their feasibility for the intended application. During the development of the two algorithms, the following constraints must be taken into account: the hardware resources of the digital signal processor embedded in the hearing protector (memory, number of operations per second), and the real-time constraint, since the algorithm processing time must not exceed a certain threshold if it is to avoid generating a perceptible delay between the active and passive paths of the hearing protector, or between the movement of the lips and the perceived speech.

    From a scientific perspective, the thesis determines the processing-delay thresholds that the digital signal processor must not exceed if a perceptible delay between the active and passive paths of the hearing protector is to be avoided. These thresholds were obtained from a subjective study, which found that the just-perceptible delay depends on several parameters: (a) the degree of attenuation of the hearing protector, (b) the duration of the signal, (c) the level of the background noise, and (d) the type of the background noise. The study showed that when the fit of the hearing protector is shallow, 20 % of participants begin to perceive a delay after 8 ms for a bell sound (transient), 16 ms for a clean speech signal, and 22 ms for a speech signal corrupted by babble noise. With a deep hearing protection fit, on the other hand, the perceptible delay between the two paths is 18 ms for the bell signal and 26 ms for the speech signal without noise, with no delay perceived when speech is corrupted by babble noise, showing that better attenuation allows more time for digital signal processing.
    Second, this work presents a new voice activity detection algorithm based on a low-complexity speech feature: the ratio between the signal's energy in the frequency region containing the first formant, which characterizes the speech signal, and its energy at low or high frequencies, which characterizes the noise signals. Evaluation of this algorithm against a benchmark algorithm demonstrated its selectivity, with a false positive rate averaged over three signal-to-noise ratios (SNRs) (10, 5 and 0 dB) of 4.2 % and a true positive rate of 91.4 %, compared to 29.9 % false positives and 79.0 % true positives for the benchmark algorithm. Third, this work shows that extracting the temporal envelope of a signal to generate a nonlinear, adaptive gain function reduces the background noise, improves the quality of the speech signal, and generates the least musical noise compared with three other benchmark algorithms. The development of the speech detection and noise reduction algorithms, their objective and subjective evaluations in different noise environments, and their implementation in digital signal processors validated their efficiency and low complexity for the smart hearing protection application.
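    The energy-ratio feature lends itself to a compact frame-level detector. The following is a hypothetical rendering of the idea, not the thesis implementation: the band edges (300-1000 Hz for the first-formant region, 2500-4000 Hz as a noise reference) and the decision threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def formant_energy_ratio(frame, fs, formant_band=(300.0, 1000.0),
                         noise_band=(2500.0, 4000.0)):
    """Ratio of energy in the first-formant region to energy in a
    reference (noise-dominated) band; large values suggest speech."""
    f, psd = welch(frame, fs=fs, nperseg=min(256, len(frame)))
    def band_energy(lo, hi):
        return np.sum(psd[(f >= lo) & (f < hi)]) + 1e-12
    return band_energy(*formant_band) / band_energy(*noise_band)

def simple_vad(x, fs, frame_ms=20, threshold=3.0):
    """Frame-level speech/non-speech decisions from the energy ratio."""
    n = int(fs * frame_ms / 1000)
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    return np.array([formant_energy_ratio(fr, fs) > threshold
                     for fr in frames])
```

    A fixed threshold is used here purely for illustration; a practical detector for the hearing-protection setting would adapt it to the noise level.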

    Psychoacoustics Modelling and the Recognition of Silence in Recorded Speech

    Ph.D. Thesis. Over many years, a variety of computer models intended to capture the essential differences between silence and speech have been investigated; notwithstanding this, research into a different audio model may provide fresh insight. Inspired by the unsurpassed human capability to differentiate between silence and speech under virtually any conditions, a dynamic psychoacoustics model was evaluated within a two-stage binary speech/silence non-linear classification system. The model has a temporal resolution an order of magnitude finer than that of the typical Mel Frequency Cepstral Coefficients model, and implements simultaneous masking around the most powerful harmonic in each of 24 Bark frequency bands. The first classification stage (deterministic) provides training data for the second stage (heuristic), which was implemented using a Deep Neural Network (DNN). It is authoritatively asserted in the literature, in the context of speech processing and DNNs, that performance improvements experienced with a ‘standard’ speech corpus do not always generalise. Accordingly, six new test cases were recorded; as this corpus implicitly included frequency normalisation, it was feasible to assess whether the solution generalised, and it was found that all of the test cases could be successfully processed by any of the six trained DNNs. In other tests, the performance of the two-stage silence/speech classifier was found to exceed that of the silence/speech classifiers discussed in the literature review. It was also found that the split-sample technique for neural network training did not always identify the optimal trained network; to correct this, an additional step in the training process was devised and tested. Overall, the results demonstrate that the combination of the dynamic psychoacoustics model with the two-stage binary speech/silence non-linear classification system provides a viable alternative to existing methods of detecting silence in speech.
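    As a minimal sketch of the band analysis such a model rests on, assuming Traunmüller's Bark-scale approximation and a plain windowed FFT (the thesis's masking model is more elaborate), the following locates the most powerful spectral component in each of 24 Bark bands:

```python
import numpy as np

def hz_to_bark(f):
    """Traunmueller's approximation of the Bark scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def bark_band_peaks(frame, fs, n_bands=24):
    """Frequency and power of the strongest spectral component
    in each of the first n_bands Bark bands of one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    bands = np.clip(np.floor(hz_to_bark(freqs)).astype(int), 0, n_bands - 1)
    peaks = []
    for b in range(n_bands):
        idx = np.where(bands == b)[0]
        if idx.size == 0:             # band empty at this FFT resolution
            peaks.append((None, 0.0))
            continue
        k = idx[np.argmax(spectrum[idx])]
        peaks.append((freqs[k], spectrum[k]))
    return peaks
```

    Simultaneous masking would then be applied around each per-band peak; that step is omitted here.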

    Acoustical measurements on stages of nine U.S. concert halls
