2,417 research outputs found

    Speech Enhancement Algorithm Based on Super-Gaussian Modeling and Orthogonal Polynomials

    Different types of noise from the surrounding environment interfere with speech and produce signals that are annoying to the human auditory system. To exchange speech information in a noisy environment, speech quality and intelligibility must be maintained, which is a challenging task. In most speech enhancement algorithms, the speech signal is characterized by Gaussian or super-Gaussian models, and noise is characterized by a Gaussian prior. However, these assumptions do not always hold in real-life situations, which degrades the estimation and, eventually, the performance of the enhancement algorithm. Accordingly, this paper focuses on deriving an optimum low-distortion estimator with models that fit speech and noise data well. The estimator achieves minimal speech distortion and residual noise, with additional improvements in perceptual aspects of speech, via four key steps. First, a recent transform based on an orthogonal polynomial maps the observed signal into a transform domain. Second, noise classification based on feature extraction is adopted to find accurate, adaptable models for noise signals. Third, two stages of nonlinear and linear estimators based on the minimum mean square error (MMSE) and the new speech and noise models are derived to estimate the clean speech signal. Finally, the estimated speech signal in the time domain is recovered by applying the inverse of the orthogonal transform. The results show that the average classification accuracy of the proposed approach is 99.43%. In addition, the proposed algorithm significantly outperforms existing speech estimators in terms of quality and intelligibility measures.

    Offline and real time noise reduction in speech signals using the discrete wavelet packet decomposition

    This thesis describes the development of an offline and real-time wavelet-based speech enhancement system that processes speech corrupted with varying amounts of white Gaussian noise and other noise types.

    Improving the robustness of the usual FBE-based ASR front-end

    All speech recognition systems require some form of signal representation that parametrically models the temporal evolution of the spectral envelope. Current parameterizations involve, either explicitly or implicitly, a set of energies from frequency bands that are often distributed on a mel scale. The computation of those filterbank energies (FBEs) always includes smoothing of basic spectral measurements and non-linear amplitude compression. A variety of linear transformations are typically applied to this time-frequency representation prior to the Hidden Markov Model (HMM) pattern-matching stage of recognition. In this paper, we discuss some robustness issues involved in both the computation of the FBEs and the subsequent linear transformations, presenting alternative techniques that can improve robustness under additive noise. In particular, the root non-linearity, a voicing-dependent FBE computation technique, and a time & frequency filtering (tiffing) technique are considered. Recognition results on the Aurora database illustrate the potential of these alternative techniques for enhancing the robustness of speech recognition systems.