Noise Variance Estimation In Signal Processing
We present a new method for estimating noise variance. The method
is applicable to 1D and 2D signal processing. Its essence is the
estimation of the scatter of normally distributed data with a high
level of outliers, and it applies to data in which the majority of
data points contain no signal. The method is based on the shortest
half sample. The mean of the shortest half sample (the shorth) and
the location of the least median of squares are among the most
robust measures of the location of the mode. The length of the
shortest half sample has previously been used as a measure of the
scatter of uncontaminated data. We show that computing the lengths
of several subsamples of varying sizes provides the information
needed to estimate both the scatter and the number of
uncontaminated data points in a sample. We derive a system of
equations that can be solved for the data scatter and the number
of uncontaminated data points for the Gaussian distribution; the
data scatter serves as the measure of the noise variance. The
method can be extended to other distributions.
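As a concrete illustration, the basic shortest-half-sample location and scale estimate (before the subsample-based correction for contamination that the abstract describes) can be sketched as follows. The Gaussian consistency constant 2 × 0.6745 ≈ 1.349 (twice the 0.75 quantile of the standard normal) is standard; the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def shorth_estimate(x):
    """Location and scatter via the shortest half sample.

    Sorts the data, slides a window containing just over half the
    points, and picks the window of minimal length.  For Gaussian
    data the shortest half spans the central 50% of the mass, so
    its length is about 2 * 0.6745 * sigma; dividing by that
    constant gives a roughly consistent scale estimate.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    h = n // 2 + 1                          # size of the "half" sample
    lengths = x[h - 1:] - x[: n - h + 1]    # length of every window of size h
    i = int(np.argmin(lengths))
    location = x[i : i + h].mean()          # the shorth: mean of the shortest half
    scatter = lengths[i] / (2 * 0.6745)     # Gaussian consistency factor
    return location, scatter
```

With heavy contamination this raw estimate is biased upward, because the window of size n/2 + 1 must absorb some outlier mass; that bias is precisely why the abstract computes the lengths of several subsamples of varying sizes and solves a system of equations for both the true scatter and the number of uncontaminated points.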
A New Method for Speech Denoising and Robust Speech Recognition Using Probabilistic Models for Clean Speech and for Noise
We present a new method for speech denoising and robust speech recognition. Using the framework of probabilistic models allows us to integrate detailed speech models and models of realistic non-stationary noise signals in a principled manner. The framework transforms the denoising problem into a problem of Bayes-optimal signal estimation, producing minimum mean square error (MMSE) estimators of desired features of clean speech from noisy data. We describe a fast and efficient implementation of an algorithm that computes these estimators. The effectiveness of this algorithm is demonstrated in robust speech recognition experiments, using the Wall Street Journal speech corpus and the Microsoft Whisper large-vocabulary continuous speech recognizer. Results show significantly lower word error rates than those obtained under the noisy-matched condition. In particular, when the denoising algorithm is applied to the noisy training data and the recognizer is subsequently retrained, very low error rates are obtained.
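The MMSE estimator the abstract refers to has a simple closed form when the clean-feature prior is modeled as a Gaussian mixture and the noise is additive Gaussian: a posterior-weighted combination of per-component Wiener estimates. The sketch below shows that one-dimensional idea only; it is not the paper's actual speech-model implementation, and all names and parameters here are illustrative:

```python
import numpy as np

def mmse_estimate(y, weights, means, variances, noise_var):
    """MMSE estimate E[s | y] for y = s + n, where the clean signal s
    follows a Gaussian mixture and n is zero-mean Gaussian noise.

    Each mixture component k contributes a Wiener-style estimate
    mu_k + v_k / (v_k + v_n) * (y - mu_k), weighted by the posterior
    probability of component k given the noisy observation.
    """
    y = np.atleast_1d(np.asarray(y, dtype=float))
    tv = variances + noise_var                  # variance of y under component k
    # likelihood of each observation under each component, w_k * p(y | k)
    lik = (weights
           * np.exp(-((y[:, None] - means) ** 2) / (2 * tv))
           / np.sqrt(2 * np.pi * tv))
    post = lik / lik.sum(axis=1, keepdims=True)             # p(k | y)
    cond = means + (variances / tv) * (y[:, None] - means)  # E[s | y, k]
    return (post * cond).sum(axis=1)            # posterior-weighted average
```

With a single zero-mean component this reduces to the classical Wiener gain v / (v + v_n) applied to y, which makes the sketch easy to sanity-check.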