73,454 research outputs found
Detection of speech signal in strong ship-radiated noise based on spectrum entropy
Comparing the frequency spectrum distributions calculated from several successive frames, the spectrum of speech frames changes more between successive frames than that of ship-radiated noise. Inaccurate sentence boundaries are a major source of error for automatic speech recognition in strong noise backgrounds, so the aim of this work is a novel speech detection algorithm for strong ship-radiated noise. Based on that characteristic, a new feature, the repeating pattern of frequency spectrum trend (RPFST), is calculated from spectrum entropy. Firstly, speech is detected roughly, with a precision of 1 s, by calculating the RPFST feature. Then the detection precision is refined to 20 ms, the frame length, by frame shifting. Finally, benchmarked on a large set of measured data, a detection accuracy of 92 % is achieved. The experimental results show the feasibility of the algorithm for all kinds of speech and ship-radiated noise
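The abstract's core cue is frame-wise spectrum entropy. As a minimal sketch (the RPFST feature itself is not specified in the abstract; frame length, FFT size, and signals here are illustrative assumptions), spectral entropy separates peaky, speech-like spectra from flat, noise-like ones:

```python
import numpy as np

def spectral_entropy(frame, n_fft=512):
    """Shannon entropy of a frame's normalized power spectrum.

    A flat (noise-like) spectrum gives high entropy; a peaky
    (tonal or speech-like) spectrum gives lower entropy.
    """
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    p = power / power.sum()   # normalize to a probability distribution
    p = p[p > 0]              # avoid log(0)
    return -np.sum(p * np.log2(p))

# A pure tone (peaky spectrum) vs. white noise (flat spectrum),
# using an illustrative 20 ms frame at 16 kHz:
t = np.arange(320) / 16000.0
tone = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
noise = rng.standard_normal(320)
assert spectral_entropy(tone) < spectral_entropy(noise)
```

Tracking how this entropy evolves across successive frames is the kind of spectrum-trend statistic the RPFST feature builds on.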
A Robust Method for Speech Emotion Recognition Based on Infinite Student’s t
The speech emotion classification method proposed in this paper is based on a Student's t-mixture model with an infinite number of components (iSMM) and can directly perform effective recognition on various kinds of speech emotion samples. Compared with the traditional GMM (Gaussian mixture model), an emotion model based on a Student's t-mixture can effectively handle the speech sample outliers that exist in the emotion feature space, and the t-mixture model remains robust to atypical emotion test data. To address the high data complexity caused by the high-dimensional feature space and the problem of insufficient training samples, a global latent space is added to the emotion model. This makes the number of components unbounded and yields the iSMM emotion model, which can automatically determine the best number of components, at lower complexity, for classifying various kinds of emotion data. Evaluated on one spontaneous (FAU Aibo Emotion Corpus) and two acted (DES and EMO-DB) universal speech emotion databases, which have high-dimensional feature samples and diverse data distributions, the iSMM maintains better recognition performance than the comparison methods. The iSMM emotion model is thus verified as a robust method whose validity and generalization extend to outliers and high-dimensional emotion features
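The robustness claim rests on the Student's t distribution's heavy tails: an outlier incurs a much smaller log-likelihood penalty than under a Gaussian, so it cannot dominate parameter estimates. A small sketch of this effect (the means, scales, and degrees of freedom here are illustrative, not the paper's):

```python
import math

def gauss_logpdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))

def student_t_logpdf(x, mu=0.0, sigma=1.0, nu=3.0):
    # Student's t with nu degrees of freedom, location mu, scale sigma
    z = (x - mu) / sigma
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return c - (nu + 1) / 2 * math.log1p(z * z / nu)

# An outlier 8 scale units out: the Gaussian log-likelihood collapses
# quadratically, while the heavy-tailed t falls off only logarithmically.
print(gauss_logpdf(8.0))      # ≈ -32.9
print(student_t_logpdf(8.0))  # ≈ -7.2
```

In a mixture, this means a single atypical test vector shifts the component responsibilities far less than it would under a GMM.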
Statistical models for noise-robust speech recognition
A standard way of improving the robustness of speech recognition systems to noise is model compensation. This replaces a speech recogniser's distributions over clean speech by ones over noise-corrupted speech. For each clean speech component, model compensation techniques usually approximate the corrupted speech distribution with a diagonal-covariance Gaussian distribution. This thesis looks into improving on this approximation in two ways: firstly, by estimating full-covariance Gaussian distributions; secondly, by approximating corrupted-speech likelihoods without any parameterised distribution.
The first part of this work is about compensating for within-component feature correlations under noise. For this, the covariance matrices of the computed Gaussians should be full instead of diagonal. The estimation of off-diagonal covariance elements turns out to be sensitive to approximations. A popular approximation is the one that state-of-the-art compensation schemes, like VTS compensation, use for dynamic coefficients: the continuous-time approximation. Standard speech recognisers contain both static, per-time-slice coefficients and dynamic coefficients, which represent signal changes over time and are normally computed from a window of static coefficients. To remove the need for the continuous-time approximation, this thesis introduces a new technique: it first compensates a distribution over the window of statics, and then applies the same linear projection that extracts dynamic coefficients. It introduces a number of methods that address the correlation changes that occur in noise within this framework. The next problem is decoding speed with full covariances. This thesis re-analyses the previously-introduced predictive linear transformations, and shows how they can model feature correlations at low and tunable computational cost.
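The key observation above is that delta (dynamic) coefficients are a fixed linear projection of a window of static coefficients, so a distribution compensated over the window can be projected the same way. A sketch with the standard regression formula (half-window size and values are illustrative):

```python
import numpy as np

# delta_t = sum_d d*(c[t+d] - c[t-d]) / (2 * sum_d d^2): stacking the
# window into one vector, the same delta is a matrix-vector product,
# which is the linear-projection view used above.
D = 2                                          # half-window size
denom = 2 * sum(d * d for d in range((1), D + 1))
weights = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) / denom  # projection row

window = np.array([1.0, 2.0, 4.0, 7.0, 11.0])  # statics c[t-2 .. t+2]
delta_direct = sum(d * (window[D + d] - window[D - d])
                   for d in range(1, D + 1)) / denom
delta_matrix = weights @ window
assert np.isclose(delta_direct, delta_matrix)  # both give 2.5
```

Because the projection is linear, a Gaussian over the stacked window maps to a Gaussian over statics-plus-deltas with full cross-covariance, with no continuous-time approximation needed.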
The second part of this work removes the Gaussian assumption completely. It introduces a sampling method that, given speech and noise distributions and a mismatch function, in the limit calculates the corrupted speech likelihood exactly. For this, it transforms the integral in the likelihood expression, and then applies sequential importance resampling. Though it is too slow to use for recognition, it enables a more fine-grained assessment of compensation techniques, based on the KL divergence to the ideal compensation for one component. The KL divergence proves to predict the word error rate well. This technique also makes it possible to evaluate the impact of approximations that standard compensation schemes make. This work was supported by Toshiba Research Europe Ltd., Cambridge Research Laboratory
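A much-simplified sketch of the idea (plain Monte Carlo, not the thesis's sequential importance resampling): under the standard log-add mismatch y ≈ log(exp(x) + exp(n)) with a small Gaussian residual standing in for the phase term, the corrupted-speech likelihood is an expectation over the clean-speech and noise priors and can be estimated by sampling. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

mu_x, var_x = 2.0, 1.0    # clean-speech component (log-spectral domain)
mu_n, var_n = 1.0, 0.25   # noise model
sigma_e = 0.1             # residual std (stand-in for the phase term)

def corrupted_loglik(y, n_samples=200_000):
    """Monte Carlo estimate of log p(y) under the log-add mismatch."""
    x = rng.normal(mu_x, np.sqrt(var_x), n_samples)
    n = rng.normal(mu_n, np.sqrt(var_n), n_samples)
    mean_y = np.logaddexp(x, n)        # mismatch function
    # p(y) = E[ N(y; mean_y, sigma_e^2) ] over the clean/noise priors
    dens = np.exp(-0.5 * ((y - mean_y) / sigma_e) ** 2) / (
        sigma_e * np.sqrt(2 * np.pi))
    return np.log(dens.mean())

# The estimated likelihood peaks near log(exp(mu_x) + exp(mu_n)):
y_mode = np.logaddexp(mu_x, mu_n)
assert corrupted_loglik(y_mode) > corrupted_loglik(y_mode + 2.0)
```

Comparing such a sampled "ideal" distribution against a diagonal-Gaussian compensation via KL divergence is what the per-component assessment above relies on.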
Bernoulli HMMs for Handwritten Text Recognition
In recent years, Hidden Markov Models (HMMs) have received significant attention in the
task of off-line handwritten text recognition (HTR). As in automatic speech recognition (ASR),
HMMs are used to model the probability of an observation sequence, given its corresponding
text transcription. However, in contrast to what happens in ASR, in HTR there is no standard
set of local features being used by most of the proposed systems. In this thesis we propose the
use of raw binary pixels as features, in conjunction with models that deal more directly with
the binary data. In particular, we propose the use of Bernoulli HMMs (BHMMs), that is, conventional
HMMs in which Gaussian (mixture) distributions have been replaced by Bernoulli
(mixture) probability functions. The objective is twofold: on the one hand, this allows us
to better model the binary nature of text images (foreground/background) using BHMMs.
On the other hand, this guarantees that no discriminative information is filtered out during
feature extraction (most available HTR datasets can be easily binarized without significant
loss of information).
In this thesis, all the HMM theory required to develop an HMM-based HTR toolkit is
reviewed and adapted to the case of BHMMs. Specifically, we begin by defining a simple
classifier based on BHMMs with Bernoulli probability functions at the states, and we end
with an embedded Bernoulli mixture HMM recognizer for continuous HTR. Regarding the
binary features, we propose a simple binary feature extraction process without significant
loss of information. All input images are scaled and binarized, in order to easily reinterpret
them as sequences of binary feature vectors. Two extensions are proposed to this basic feature
extraction method: the use of a sliding window in order to better capture the context,
and a repositioning method in order to better deal with vertical distortions. Competitive results
were obtained when BHMMs and proposed methods were applied to well-known HTR
databases. In particular, we ranked first at the Arabic Handwriting Recognition Competition
organized during the 12th International Conference on Frontiers in Handwriting Recognition
(ICFHR 2010), and at the Arabic Recognition Competition: Multi-font Multi-size Digitally
Represented Text organized during the 11th International Conference on Document Analysis
and Recognition (ICDAR 2011).
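The substitution at the heart of a BHMM is replacing each state's Gaussian mixture with a Bernoulli mixture over binary pixel vectors. A minimal sketch of that emission probability (component weights and pixel probabilities here are toy values, not trained parameters):

```python
import numpy as np

def bernoulli_mixture_logprob(o, pi, p, eps=1e-10):
    """log b(o) = log sum_k pi_k * prod_d p_kd^o_d (1 - p_kd)^(1 - o_d).

    o  : binary feature vector, shape (D,)
    pi : component weights, shape (K,)
    p  : per-component pixel probabilities, shape (K, D)
    """
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    comp = (o * np.log(p) + (1 - o) * np.log(1 - p)).sum(axis=1)
    return np.logaddexp.reduce(np.log(pi) + comp)

pi = np.array([0.5, 0.5])
p = np.array([[0.9, 0.9, 0.1],    # component 1: prototype "110"
              [0.1, 0.1, 0.9]])   # component 2: prototype "001"
close = bernoulli_mixture_logprob(np.array([1, 1, 0]), pi, p)
far = bernoulli_mixture_logprob(np.array([0, 1, 1]), pi, p)
assert close > far   # vectors matching a prototype score higher
```

Everything else in the HMM machinery (transitions, forward-backward, Viterbi decoding) is unchanged; only this emission term differs from a Gaussian-mixture HMM.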
In the last part of this thesis we propose a method for training BHMM classifiers using discriminative training criteria, instead of the conventional Maximum Likelihood Estimation
(MLE). Specifically, we propose a log-linear classifier for binary data based on the BHMM
classifier. Parameter estimation of this model can be carried out using discriminative training
criteria for log-linear models. In particular, we show the formulae for several MMI-based
criteria. Finally, we prove the equivalence between the two classifiers; hence, discriminative
training of a BHMM classifier can be carried out by training its equivalent log-linear classifier.
Reported results show that discriminative BHMMs clearly outperform conventional
generative BHMMs.
Giménez Pastor, A. (2014). Bernoulli HMMs for Handwritten Text Recognition [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37978