Multi-candidate missing data imputation for robust speech recognition
The application of Missing Data Techniques (MDT) to increase the noise robustness of HMM/GMM-based large vocabulary speech recognizers is hampered by a large computational burden. The likelihood evaluations imply solving many constrained least squares (CLSQ) optimization problems. As an alternative, researchers have proposed frontend MDT or have made oversimplifying independence assumptions for the backend acoustic model. In this article, we propose a fast Multi-Candidate (MC) approach that solves the per-Gaussian CLSQ problems approximately by selecting the best from a small set of candidate solutions, which are generated as the MDT solutions on a reduced set of cluster Gaussians. Experiments show that MC MDT runs as fast as the uncompensated recognizer while achieving the accuracy of the full backend optimization approach. The experiments also show that exploiting the more accurate acoustic model of the backend pays off in terms of accuracy when compared to frontend MDT. © 2012 Wang and Van hamme; licensee Springer. Wang Y., Van hamme H., ''Multi-candidate missing data imputation for robust speech recognition'', EURASIP Journal on Audio, Speech, and Music Processing, vol. 17, 20 pp., 2012. status: published
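The multi-candidate selection step described in the abstract can be illustrated with a short sketch. All dimensions, masks, and Gaussians below are toy stand-ins invented for illustration; in the paper the candidates come from per-cluster constrained least-squares solves, which are replaced here by simple cluster-mean fills.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: D-dimensional features, some dimensions unreliable ("missing").
D = 8
observed = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=bool)  # reliability mask
x_obs = rng.normal(size=D)  # observation; masked dims are noise-dominated

# A small set of "cluster" Gaussians used to generate candidate imputations.
cluster_means = rng.normal(size=(4, D))

# Candidate generation: keep observed dims, fill missing dims per cluster
# (a stand-in for the per-cluster constrained least-squares solution).
candidates = np.where(observed, x_obs, cluster_means)  # shape (4, D)

# Backend acoustic-model Gaussians (thousands in a real recognizer).
gmm_means = rng.normal(size=(16, D))
gmm_vars = np.full((16, D), 1.0)

def diag_loglik(x, mean, var):
    """Log-density of diagonal-covariance Gaussians, summed over dims."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var),
                         axis=-1)

# Multi-candidate step: for each backend Gaussian, score all candidates and
# keep the best, instead of solving a fresh CLSQ problem per Gaussian.
ll = diag_loglik(candidates[None, :, :],
                 gmm_means[:, None, :], gmm_vars[:, None, :])  # (16, 4)
best_ll = ll.max(axis=1)  # per-Gaussian likelihood used in decoding
```

The saving is that the expensive optimization runs only once per cluster Gaussian, while every backend Gaussian pays just a cheap max over a handful of candidate scores.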
Speech Recognition
Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals and methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other speech processing applications that are able to operate in real-world environments, like mobile communication services and smart homes.
{\sc CosmoNet}: fast cosmological parameter estimation in non-flat models using neural networks
We present a further development of a method for accelerating the calculation
of CMB power spectra, matter power spectra and likelihood functions for use in
cosmological Bayesian inference. The algorithm, called {\sc CosmoNet}, is based
on training a multilayer perceptron neural network. We compute CMB power
spectra (up to ) and matter transfer functions over a hypercube in
parameter space encompassing the confidence region of a selection of
CMB (WMAP + high resolution experiments) and large scale structure surveys (2dF
and SDSS). We work in the framework of a generic 7 parameter non-flat
cosmology. Additionally we use {\sc CosmoNet} to compute the WMAP 3-year, 2dF
and SDSS likelihoods over the same region. We find that the average error in
the power spectra is typically well below cosmic variance for spectra, and that
the experimental likelihoods are calculated to within a fraction of a log unit. We
demonstrate that marginalised posteriors generated with {\sc CosmoNet} spectra
agree to within a few percent of those generated by {\sc CAMB} parallelised
over 4 CPUs, but are obtained 2-3 times faster on just a \emph{single}
processor. Furthermore, posteriors generated directly via {\sc CosmoNet}
likelihoods can be obtained in less than 30 minutes on a single processor,
corresponding to a speed up of a factor of . We also demonstrate the
capabilities of {\sc CosmoNet} by extending the CMB power spectra and matter
transfer function training to a more generic 10 parameter cosmological model,
including tensor modes, a varying equation of state of dark energy and massive
neutrinos. {\sc CosmoNet} and interfaces to both {\sc CosmoMC} and {\sc
Bayesys} are publicly available at {\tt
www.mrao.cam.ac.uk/software/cosmonet}. Comment: 8 pages, submitted to MNRAS
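The emulation idea behind {\sc CosmoNet} can be sketched generically: train a one-hidden-layer perceptron on samples of an expensive forward model over a parameter hypercube, then evaluate the cheap network in its place inside an inference loop. The toy `slow_model`, network sizes, and training details below are invented for illustration and are not the CosmoNet code.

```python
import numpy as np

rng = np.random.default_rng(1)

n_params, n_out, hidden = 7, 5, 32
weights_true = rng.normal(size=(n_out, n_params))

# Toy stand-in for the expensive forward model (e.g. a Boltzmann code
# mapping cosmological parameters to power-spectrum values).
def slow_model(theta):
    return np.stack([np.sin(theta @ w) for w in weights_true], axis=-1)

# Training data sampled over a hypercube in parameter space.
X = rng.uniform(-1.0, 1.0, size=(2000, n_params))
Y = slow_model(X)

# One-hidden-layer perceptron trained by plain gradient descent on MSE.
W1 = rng.normal(size=(n_params, hidden)) * 0.3
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, n_out)) * 0.3
b2 = np.zeros(n_out)

mse_init = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)

lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = H @ W2 + b2 - Y             # prediction error
    # Backpropagation for the mean-squared-error loss.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse_final = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)

# The trained network now replaces slow_model inside e.g. an MCMC sampler.
theta = rng.uniform(-1.0, 1.0, size=(1, n_params))
fast = np.tanh(theta @ W1 + b1) @ W2 + b2  # shape (1, n_out)
```

The speed-up comes from replacing every forward-model call in the sampler with two matrix multiplications; the one-off training cost is amortized over the many likelihood evaluations of a posterior run.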