Phoneme and sentence-level ensembles for speech recognition
We address the question of whether and how boosting and bagging can be used for speech recognition. In order to do this, we compare two different boosting schemes, one at the phoneme level and one at the utterance level, with a phoneme-level bagging scheme. We control for many parameters and other choices, such as the state inference scheme used. In an unbiased experiment, we clearly show that the gain of boosting methods compared to a single hidden Markov model is in all cases only marginal, while bagging significantly outperforms all other methods. We thus conclude that bagging methods, which have so far been overlooked in favour of boosting, should be examined more closely as a potentially useful ensemble learning technique for speech recognition.
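The bagging scheme the abstract favours rests on a simple recipe: train each ensemble member on a bootstrap resample of the training data, then combine members by majority vote. A minimal, self-contained sketch with decision stumps on toy 1-D data (the data, stump learner, and all names here are illustrative; the paper's ensembles are built from hidden Markov model phoneme classifiers, not stumps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D two-class data (illustrative stand-in for acoustic features).
X = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

def fit_stump(X, y):
    """Pick the threshold that best separates class 0 (below) from class 1 (above)."""
    thresholds = np.linspace(X.min(), X.max(), 50)
    return max(thresholds, key=lambda t: np.mean((X > t) == y))

def bagged_predict(X_train, y_train, X_test, n_models=25):
    """Bagging: fit each stump on a bootstrap resample, then majority-vote."""
    votes = np.zeros(len(X_test))
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), len(X_train))  # bootstrap resample
        t = fit_stump(X_train[idx], y_train[idx])
        votes += (X_test > t)
    return (votes / n_models > 0.5).astype(int)

pred = bagged_predict(X, y, X)
print(np.mean(pred == y))  # training accuracy of the bagged ensemble
```

Boosting differs only in how resampling is done: instead of uniform bootstrap draws, each round reweights toward examples the current ensemble misclassifies.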
In All Likelihood, Deep Belief Is Not Enough
Statistical models of natural stimuli provide an important tool for
researchers in the fields of machine learning and computational neuroscience. A
canonical way to quantitatively assess and compare the performance of
statistical models is given by the likelihood. One class of statistical models
which has recently gained increasing popularity and has been applied to a
variety of complex data is that of deep belief networks. Analyses of these models,
however, have been typically limited to qualitative analyses based on samples
due to the computationally intractable nature of the model likelihood.
Motivated by these circumstances, the present article provides a consistent
estimator for the likelihood that is both computationally tractable and simple
to apply in practice. Using this estimator, a deep belief network which has
been suggested for the modeling of natural image patches is quantitatively
investigated and compared to other models of natural image patches. Contrary to
earlier claims based on qualitative results, the results presented in this
article provide evidence that the model under investigation is not a
particularly good model for natural image patches.
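The core idea behind such a likelihood estimator can be illustrated in miniature: when the marginal likelihood is an intractable sum over hidden states, a Monte Carlo average over sampled hidden states is a consistent estimator of it. A toy sketch (the model, weights, and dimensions below are invented for illustration; the paper's estimator targets deep belief networks, where the hidden state space is far too large to enumerate):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 8  # small enough to enumerate all 2^d hidden states exactly
W = rng.normal(0, 1, d)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Toy model: h ~ Uniform{0,1}^d, x | h ~ Bernoulli(sigmoid(W . h)).
# Exact marginal p(x=1) by brute-force enumeration (tractable only here).
all_h = np.array(list(itertools.product([0, 1], repeat=d)))
exact = sigmoid(all_h @ W).mean()

# Consistent Monte Carlo estimator: sample h from its prior,
# average p(x=1 | h); converges to the exact marginal as n grows.
h_samples = rng.integers(0, 2, size=(100_000, d))
estimate = sigmoid(h_samples @ W).mean()

print(exact, estimate)
```

In a real deep belief network the prior over hidden states is itself intractable, which is why a more careful estimator is needed; the sketch shows only the principle of trading an intractable sum for a sampled average.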
Bayesian Nonparametric Density Autoregression with Lag Selection
We develop a Bayesian nonparametric autoregressive model applied to flexibly
estimate general transition densities exhibiting nonlinear lag dependence. Our
approach is related to Bayesian density regression using Dirichlet process
mixtures, with the Markovian likelihood defined through the conditional
distribution obtained from the mixture. This results in a Bayesian
nonparametric extension of a mixtures-of-experts model formulation. We address
computational challenges to posterior sampling that arise from the Markovian
structure in the likelihood. The base model is illustrated with synthetic data
from a classical model for population dynamics, as well as a series of waiting
times between eruptions of Old Faithful Geyser. We study inferences available
through the base model before extending the methodology to include automatic
relevance detection among a pre-specified set of lags. Inference for global and
local lag selection is explored with additional simulation studies, and the
methods are illustrated through analysis of an annual time series of pink
salmon abundance in a stream in Alaska. We further explore and compare
transition density estimation performance for alternative configurations of the
proposed model.
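The object being estimated throughout is a transition density p(x_t | x_{t-1}) with nonlinear lag dependence. A much simpler frequentist analogue, a kernel conditional density estimate on a synthetic nonlinear AR(1) series, conveys what such an estimate looks like (the series, bandwidth, and function names are illustrative; the paper's approach is a Bayesian Dirichlet process mixture, not a kernel smoother):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonlinear AR(1) series: x_t = sin(x_{t-1}) + noise.
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = np.sin(x[t - 1]) + 0.3 * rng.normal()

def transition_density(x_next, x_prev, series, h=0.2):
    """Kernel estimate of p(x_t = x_next | x_{t-1} = x_prev)."""
    prev, nxt = series[:-1], series[1:]
    w = np.exp(-0.5 * ((prev - x_prev) / h) ** 2)  # weight pairs by lag proximity
    k = np.exp(-0.5 * ((nxt - x_next) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return np.sum(w * k) / np.sum(w)

# Moving from 1.0 toward sin(1.0) should be much more likely than
# moving to a point two units away from the conditional mode.
print(transition_density(np.sin(1.0), 1.0, x))
print(transition_density(np.sin(1.0) + 2.0, 1.0, x))
```

The mixtures-of-experts formulation in the paper replaces the fixed kernel weights with inferred mixture weights that depend on the lagged value, which is what allows lag selection to be posed as an inference problem.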