Stochastic Collapsed Variational Inference for Sequential Data
Stochastic variational inference for collapsed models has recently been
successfully applied to large scale topic modelling. In this paper, we propose
a stochastic collapsed variational inference algorithm in the sequential data
setting. Our algorithm is applicable to both finite hidden Markov models and
hierarchical Dirichlet process hidden Markov models, and to any datasets
generated by emission distributions in the exponential family. Our experimental
results on two discrete datasets show that our inference is both more efficient
and more accurate than its uncollapsed version, stochastic variational
inference.
Comment: NIPS Workshop on Advances in Approximate Bayesian Inference, 201
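To make the setting concrete, the following is a minimal numpy sketch of a stochastic collapsed-variational-style update for a finite HMM with discrete emissions, in the spirit of SCVB0-type updates: local state marginals come from forward-backward under the current expected counts, and the global expected counts are then updated by a Robbins-Monro step. This is an illustrative sketch, not the paper's exact algorithm; the dimensions, corpus size, and step-size schedule are all assumptions.

```python
import numpy as np

K, V = 5, 50                       # states, vocabulary size (assumed)
N_SEQS = 10_000                    # corpus size (assumed)
rng = np.random.default_rng(0)

# Global expected counts, initialised to pseudo-counts.
trans_counts = np.ones((K, K))
emit_counts = np.ones((K, V))

def forward_backward(seq, A, B):
    """Scaled forward-backward; returns state and pairwise marginals."""
    T = len(seq)
    alpha = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = B[:, seq[0]] / K            # uniform initial distribution (assumed)
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, seq[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, seq[t + 1]] * beta[t + 1]) / c[t + 1]
    gamma = alpha * beta                   # q_t(k)
    xi = np.zeros((T - 1, K, K))           # q_t(j, k)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, seq[t + 1]] * beta[t + 1]) / c[t + 1]
    return gamma, xi

def scvb_step(seq, step):
    rho = (step + 64.0) ** -0.6            # Robbins-Monro step size (assumed)
    A = trans_counts / trans_counts.sum(1, keepdims=True)
    B = emit_counts / emit_counts.sum(1, keepdims=True)
    gamma, xi = forward_backward(seq, A, B)
    # Rescale one sequence's statistics to a noisy corpus-wide estimate.
    trans_hat = N_SEQS * xi.sum(0)
    emit_hat = np.zeros((K, V))
    np.add.at(emit_hat.T, seq, N_SEQS * gamma)
    trans_counts[:] = (1 - rho) * trans_counts + rho * trans_hat
    emit_counts[:] = (1 - rho) * emit_counts + rho * emit_hat

for step in range(100):                    # toy run on random sequences
    scvb_step(rng.integers(V, size=30), step)
```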
A Nonparametric Bayesian Approach to Uncovering Rat Hippocampal Population Codes During Spatial Navigation
Rodent hippocampal population codes represent important spatial information
about the environment during navigation. Several computational methods have
been developed to uncover the neural representation of spatial topology
embedded in rodent hippocampal ensemble spike activity. Here we extend our
previous work and propose a nonparametric Bayesian approach to infer rat
hippocampal population codes during spatial navigation. To tackle the model
selection problem, we leverage a nonparametric Bayesian model. Specifically, to
analyze rat hippocampal ensemble spiking activity, we apply a hierarchical
Dirichlet process-hidden Markov model (HDP-HMM) using two Bayesian inference
methods, one based on Markov chain Monte Carlo (MCMC) and the other based on
variational Bayes (VB). We demonstrate the effectiveness of our Bayesian
approaches on recordings from a freely-behaving rat navigating in an open field
environment. We find that MCMC-based inference with Hamiltonian Monte Carlo
(HMC) hyperparameter sampling is flexible and efficient, and outperforms VB and
MCMC approaches with hyperparameters set by empirical Bayes.
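For readers unfamiliar with the model, the sketch below draws from the weak-limit (finite-truncation) approximation of the HDP-HMM prior, the standard backbone of blocked Gibbs samplers for this model; the truncation level and concentration parameters are illustrative, and this is not the inference code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, gamma, alpha = 25, 5.0, 10.0     # truncation level, concentrations (assumed)

# Top-level state weights: beta ~ Dirichlet(gamma/L, ..., gamma/L).
beta = rng.dirichlet(np.full(L, gamma / L))

# Each transition row is drawn around the shared weights:
# pi_k ~ Dirichlet(alpha * beta), so all rows favour the same popular states.
pi = np.vstack([rng.dirichlet(alpha * beta) for _ in range(L)])

# Draw a latent state trajectory from the prior.
T = 200
z = np.empty(T, dtype=int)
z[0] = rng.choice(L, p=beta)
for t in range(1, T):
    z[t] = rng.choice(L, p=pi[z[t - 1]])
print(np.unique(z).size, "states visited out of", L)
```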
Composing Deep Learning and Bayesian Nonparametric Methods
Recent progress in Bayesian methods has largely focused on non-conjugate models featuring extensive use of black-box functions: continuous functions implemented with neural networks. Using deep neural networks, Bayesian models can reasonably fit big data while at the same time capturing model uncertainty. This thesis targets a more challenging problem: how do we model general random objects, including discrete ones, using random functions? Our conclusion is that many (discrete) random objects are by nature a composition of Poisson processes and random functions. Thus, all discreteness is handled by the Poisson process, while the random functions capture the remaining complexity of the object. Hence the title: composing deep learning and Bayesian nonparametric methods.
This conclusion is not a conjecture. In special cases such as latent feature models, we can prove this claim by working on infinite-dimensional spaces, which is where Bayesian nonparametrics comes in. Moreover, we assume some regularity conditions on the random objects, such as exchangeability; the representations then emerge, almost magically, from representation theorems. We will see this twice throughout this thesis.
One may ask: when a random object is too simple, such as a non-negative random vector in the case of latent feature models, how can we exploit exchangeability? The answer is to aggregate infinitely many random objects, map them together onto an infinite-dimensional space, and then assume exchangeability on that space. We demonstrate two examples of latent feature models by (1) concatenating them into an infinite sequence (Sections 2 and 3) and (2) stacking them into a 2-D array (Section 4).
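As a concrete instance of the latent feature models discussed here (an editorial illustration, not code from the thesis), the sketch below samples a binary feature matrix from the Indian buffet process, the canonical exchangeable latent feature prior; alpha and n are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 2.0, 10             # IBP concentration and number of rows (assumed)
dish_counts = []               # customers who took each dish (feature)
rows = []

for i in range(n):             # customer i+1 arrives
    row = [rng.random() < c / (i + 1) for c in dish_counts]  # revisit old dishes
    k_new = rng.poisson(alpha / (i + 1))                     # sample new dishes
    row += [True] * k_new
    dish_counts = [c + took for c, took in zip(dish_counts, row)] + [1] * k_new
    rows.append(row)

Z = np.zeros((n, len(dish_counts)), dtype=int)  # pad rows to a binary matrix
for i, row in enumerate(rows):
    Z[i, :len(row)] = row
print(Z)
```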
In addition, we will see that Bayesian nonparametric methods are useful for modelling discrete patterns in time-series data. We showcase two examples: (1) using variance Gamma processes to model change points (Section 5), and (2) using Chinese restaurant processes to model speech with switching speakers (Section 6).
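For example (2), here is a minimal sketch of sequential sampling from a Chinese restaurant process, the prior that yields the unbounded speaker clustering mentioned above; the concentration parameter and number of observations are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 1.0, 100            # concentration, number of observations (assumed)
tables = []                    # tables[k] = customers seated at table k

for i in range(n):
    probs = np.array(tables + [alpha], dtype=float) / (i + alpha)
    k = rng.choice(len(probs), p=probs)
    if k == len(tables):
        tables.append(1)       # open a new table w.p. alpha / (i + alpha)
    else:
        tables[k] += 1         # join table k w.p. n_k / (i + alpha)

print(len(tables), "clusters (speakers) among", n, "observations")
```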
We are also aware that inference can be non-trivial in popular Bayesian nonparametric models. In Section 7, we present a novel solution for online inference in the popular HDP-HMM model.
A nonparametric HMM for genetic imputation and coalescent inference
Genetic sequence data are well described by hidden Markov models (HMMs) in
which latent states correspond to clusters of similar mutation patterns. Theory
from statistical genetics suggests that these HMMs are nonhomogeneous (their
transition probabilities vary along the chromosome) and have large support for
self transitions. We develop a new nonparametric model of genetic sequence
data, based on the hierarchical Dirichlet process, which supports these self
transitions and nonhomogeneity. Our model provides a parameterization of the
genetic process that is more parsimonious than other more general nonparametric
models which have previously been applied to population genetics. We provide
truncation-free MCMC inference for our model using a new auxiliary sampling
scheme for Bayesian nonparametric HMMs. In a series of experiments on male X
chromosome data from the Thousand Genomes Project and also on data simulated
from a population bottleneck, we show the benefits of our model over the popular
finite model fastPHASE, which can itself be seen as a parametric truncation of
our model. We find that the number of HMM states found by our model is
correlated with the time to the most recent common ancestor in population
bottlenecks. This work demonstrates the flexibility of Bayesian nonparametrics
applied to large and complex genetic data.
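To illustrate the "large support for self transitions" the model requires, here is a numpy sketch of a Dirichlet transition prior with boosted self-transition mass, in the style of the sticky HDP-HMM; it is a stand-in rather than the paper's parameterization, which is additionally nonhomogeneous along the chromosome.

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha, kappa = 15, 4.0, 20.0           # sizes and concentrations (assumed)
beta = rng.dirichlet(np.full(L, 1.0))     # shared base weights over states

pi = np.empty((L, L))
for k in range(L):
    conc = alpha * beta                   # shared Dirichlet base measure
    conc[k] += kappa                      # extra mass on the self-transition
    pi[k] = rng.dirichlet(conc)

print("mean self-transition probability:", pi.diagonal().mean())
```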