Bayesian analysis on mixture models, for understanding the process of myosin binding to the thin filament
How myosin gains access to the actin thin filament is not yet fully understood. The process of thin filament activation is explored by developing a new variation of hidden Markov models to extract dynamic information from image data and to establish how many myosins are present in an activated region over time. Hidden Markov models extend mixture models in a way that accommodates spatial data. The novelty lies in encoding the spatial information in the image through contextual constraints of a neighbourhood structure based on three nearest neighbours. Furthermore, for Bayesian inference about the unknown number of components, K, the Metropolis-Hastings algorithm is employed.
The Bayesian analysis shows that, compared to reversible jump Markov chain Monte Carlo, our proposed model provides a better alternative for the finite mixture model at capturing the behaviour of myosin binding to the thin filament. The estimated mean fluorescence intensity values from both models are displayed in separate kymographs, where the variation in light intensity indicates how the myosin binding phenomenon clusters or varies over time.
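The abstract pairs a finite mixture likelihood with Metropolis-Hastings sampling. A minimal sketch of that combination, using an illustrative two-component Gaussian mixture in which only the mixing weight is unknown (the data, component means, prior, and proposal step size are assumptions for illustration, not the paper's setup):

```python
import math
import random

random.seed(0)

# Simulated intensities from a two-component mixture, a stand-in for
# "unbound" vs "bound" fluorescence levels (illustrative, not real data).
data = [random.gauss(0.0, 1.0) if random.random() < 0.7 else random.gauss(4.0, 1.0)
        for _ in range(500)]

def log_lik(w):
    """Log-likelihood of mixing weight w for a fixed-mean, unit-variance mixture."""
    ll = 0.0
    for x in data:
        p = (w * math.exp(-0.5 * x * x) +
             (1 - w) * math.exp(-0.5 * (x - 4.0) ** 2)) / math.sqrt(2 * math.pi)
        ll += math.log(p)
    return ll

def metropolis_hastings(n_iter=2000, step=0.05):
    """Random-walk Metropolis-Hastings under a uniform prior on (0, 1)."""
    w, ll = 0.5, log_lik(0.5)
    samples = []
    for _ in range(n_iter):
        w_new = min(max(w + random.gauss(0.0, step), 1e-6), 1 - 1e-6)
        ll_new = log_lik(w_new)
        if math.log(random.random()) < ll_new - ll:  # accept/reject step
            w, ll = w_new, ll_new
        samples.append(w)
    return samples

samples = metropolis_hastings()
post_mean = sum(samples[500:]) / len(samples[500:])  # discard burn-in
```

The posterior mean of the mixing weight should recover the simulated value of roughly 0.7; sampling over the number of components K itself requires trans-dimensional moves, which is where alternatives to reversible jump, such as the approach described above, come in.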
Introduction to finite mixtures
Mixture models have been around for over 150 years, as an intuitively simple
and practical tool for enriching the collection of probability distributions
available for modelling data. In this chapter we describe the basic ideas of
the subject, present several alternative representations and perspectives on
these models, and discuss some of the elements of inference about the unknowns
in the models. Our focus is on the simplest set-up, of finite mixture models,
but we discuss also how various simplifying assumptions can be relaxed to
generate the rich landscape of modelling and inference ideas traversed in the
rest of this book.
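The two basic representations this chapter opens with, the two-stage generative view (draw a component label, then an observation) and the resulting marginal density (a weighted sum of component densities), can be sketched minimally as follows; the weights and Gaussian component parameters are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)

# A finite mixture: mixing weights plus per-component Gaussian parameters.
weights = [0.3, 0.7]
means   = [-2.0, 3.0]
sds     = [1.0, 0.5]

def sample_mixture(n):
    """Two-stage generative view: draw a component label z, then x | z."""
    draws = []
    for _ in range(n):
        u, z, acc = random.random(), 0, 0.0
        for k, w in enumerate(weights):
            acc += w
            if u < acc:
                z = k
                break
        draws.append(random.gauss(means[z], sds[z]))
    return draws

def mixture_pdf(x):
    """Marginal density: the weighted sum of component densities."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, sds))

xs = sample_mixture(10_000)
mean_est = sum(xs) / len(xs)
# Theoretical mean: 0.3 * (-2.0) + 0.7 * 3.0 = 1.5
```

The empirical mean of the draws converges to the weighted average of the component means, which is the simplest illustration of how mixtures enrich the available distribution family while remaining easy to simulate from.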
Latent tree models
Latent tree models are graphical models defined on trees, in which only a
subset of variables is observed. They were first discussed by Judea Pearl as
tree-decomposable distributions to generalise star-decomposable distributions
such as the latent class model. Latent tree models, or their submodels, are
widely used in: phylogenetic analysis, network tomography, computer vision,
causal modeling, and data clustering. They also contain other well-known
classes of models like hidden Markov models, Brownian motion tree model, the
Ising model on a tree, and many popular models used in phylogenetics. This
article offers a concise introduction to the theory of latent tree models. We
emphasise the role of tree metrics in the structural description of this model
class, in designing learning algorithms, and in understanding fundamental
limits of what can be learned, and when.
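The role of tree metrics emphasised above can be illustrated with the classical four-point condition: for any tree metric, among the three ways of pairing four leaves and summing the two pairwise distances, the two largest sums coincide. A minimal check on an assumed quartet tree ((1,2),(3,4)) with made-up edge lengths:

```python
# Leaf edge lengths and internal edge length m of an assumed quartet
# tree ((1,2),(3,4)); all values are illustrative.
e = {1: 1.0, 2: 2.0, 3: 0.5, 4: 1.5}
m = 3.0

def d(i, j):
    """Path distance between two leaves of the quartet tree."""
    if {i, j} in ({1, 2}, {3, 4}):   # same cherry: path avoids the internal edge
        return e[i] + e[j]
    return e[i] + m + e[j]           # path crosses the internal edge

# The three pairings of the four leaves into two disjoint pairs.
sums = sorted([d(1, 2) + d(3, 4),    # 5.0
               d(1, 3) + d(2, 4),    # 11.0
               d(1, 4) + d(2, 3)])   # 11.0
four_point_ok = abs(sums[2] - sums[1]) < 1e-9
```

The smallest sum identifies the split {1,2} | {3,4}, which is how distance-based learning algorithms recover tree topology from observed dissimilarities.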
A Nonparametric Bayesian Approach to Uncovering Rat Hippocampal Population Codes During Spatial Navigation
Rodent hippocampal population codes represent important spatial information
about the environment during navigation. Several computational methods have
been developed to uncover the neural representation of spatial topology
embedded in rodent hippocampal ensemble spike activity. Here we extend our
previous work and propose a nonparametric Bayesian approach to infer rat
hippocampal population codes during spatial navigation. To tackle the model
selection problem, we leverage a nonparametric Bayesian model. Specifically, to
analyze rat hippocampal ensemble spiking activity, we apply a hierarchical
Dirichlet process-hidden Markov model (HDP-HMM) using two Bayesian inference
methods, one based on Markov chain Monte Carlo (MCMC) and the other based on
variational Bayes (VB). We demonstrate the effectiveness of our Bayesian
approaches on recordings from a freely-behaving rat navigating in an open field
environment. We find that MCMC-based inference with Hamiltonian Monte Carlo
(HMC) hyperparameter sampling is flexible and efficient, and outperforms VB and
MCMC approaches with hyperparameters set by empirical Bayes.
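The HDP-HMM extends the ordinary hidden Markov model, whose core computation is the forward-algorithm likelihood of an observation sequence. A minimal sketch with a toy two-state, two-symbol model (all parameter values here are made up for illustration, not fitted to neural recordings):

```python
def hmm_forward(pi, A, B, obs):
    """Forward algorithm: marginal likelihood P(obs) of a discrete HMM,
    with initial distribution pi, transition matrix A, emission matrix B."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for t in range(1, len(obs)):
        alpha = [B[s][obs[t]] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
    return sum(alpha)

# Toy model: two hidden states, two discrete observation symbols.
pi = [0.6, 0.4]
A  = [[0.9, 0.1],
      [0.2, 0.8]]
B  = [[0.7, 0.3],
      [0.1, 0.9]]
lik = hmm_forward(pi, A, B, [0, 1, 1, 0])  # ≈ 0.0354
```

The nonparametric extension replaces the fixed number of hidden states with a hierarchical Dirichlet process prior, so the state count is inferred rather than chosen in advance, which is the model-selection problem the abstract addresses.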