Variational Hamiltonian Monte Carlo via Score Matching
Traditionally, the field of computational Bayesian statistics has been
divided into two main subfields: variational methods and Markov chain Monte
Carlo (MCMC). In recent years, however, several methods have been proposed
based on combining variational Bayesian inference and MCMC simulation in order
to improve their overall accuracy and computational efficiency. This marriage
of fast evaluation and flexible approximation provides a promising means of
designing scalable Bayesian inference methods. In this paper, we explore the
possibility of incorporating variational approximation into a state-of-the-art
MCMC method, Hamiltonian Monte Carlo (HMC), to reduce the required gradient
computation in the simulation of Hamiltonian flow, which is the bottleneck for
many applications of HMC in big data problems. To this end, we use a
free-form approximation induced by a fast and flexible surrogate function
based on single-hidden layer feedforward neural networks. The surrogate
provides sufficiently accurate approximation while allowing for fast
exploration of parameter space, resulting in an efficient approximate inference
algorithm. We demonstrate the advantages of our method on both synthetic and
real data problems.
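The core idea above, driving the leapfrog integrator with a cheap surrogate gradient while keeping the exact density in the Metropolis accept/reject step, can be sketched roughly as follows. This is a minimal illustration under assumed settings, not the authors' algorithm: the Gaussian target, the random-feature least-squares fit of the single-hidden-layer surrogate, and the step sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard 2-D Gaussian, so grad log p(x) = -x.
def log_p(x):
    return -0.5 * x @ x

def true_grad(x):
    return -x

# Hypothetical surrogate: a single-hidden-layer network with random
# hidden weights whose output layer is fitted by least squares to
# gradient evaluations collected at training points.
class SurrogateGrad:
    def __init__(self, X, G, n_hidden=64):
        d = X.shape[1]
        self.W = rng.normal(size=(d, n_hidden))
        self.b = rng.normal(size=n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta, *_ = np.linalg.lstsq(H, G, rcond=None)

    def __call__(self, x):
        return np.tanh(x @ self.W + self.b) @ self.beta

X_train = rng.normal(size=(500, 2))
surrogate = SurrogateGrad(X_train, true_grad(X_train))

def hmc_step(x, grad, eps=0.1, n_leap=20):
    """One HMC transition whose Hamiltonian flow uses `grad`;
    the Metropolis correction uses the exact log density."""
    p0 = rng.normal(size=x.shape)
    x_new, p = x.copy(), p0.copy()
    p = p + 0.5 * eps * grad(x_new)          # half momentum step
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p
        p = p + eps * grad(x_new)
    x_new = x_new + eps * p
    p = p + 0.5 * eps * grad(x_new)          # final half step
    log_a = (log_p(x_new) - 0.5 * p @ p) - (log_p(x) - 0.5 * p0 @ p0)
    return x_new if np.log(rng.random()) < log_a else x

x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, surrogate)
    samples.append(x.copy())
samples = np.array(samples[500:])            # discard burn-in
```

Because the exact log density still enters the accept/reject step, the chain remains correct even when the surrogate gradient is only approximate; the surrogate merely replaces the expensive gradient evaluations inside the flow.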
Stochastic Collapsed Variational Inference for Sequential Data
Stochastic variational inference for collapsed models has recently been
successfully applied to large scale topic modelling. In this paper, we propose
a stochastic collapsed variational inference algorithm in the sequential data
setting. Our algorithm is applicable to both finite hidden Markov models and
hierarchical Dirichlet process hidden Markov models, and to any datasets
generated by emission distributions in the exponential family. Our experimental
results on two discrete datasets show that our inference is both more efficient
and more accurate than its uncollapsed counterpart, stochastic variational
inference.
Comment: NIPS Workshop on Advances in Approximate Bayesian Inference, 201
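For context, the uncollapsed baseline the abstract compares against, stochastic variational inference for an HMM with exponential-family (here categorical) emissions, can be sketched roughly as follows. The toy model, the fixed step size, single-sequence minibatches, and the simplification of learning only the emission parameters (transitions held at their true values) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-state HMM with categorical emissions over 3 symbols.
K, V = 2, 3
A = np.array([[0.9, 0.1], [0.2, 0.8]])            # true transitions
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])  # true emissions
pi = np.array([0.5, 0.5])

def sample_seq(T):
    z, obs = rng.choice(K, p=pi), []
    for _ in range(T):
        obs.append(rng.choice(V, p=B[z]))
        z = rng.choice(K, p=A[z])
    return np.array(obs)

def state_marginals(obs, A, B_hat, pi):
    """Normalized forward-backward state marginals for one sequence."""
    T = len(obs)
    alpha = np.zeros((T, K)); beta = np.ones((T, K))
    alpha[0] = pi * B_hat[:, obs[0]]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * B_hat[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B_hat[:, obs[t+1]] * beta[t+1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Stochastic update of Dirichlet pseudo-counts for the emission rows:
# each minibatch (one sequence) contributes rescaled expected counts.
n_seqs, T, rho = 200, 50, 0.5                     # rho: assumed step size
counts = 1.0 + rng.random((K, V))                 # perturbed Dirichlet prior
for step in range(300):
    obs = sample_seq(T)
    B_hat = counts / counts.sum(axis=1, keepdims=True)
    gamma = state_marginals(obs, A, B_hat, pi)
    stats = np.zeros((K, V))
    for t, o in enumerate(obs):
        stats[:, o] += gamma[t]                   # expected emission counts
    counts = (1 - rho) * counts + rho * (1.0 + n_seqs * stats)

B_est = counts / counts.sum(axis=1, keepdims=True)
```

The collapsed variant proposed in the paper instead marginalizes out the global parameters before deriving the stochastic updates, which is where its efficiency and accuracy gains come from.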
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
grown considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility, and stability, ForneyLab appears to enjoy an edge
over its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate Reasoning
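To make the message-passing paradigm concrete: ForneyLab itself is a Julia toolbox, but the sum-product computation it automates can be shown language-agnostically. The snippet below runs sum-product by hand on a minimal chain-structured factor graph with made-up factor tables (all numbers are assumptions for illustration).

```python
import numpy as np

# Tiny chain-structured factor graph for x1 -> x2 with a noisy
# observation of x2; variables are binary, factors are tables.
prior = np.array([0.6, 0.4])                  # f(x1): prior on x1
trans = np.array([[0.8, 0.2], [0.3, 0.7]])    # f(x1, x2) = p(x2 | x1)
like = np.array([0.9, 0.2])                   # f(x2) = p(y | x2), y observed

# Sum-product messages along the chain:
m_x1 = prior                                  # message from prior factor
m_x2_fwd = m_x1 @ trans                       # message through transition node
m_x2_bwd = like                               # message from observation factor

# Marginal belief at x2 is the normalized product of incoming messages.
belief_x2 = m_x2_fwd * m_x2_bwd
belief_x2 /= belief_x2.sum()
```

A toolbox like ForneyLab derives this entire message schedule automatically from the model specification, which is what makes the design cycle extensible: swapping a factor changes only the local message computations.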
Hierarchically-coupled hidden Markov models for learning kinetic rates from single-molecule data
We address the problem of analyzing sets of noisy time-varying signals that
all report on the same process but confound straightforward analyses due to
complex inter-signal heterogeneities and measurement artifacts. In particular
we consider single-molecule experiments which indirectly measure the distinct
steps in a biomolecular process via observations of noisy time-dependent
signals such as a fluorescence intensity or bead position. Straightforward
hidden Markov model (HMM) analyses attempt to characterize such processes in
terms of a set of conformational states, the transitions that can occur between
these states, and the associated rates at which those transitions occur; but
require ad-hoc post-processing steps to combine multiple signals. Here we
develop a hierarchically coupled HMM that allows experimentalists to deal with
inter-signal variability in a principled and automatic way. Our approach is a
generalized expectation-maximization hyperparameter point-estimation procedure
with variational Bayes at the level of individual time series that learns a
single interpretable representation of the overall data-generating process.
Comment: 9 pages, 5 figures
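The coupled structure described above, per-trace expected statistics feeding a pooled point estimate of shared kinetics, can be sketched minimally as follows. Everything concrete here (Gaussian emissions with known state means, the smoothing strength `kappa`, the use of a single shared transition matrix rather than the paper's full hierarchy) is an assumption for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 2  # hidden conformational states

def forward_backward(obs, A, mu, sigma=0.5):
    """Expected state marginals and transition counts for one
    Gaussian-emission HMM trace (minimal sketch)."""
    T = len(obs)
    E = np.exp(-0.5 * ((obs[:, None] - mu) / sigma) ** 2)  # emission likelihoods
    alpha = np.zeros((T, K)); beta = np.ones((T, K))
    alpha[0] = E[0] / K; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t-1] @ A) * E[t]; alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (E[t+1] * beta[t+1]); beta[t] /= beta[t].sum()
    gamma = alpha * beta; gamma /= gamma.sum(1, keepdims=True)
    xi = np.zeros((K, K))
    for t in range(T - 1):
        x = alpha[t][:, None] * A * (E[t+1] * beta[t+1])[None, :]
        xi += x / x.sum()
    return gamma, xi

# Generate traces that share kinetics but differ in noise realizations.
A_true = np.array([[0.95, 0.05], [0.10, 0.90]])
mu = np.array([0.0, 1.0])                        # known state means (assumed)
def make_trace(T=300):
    z, ys = 0, []
    for _ in range(T):
        ys.append(mu[z] + 0.5 * rng.normal())
        z = rng.choice(K, p=A_true[z])
    return np.array(ys)
traces = [make_trace() for _ in range(5)]

# Generalized EM: per-trace forward-backward responsibilities (E-step)
# feed a pooled, smoothed point estimate of the shared transition
# matrix (M-step); kappa is an assumed coupling/smoothing strength.
A0 = np.full((K, K), 1.0 / K)
kappa = 10.0
for it in range(20):
    pooled = np.zeros((K, K))
    for y in traces:
        _, xi = forward_backward(y, A0, mu)
        pooled += xi
    A0 = pooled + kappa * np.full((K, K), 1.0 / K)
    A0 /= A0.sum(1, keepdims=True)
```

The hierarchical point of the paper is that each trace may deviate from the shared kinetics in its own way; the sketch above shows only the simplest coupling, where all traces pool into one estimate.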