MCMC implementation for Bayesian hidden semi-Markov models with illustrative applications
Copyright © Springer 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/s11222-013-9399-z

Hidden Markov models (HMMs) are flexible, well-established models useful in a diverse range of applications.
However, one potential limitation of such models lies in their inability to explicitly structure the holding times of each hidden state. Hidden semi-Markov models (HSMMs) are more useful in this respect, as they incorporate additional temporal structure by modelling the holding times explicitly. However, HSMMs have generally received less attention in the literature, mainly due to their intensive computational requirements. Here a Bayesian implementation of HSMMs is presented. Recursive algorithms are proposed in conjunction with Metropolis-Hastings in such a way as to avoid sampling from the distribution of the hidden state sequence within the MCMC sampler. This provides a computationally tractable estimation framework for HSMMs that avoids the limitations on model flexibility associated with the conventional EM algorithm. Performance of the proposed implementation is demonstrated through simulation experiments as well as an illustrative application relating to recurrent failures in a network of underground water pipes, where random effects are included in the HSMM to allow for pipe heterogeneity.
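The defining feature the abstract highlights — explicit holding-time distributions rather than the geometric sojourns implied by an ordinary HMM — can be illustrated with a short simulation. This is not the paper's sampler; it is a minimal sketch of hidden semi-Markov state dynamics, assuming integer holding times and a transition matrix without self-transitions (all names here are illustrative).

```python
import random

def simulate_hsmm_states(trans, dur_sampler, init, n_steps, rng):
    """Simulate the hidden state sequence of a semi-Markov chain.

    trans:       dict state -> {next_state: probability}; no
                 self-transitions, since sojourns are modelled explicitly.
    dur_sampler: dict state -> zero-argument callable returning a
                 holding time (a positive integer).
    """
    states, s = [], init
    while len(states) < n_steps:
        d = dur_sampler[s]()                    # explicit holding time
        states.extend([s] * d)
        nxt, probs = zip(*trans[s].items())     # choose a different state
        s = rng.choices(nxt, weights=probs)[0]
    return states[:n_steps]

rng = random.Random(0)
trans = {0: {1: 1.0}, 1: {0: 1.0}}              # two states, forced alternation
durations = {0: lambda: 3, 1: lambda: 2}        # fixed sojourns for illustration
path = simulate_hsmm_states(trans, durations, 0, 10, rng)
print(path)  # [0, 0, 0, 1, 1, 0, 0, 0, 1, 1]
```

Replacing the fixed durations with geometric samplers recovers an ordinary HMM, which makes the extra structure of the HSMM easy to see.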
Approximate Bayesian Computation for a Class of Time Series Models
In the following article we consider approximate Bayesian computation (ABC) for certain classes of time series models. In particular, we focus upon scenarios where the likelihood of the observations and parameters is intractable, by which we mean that one cannot evaluate the likelihood even up to a positive unbiased estimate. This paper reviews and develops a class of approximation procedures based upon the idea of ABC that, specifically, maintain the probabilistic structure of the original statistical model. This idea is useful in that it facilitates analysis of the bias of the approximation and the adaptation of established computational methods for parameter inference. Several existing results in the literature are surveyed and novel developments with regard to computation are given.
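The paper above develops structured ABC approximations; the basic idea it builds on is plain rejection ABC, which can be sketched in a few lines. All model choices below (Gaussian simulator, mean summary, tolerance) are illustrative assumptions, not the paper's setup.

```python
import random
import statistics

def abc_rejection(observed, prior_sampler, simulate, distance, eps, n_keep, rng):
    """Plain ABC rejection: keep prior draws whose simulated data lie
    within eps of the observed data under the chosen distance."""
    kept = []
    while len(kept) < n_keep:
        theta = prior_sampler(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            kept.append(theta)
    return kept

rng = random.Random(1)
obs_mean = 2.0                                   # observed summary statistic
prior = lambda r: r.uniform(-5.0, 5.0)           # flat prior on the mean
def simulate(theta, r):                          # summary of 30 N(theta, 1) draws
    return statistics.fmean(r.gauss(theta, 1.0) for _ in range(30))

post = abc_rejection(obs_mean, prior, simulate, lambda a, b: abs(a - b), 0.2, 100, rng)
print(round(statistics.fmean(post), 2))          # posterior mean, close to 2.0
```

The accepted draws approximate the posterior of the mean; shrinking `eps` trades acceptance rate for approximation bias, which is precisely the bias the article analyses.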
Bayesian modelling of recurrent pipe failures in urban water systems using non-homogeneous Poisson processes with latent structure
Recurrent events are very common in a wide range of scientific disciplines. The majority of statistical models developed to characterise recurrent events are derived from either reliability theory or survival analysis. This thesis concentrates on applications that arise from reliability, which in general involve the study of components or devices where the recurring event is failure.
Specifically, interest lies in repairable components that experience a number of failures during their lifetime. The goal is to develop statistical models in order to gain a good understanding of the driving force behind the failures. A particular counting process is adopted, the non-homogeneous Poisson process (NHPP), in which the rate of occurrence of failures depends on time. The primary application considered in the thesis is the prediction of underground water pipe bursts, although the methods described have more general scope.
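An NHPP with a time-dependent rate, as described above, can be simulated exactly by Lewis-Shedler thinning. This is a generic sketch, not the thesis's model: the increasing intensity chosen below is an arbitrary illustrative assumption.

```python
import random

def simulate_nhpp(intensity, t_max, lam_bound, rng):
    """Simulate event times of a non-homogeneous Poisson process on
    (0, t_max] by Lewis-Shedler thinning, given an upper bound
    lam_bound >= intensity(t) for all t in (0, t_max]."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_bound)               # candidate from rate-lam_bound PP
        if t > t_max:
            return times
        if rng.random() < intensity(t) / lam_bound:   # keep with prob intensity(t)/lam_bound
            times.append(t)

rng = random.Random(7)
rate = lambda t: 0.3 * t                              # increasing failure rate
events = simulate_nhpp(rate, 10.0, 3.0, rng)
print(len(events))                                    # expected count is Lambda(10) = 15
```

Thinning keeps each candidate with probability proportional to the current intensity, so the retained points have exactly the target rate of occurrence.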
First, a Bayesian mixed effects NHPP model is developed and applied to a
network of water pipes using MCMC. The model is then extended
to a mixture of NHPPs. Further, a special mixture case, the
zero-inflated NHPP model is developed to cope with data
involving a large number of pipes that have never failed. The
zero-inflated model is applied to the same pipe network.
Quite often, data involving recurrent failures over time are aggregated, where, for instance, the times of failures are unknown and only the total number of failures is available. Aggregated versions of the NHPP model and its zero-inflated variant are developed to accommodate such data, and these are applied to the aggregated version of the earlier data set.
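The key fact behind such aggregated models is standard NHPP theory: the number of events in an interval (a, b] is Poisson with mean Lambda(b) - Lambda(a), where Lambda is the cumulative intensity. A minimal log-likelihood sketch (illustrative, not the thesis's code):

```python
import math

def aggregated_nhpp_loglik(counts, edges, cum_intensity):
    """Log-likelihood of aggregated NHPP data: the count on (a, b] is
    Poisson with mean Lambda(b) - Lambda(a), independently across bins."""
    ll = 0.0
    for n, a, b in zip(counts, edges[:-1], edges[1:]):
        mu = cum_intensity(b) - cum_intensity(a)
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

# sanity check with a constant rate of 2, so Lambda(t) = 2 t
ll = aggregated_nhpp_loglik([2], [0.0, 1.0], lambda t: 2.0 * t)
print(round(ll, 4))  # log(2**2 * exp(-2) / 2!) = log 2 - 2, about -1.3069
```

Because the bin means depend on the NHPP parameters only through Lambda, the same MCMC machinery used for exact failure times carries over to aggregated counts.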
Complex devices in random environments often exhibit what may be termed state changes in their behaviour. These state changes may be caused by unobserved and possibly non-stationary processes such as severe weather changes. A hidden semi-Markov NHPP model is formulated: an NHPP modulated by an unobserved semi-Markov process. An algorithm is developed to evaluate the likelihood of this model, and a Metropolis-Hastings sampler is constructed for parameter estimation. Simulation studies are performed to test the implementation, and finally an illustrative application of the model is presented.
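The modulation idea can be made concrete: given a piecewise-constant hidden state path, the observed intensity is the baseline intensity scaled by a state-specific multiplier. This is a hypothetical parameterisation chosen for illustration, not necessarily the one used in the thesis.

```python
import bisect

def modulated_intensity(t, baseline, change_times, states, multipliers):
    """Intensity of an NHPP modulated by a piecewise-constant hidden
    state path: lambda(t) = multipliers[state at t] * baseline(t).

    change_times: sorted interior times at which the hidden state jumps;
    states:       one state label per segment (len(change_times) + 1 of them).
    """
    k = bisect.bisect_right(change_times, t)   # index of the segment containing t
    return multipliers[states[k]] * baseline(t)

base = lambda t: 0.3 * t                       # illustrative baseline intensity
print(modulated_intensity(1.0, base, [2.0], [0, 1], [1.0, 3.0]))  # 0.3
print(modulated_intensity(3.0, base, [2.0], [0, 1], [1.0, 3.0]))  # 2.7
```

Integrating this intensity over each hidden-state segment is exactly what a likelihood recursion for the hidden semi-Markov NHPP has to do.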
The thesis concludes with a general discussion and a list of possible generalisations and extensions, as well as possible applications other than the ones considered.
Fast MCMC sampling for Markov jump processes and extensions
Markov jump processes (or continuous-time Markov chains) are a simple and
important class of continuous-time dynamical systems. In this paper, we tackle
the problem of simulating from the posterior distribution over paths in these
models, given partial and noisy observations. Our approach is an auxiliary
variable Gibbs sampler, and is based on the idea of uniformization. This sets
up a Markov chain over paths by alternately sampling a finite set of virtual
jump times given the current path and then sampling a new path given the set of
extant and virtual jump times using a standard hidden Markov model forward
filtering-backward sampling algorithm. Our method is exact and does not involve
approximations like time-discretization. We demonstrate how our sampler extends
naturally to MJP-based models like Markov-modulated Poisson processes and
continuous-time Bayesian networks and show significant computational benefits
over state-of-the-art MCMC samplers for these models.

Comment: Accepted at the Journal of Machine Learning Research (JMLR).
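The paper's sampler embeds uniformization inside a Gibbs sweep with forward filtering-backward sampling; the uniformization construction itself is compact enough to sketch. Below, MJP paths are simulated forward by drawing candidate jump times from a rate-omega Poisson process and evolving the state by B = I + Q/omega, with B's self-transitions playing the role of the virtual jumps. This sketch shows only the construction, not the posterior sampler.

```python
import random

def simulate_mjp_uniformized(Q, omega, x0, t_max, rng):
    """Simulate a Markov jump process path by uniformization: candidate
    jump times form a homogeneous Poisson process of rate omega, and the
    state evolves by the discrete kernel B = I + Q/omega.  Self-
    transitions under B are 'virtual' jumps that leave the path
    unchanged.  Requires omega >= max_i |Q[i][i]|."""
    n = len(Q)
    B = [[(1.0 if i == j else 0.0) + Q[i][j] / omega for j in range(n)]
         for i in range(n)]
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.expovariate(omega)
        if t > t_max:
            return path
        nxt = rng.choices(range(n), weights=B[x])[0]
        if nxt != x:                           # a real (non-virtual) jump
            x = nxt
            path.append((t, x))

rng = random.Random(3)
Q = [[-1.0, 1.0], [2.0, -2.0]]                 # generator of a two-state chain
path = simulate_mjp_uniformized(Q, 2.0, 0, 5.0, rng)
print(path[:3])
```

The Gibbs sampler in the paper runs this logic in reverse: it resamples the virtual jump times given the current path, then resamples the path given all candidate times with an HMM forward filtering-backward sampling pass.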
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
increased considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of performance, extensibility, and stability, ForneyLab appears to enjoy an edge relative to its competitors for automated inference in state-space models.

Comment: Accepted for publication in the International Journal of Approximate Reasoning.
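The message-passing inference that ForneyLab automates can be illustrated by hand on the simplest factor graph, a chain, where sum-product messages reduce to HMM filtering. This is plain Python for illustration only; it uses none of ForneyLab's (Julia) API, and the model numbers are arbitrary.

```python
def forward_messages(init, trans, emit, obs):
    """Sum-product (forward) messages on a chain-structured factor graph,
    i.e. HMM filtering, with per-step normalisation."""
    n = len(init)
    msg = [init[s] * emit[s][obs[0]] for s in range(n)]
    z = sum(msg)
    out = [[m / z for m in msg]]
    for o in obs[1:]:
        msg = [emit[s][o] * sum(out[-1][r] * trans[r][s] for r in range(n))
               for s in range(n)]
        z = sum(msg)
        out.append([m / z for m in msg])
    return out

# a sticky two-state chain with informative emissions
init  = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit  = [[0.8, 0.2], [0.2, 0.8]]               # emit[state][symbol]
filt = forward_messages(init, trans, emit, [0, 0, 1])
print(filt[0])  # [0.8, 0.2]
```

A probabilistic programming toolbox derives exactly this kind of recursion (and its variational counterparts) automatically from the model specification, which is the design point of the FFG approach.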