A class of fast exact Bayesian filters in dynamical models with jumps
In this paper, we focus on the statistical filtering problem in dynamical
models with jumps. When a particular application relies on physical properties
which are modeled by linear and Gaussian probability density functions with
jumps, a usual method consists in approximating the optimal Bayesian estimate
(in the sense of the Minimum Mean Square Error (MMSE)) in a linear and Gaussian
Jump Markov State Space System (JMSS). Practical solutions include algorithms
based on numerical approximations or based on Sequential Monte Carlo (SMC)
methods. In this paper, we propose a class of alternative methods which
consists in building statistical models which share the same physical
properties of interest but in which the computation of the optimal MMSE
estimate can be done at a computational cost which is linear in the number of
observations.
Comment: 21 pages, 7 figures
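As background for the linear Gaussian setting above: conditional on a fixed regime, the exact MMSE estimate is given by the classical Kalman recursion. The following is an illustrative sketch of one predict/update step, not the authors' proposed algorithm; the scalar random-walk example is an assumption for demonstration.

```python
import numpy as np

def kalman_step(m, P, y, F, Q, H, R):
    """One predict/update step of the Kalman filter, the exact MMSE
    recursion available within each regime of a linear Gaussian model."""
    # predict through the state dynamics
    m_pred = F @ m
    P_pred = F @ P @ F.T + Q
    # update with the new observation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# One step of a scalar random-walk model observed with unit noise:
m, P = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([2.0]),
                   F=np.array([[1.0]]), Q=np.array([[0.0]]),
                   H=np.array([[1.0]]), R=np.array([[1.0]]))
# m -> [1.0] (halfway to the observation), P -> [[0.5]]
```

Because each step costs a fixed amount, running such a recursion along a sequence of observations is linear in the number of observations, which is the complexity the abstract targets.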
Approximate Bayesian Computation for a Class of Time Series Models
In the following article we consider approximate Bayesian computation (ABC)
for certain classes of time series models. In particular, we focus upon
scenarios where the likelihoods of the observations and parameter are
intractable, by which we mean that one cannot evaluate the likelihood even
up to a positive unbiased estimate. This paper reviews and develops a class of
approximation procedures based upon the idea of ABC, but, specifically
maintains the probabilistic structure of the original statistical model. This
idea is useful, in that it can facilitate an analysis of the bias of the
approximation and the adaptation of established computational methods for
parameter inference. Several existing results in the literature are surveyed
and novel developments with regard to computation are given.
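For context, the simplest ABC procedure is the rejection sampler, which the model-preserving approximations discussed above refine. A toy sketch follows; the Gaussian example, summary statistic, and tolerance are illustrative choices, not taken from the paper.

```python
import numpy as np

def abc_rejection(observed, prior_sampler, simulator, distance, eps, n_draws):
    """Plain ABC rejection sampling: draw parameters from the prior,
    simulate data, and keep parameters whose simulated data fall
    within tolerance eps of the observed data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if distance(simulator(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a Gaussian mean, comparing sample means.
rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, scale=1.0, size=100)
posterior = abc_rejection(
    observed,
    prior_sampler=lambda: rng.normal(0.0, 5.0),
    simulator=lambda th: rng.normal(th, 1.0, size=100),
    distance=lambda a, b: abs(a.mean() - b.mean()),
    eps=0.1,
    n_draws=5000,
)
```

Note that this only requires the ability to simulate from the model, never to evaluate the likelihood, which is exactly the intractable-likelihood setting the abstract describes.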
Long-term stability of sequential Monte Carlo methods under verifiable conditions
This paper discusses particle filtering in general hidden Markov models
(HMMs) and presents novel theoretical results on the long-term stability of
bootstrap-type particle filters. More specifically, we establish that the
asymptotic variance of the Monte Carlo estimates produced by the bootstrap
filter is uniformly bounded in time. In contrast to most previous results
of this type, which in general presuppose that the state space of the hidden
state process is compact (an assumption that is rarely satisfied in practice),
our very mild assumptions are satisfied for a large class of HMMs with possibly
noncompact state space. In addition, we derive a similar time uniform bound on
the asymptotic error. Importantly, our results hold for
misspecified models; that is, we do not assume that the data entering the
particle filter originate from the model governing the dynamics of the
particles, or even from an HMM at all.
Comment: Published at http://dx.doi.org/10.1214/13-AAP962 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
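A minimal bootstrap-type particle filter of the kind analysed above can be sketched as follows. The toy state-space model is an illustrative assumption, and the data are deliberately generated from a different process than the filter's model, echoing the misspecification point: the algorithm runs regardless.

```python
import numpy as np

def bootstrap_filter(y, n_particles, rng, sigma_x=1.0, sigma_y=1.0):
    """Bootstrap particle filter for a toy linear Gaussian HMM:
    x_t = 0.9 x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).
    Returns estimates of the filtered means E[x_t | y_{1:t}]."""
    x = rng.normal(0.0, 1.0, size=n_particles)   # initial particles
    means = []
    for obs in y:
        # propagate through the state dynamics (the "bootstrap" proposal)
        x = 0.9 * x + rng.normal(0.0, sigma_x, size=n_particles)
        # weight by the observation likelihood
        logw = -0.5 * ((obs - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        # multinomial resampling to fight weight degeneracy
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(means)

rng = np.random.default_rng(1)
# Data NOT generated from the filter's model (misspecified setting):
true_x = np.cumsum(rng.normal(size=50)) * 0.3
y = true_x + rng.normal(size=50)
est = bootstrap_filter(y, n_particles=2000, rng=rng)
```

The stability results above concern precisely such estimates: whether their asymptotic variance stays bounded as the time horizon grows.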
Simulation in Statistics
Simulation has become a standard tool in statistics because it may be the
only tool available for analysing some classes of probabilistic models. We
review in this paper simulation tools that have been specifically derived to
address statistical challenges and, in particular, recent advances in the areas
of adaptive Markov chain Monte Carlo (MCMC) algorithms and approximate
Bayesian computation (ABC) algorithms.
Comment: Draft of an advanced tutorial paper for the Proceedings of the 2011
Winter Simulation Conference
Global consensus Monte Carlo
To conduct Bayesian inference with large data sets, it is often convenient or
necessary to distribute the data across multiple machines. We consider a
likelihood function expressed as a product of terms, each associated with a
subset of the data. Inspired by global variable consensus optimisation, we
introduce an instrumental hierarchical model associating auxiliary statistical
parameters with each term, which are conditionally independent given the
top-level parameters. One of these top-level parameters controls the
unconditional strength of association between the auxiliary parameters. This
model leads to a distributed MCMC algorithm on an extended state space yielding
approximations of posterior expectations. A trade-off between computational
tractability and fidelity to the original model can be controlled by changing
the association strength in the instrumental model. We further propose the use
of an SMC sampler with a sequence of association strengths, allowing both the
automatic determination of appropriate strengths and the application of a bias
correction technique. In contrast to similar distributed Monte Carlo
algorithms, this approach requires few distributional assumptions. The
performance of the algorithms is illustrated with a number of simulated
examples.
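A toy sketch of the instrumental-model idea, assuming a Gaussian mean with unit observation variance so that every conditional is conjugate: each data block gets an auxiliary parameter drawn around the top-level parameter, and blocks touch only their own auxiliary variable. The model, the blocking, and the association strength `lam` are illustrative choices, not the paper's general algorithm.

```python
import numpy as np

def consensus_gibbs(blocks, lam, n_iter, rng, prior_var=100.0):
    """Toy global-consensus-style Gibbs sampler for a Gaussian mean.
    Block j's likelihood is evaluated at an auxiliary parameter
    z_j ~ N(theta, lam) instead of theta itself; smaller lam means
    higher fidelity to the original posterior."""
    J = len(blocks)
    theta = 0.0
    z = np.zeros(J)
    draws = []
    for _ in range(n_iter):
        # z_j | theta, y_j: conjugate Gaussian update (unit obs variance);
        # in a distributed setting each machine does its own update.
        for j, y in enumerate(blocks):
            prec = len(y) + 1.0 / lam
            mean = (y.sum() + theta / lam) / prec
            z[j] = rng.normal(mean, 1.0 / np.sqrt(prec))
        # theta | z: depends on the data only through the z_j.
        prec = J / lam + 1.0 / prior_var
        mean = (z.sum() / lam) / prec
        theta = rng.normal(mean, 1.0 / np.sqrt(prec))
        draws.append(theta)
    return np.array(draws)

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=1000)
blocks = np.array_split(data, 4)      # four "machines"
draws = consensus_gibbs(blocks, lam=1e-3, n_iter=2000, rng=rng)
```

The trade-off the abstract mentions is visible here: a small `lam` makes the approximate posterior close to the exact one but couples `theta` and the `z_j` tightly, which slows mixing.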
Sequential Quasi-Monte Carlo
We derive and study SQMC (Sequential Quasi-Monte Carlo), a class of
algorithms obtained by introducing QMC point sets in particle filtering. SQMC
is related to, and may be seen as an extension of, the array-RQMC algorithm of
L'Ecuyer et al. (2006). The complexity of SQMC is O(N log N), where N is the
number of simulations at each iteration, and its error rate is smaller than
the Monte Carlo rate O(N^{-1/2}). The only requirement to implement SQMC is
the ability to write the simulation of particle X_t^n given X_{t-1}^n as a
deterministic function of X_{t-1}^n and a fixed number of uniform variates.
We show that SQMC is amenable to the same extensions as standard SMC, such as
forward smoothing, backward smoothing, unbiased likelihood evaluation, and so
on. In particular, SQMC may replace SMC within a PMCMC (particle Markov chain
Monte Carlo) algorithm. We establish several convergence results. We provide
numerical evidence that SQMC may significantly outperform SMC in practical
scenarios.
Comment: 55 pages, 10 figures (final version)
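The implementability requirement can be illustrated as follows: write the Markov transition as a deterministic map of the previous state and uniform variates, then substitute low-discrepancy points for pseudo-random uniforms. This sketch deliberately omits the actual SQMC machinery (particle ordering and QMC resampling); the toy autoregressive kernel and the choice of a van der Corput sequence are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy
    sequence (radical-inverse construction)."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            denom *= base
            q, r = divmod(q, base)
            x += r / denom
        seq[i] = x
    return seq

def propagate(x_prev, u):
    """Markov kernel x_t = 0.9 x_{t-1} + N(0, 1), written as a
    deterministic map Gamma(x_prev, u) of the previous state and a
    single uniform variate, via the inverse-CDF transform."""
    return 0.9 * x_prev + NormalDist().inv_cdf(u)

# Drive the same deterministic map with QMC uniforms instead of
# pseudo-random ones:
u_qmc = van_der_corput(128)
x = np.array([propagate(0.0, u) for u in u_qmc])
```

Because the map is deterministic, swapping the source of uniforms changes nothing else in the simulation code, which is what makes the QMC substitution possible.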
Accelerating delayed-acceptance Markov chain Monte Carlo algorithms
Delayed-acceptance Markov chain Monte Carlo (DA-MCMC) samples from a
probability distribution via a two-stage version of the Metropolis-Hastings
algorithm, by combining the target distribution with a "surrogate" (i.e. an
approximate and computationally cheaper version) of said distribution. DA-MCMC
accelerates MCMC sampling in complex applications, while still targeting the
exact distribution. We design a computationally faster, albeit approximate,
DA-MCMC algorithm. We consider parameter inference in a Bayesian setting where
a surrogate likelihood function is introduced in the delayed-acceptance scheme.
When the evaluation of the likelihood function is computationally intensive,
our scheme produces a 2-4 times speed-up, compared to standard DA-MCMC.
However, the acceleration is highly problem dependent. Inference results for
the standard delayed-acceptance algorithm and our approximated version are
similar, indicating that our algorithm can return reliable Bayesian inference.
As a computationally intensive case study, we introduce a novel stochastic
differential equation model for protein folding data.
Comment: 40 pages, 21 figures, 10 tables
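The standard two-stage scheme that this work builds on can be sketched as follows: a cheap surrogate screens proposals, and only survivors pay for an evaluation of the expensive target, with a second-stage correction so the chain still targets the exact distribution. The Gaussian target and surrogate are illustrative stand-ins for an expensive likelihood, not the paper's protein-folding model.

```python
import numpy as np

def delayed_acceptance_mh(log_target, log_surrogate, x0, prop_sd, n_iter, rng):
    """Delayed-acceptance Metropolis-Hastings with a symmetric
    random-walk proposal. Stage 1 accepts/rejects under the surrogate;
    stage 2 corrects for the surrogate error."""
    x = x0
    lt_x, ls_x = log_target(x), log_surrogate(x)
    chain = []
    for _ in range(n_iter):
        x_prop = x + prop_sd * rng.normal()
        ls_prop = log_surrogate(x_prop)
        # Stage 1: cheap screening using the surrogate only.
        if np.log(rng.uniform()) < ls_prop - ls_x:
            lt_prop = log_target(x_prop)   # expensive call, done rarely
            # Stage 2: correct the surrogate ratio so that the chain
            # keeps the exact target distribution invariant.
            if np.log(rng.uniform()) < (lt_prop - lt_x) - (ls_prop - ls_x):
                x, lt_x, ls_x = x_prop, lt_prop, ls_prop
        chain.append(x)
    return np.array(chain)

rng = np.random.default_rng(3)
target = lambda x: -0.5 * (x - 1.0) ** 2      # "expensive" N(1, 1) target
surrogate = lambda x: -0.5 * (x - 1.2) ** 2   # cheap, slightly wrong
chain = delayed_acceptance_mh(target, surrogate, 0.0, 2.0, 20000, rng)
```

The speed-up comes from the fact that `log_target` is only evaluated for proposals that pass stage 1; the closer the surrogate tracks the target, the fewer expensive calls are wasted on doomed proposals.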