    SMCTC: Sequential Monte Carlo in C++

    Sequential Monte Carlo methods are a very general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used very widely in the tracking and signal processing literature. Recent developments illustrate that these techniques have much more general applicability, and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as being computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general Sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare event estimation.
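
    A minimal bootstrap particle filter in Python may help make the algorithmic pattern concrete. It targets a hypothetical linear-Gaussian random-walk model and illustrates the propagate-weight-resample structure that SMCTC templates in C++; it is a sketch of the generic algorithm, not the library's API.

        import numpy as np

        def bootstrap_filter(y, n_particles=1000, sigma_x=1.0, sigma_y=1.0, rng=None):
            """Bootstrap filter for x_t = x_{t-1} + N(0, sigma_x^2) and
            y_t = x_t + N(0, sigma_y^2): returns filtering means and an
            estimate of the log marginal likelihood."""
            rng = np.random.default_rng() if rng is None else rng
            x = rng.normal(0.0, sigma_x, n_particles)          # initial particles
            log_lik, means = 0.0, []
            for obs in y:
                x = x + rng.normal(0.0, sigma_x, n_particles)  # propagate
                logw = -0.5 * ((obs - x) / sigma_y) ** 2       # log weights, up to a constant
                w = np.exp(logw - logw.max())
                log_lik += (logw.max() + np.log(w.mean())
                            - 0.5 * np.log(2 * np.pi * sigma_y ** 2))
                w /= w.sum()
                means.append(np.sum(w * x))                    # filtering mean estimate
                x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
            return np.array(means), log_lik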

    Pointwise Convergence in Probability of General Smoothing Splines

    Establishing the convergence of splines can be cast as a variational problem which is amenable to a $\Gamma$-convergence approach. We consider the case in which the regularization coefficient scales with the number of observations, $n$, as $\lambda_n=n^{-p}$. Using standard theorems from the $\Gamma$-convergence literature, we prove that the general spline model is consistent in that estimators converge in a sense slightly weaker than weak convergence in probability for $p\leq \frac{1}{2}$. Without further assumptions we show this rate is sharp. This differs from rates for strong convergence using Hilbert scales where one can often choose $p>\frac{1}{2}$.
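
    For orientation, the variational problems in question are penalized least-squares functionals of roughly the following form (a sketch; the paper's precise functional and penalty may differ):

        \[
          \hat f_n \in \operatorname*{arg\,min}_{f}\;
            \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - f(x_i)\bigr)^2
            + \lambda_n \,\bigl\| f^{(m)} \bigr\|_{L^2}^2,
          \qquad \lambda_n = n^{-p}.
        \]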

    Convergence and Rates for Fixed-Interval Multiple-Track Smoothing Using $k$-Means Type Optimization

    We address the task of estimating multiple trajectories from unlabeled data. This problem arises in many settings: one could think, for example, of the construction of maps of transport networks from passive observation of travellers, or the reconstruction of the behaviour of uncooperative vehicles from external observations. There are two coupled problems. The first is a data association problem: how to map data points onto individual trajectories. The second is, given a solution to the data association problem, to estimate those trajectories. We construct estimators as a solution to a regularized variational problem (to which approximate solutions can be obtained via the simple, efficient and widespread $k$-means method) and show that, as the number of data points, $n$, increases, these estimators exhibit stable behaviour. More precisely, we show that they converge in an appropriate Sobolev space in probability and with rate $n^{-1/2}$.
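
    A minimal Python sketch of the coupled association/estimation alternation may be useful; here polynomial least squares stands in for the paper's regularized spline estimator, and all names and parameters are illustrative:

        import numpy as np

        def fit_tracks(t, y, k, degree=3, n_iter=20, rng=None):
            """Alternate k-means-style between associating each (t, y) point
            with a track and refitting each track; t and y are 1-D arrays."""
            rng = np.random.default_rng() if rng is None else rng
            # Crude initialization: fit each track to a small random subsample.
            m = max(degree + 1, len(t) // (2 * k))
            coefs = [np.polyfit(t[i], y[i], degree)
                     for i in (rng.choice(len(t), m, replace=False) for _ in range(k))]
            labels = np.zeros(len(t), dtype=int)
            for _ in range(n_iter):
                # Association step: each point joins the track with the
                # smallest squared residual at its time stamp.
                resid = np.stack([(y - np.polyval(c, t)) ** 2 for c in coefs])
                labels = resid.argmin(axis=0)
                # Estimation step: refit each track on its assigned points.
                for j in range(k):
                    mask = labels == j
                    if mask.sum() > degree:        # need degree+1 points to fit
                        coefs[j] = np.polyfit(t[mask], y[mask], degree)
            return coefs, labels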

    A Simple Approach to Maximum Intractable Likelihood Estimation

    Approximate Bayesian Computation (ABC) can be viewed as an analytic approximation of an intractable likelihood coupled with an elementary simulation step. Such a view, combined with a suitable instrumental prior distribution, permits maximum-likelihood (or maximum-a-posteriori) inference to be conducted, approximately, using essentially the same techniques. An elementary approach to this problem is developed here: one simply obtains a nonparametric approximation of the likelihood surface, which is then used as a smooth proxy for the likelihood in a subsequent maximisation step, and the convergence of this class of algorithms is characterised theoretically. The use of non-sufficient summary statistics in this context is considered. Applying the proposed method to four problems demonstrates good performance. The proposed approach provides an alternative for approximating the maximum likelihood estimator (MLE) in complex scenarios.
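
    A minimal sketch of the idea under a uniform instrumental prior, where a kernel density estimate over accepted parameters plays the role of the smooth likelihood proxy; the Gaussian toy model, the tolerance and all names here are illustrative rather than the paper's:

        import numpy as np
        from scipy.stats import gaussian_kde

        def abc_mle(obs_stat, simulate_stat, lo, hi, eps=0.1, n_sims=20000, rng=None):
            """Approximate the MLE of a scalar parameter by maximising a KDE
            built from ABC-accepted draws under a Uniform(lo, hi) prior."""
            rng = np.random.default_rng() if rng is None else rng
            theta = rng.uniform(lo, hi, n_sims)                # instrumental prior draws
            stats = simulate_stat(theta, rng)                  # one simulation per draw
            accepted = theta[np.abs(stats - obs_stat) < eps]   # ABC acceptance step
            kde = gaussian_kde(accepted)                       # smooth likelihood proxy
            grid = np.linspace(lo, hi, 1000)
            return grid[kde(grid).argmax()]                    # maximise the proxy

        # Toy example: summary statistic is the mean of 10 N(theta, 1) draws.
        rng = np.random.default_rng(1)
        sim = lambda th, r: th + r.normal(size=th.shape) / np.sqrt(10)
        theta_hat = abc_mle(0.5, sim, lo=-3.0, hi=3.0, rng=rng)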

    The iterated auxiliary particle filter

    We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood $L$ is central to likelihood-based inference for unknown statistical parameters. We define a class of "twisted" models: each member is specified by a sequence of positive functions $\psi$ and has an associated $\psi$-auxiliary particle filter that provides unbiased estimates of $L$. We identify a sequence $\psi^*$ that is optimal in the sense that the $\psi^*$-auxiliary particle filter's estimate of $L$ has zero variance. In practical applications, $\psi^*$ is unknown so the $\psi^*$-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate $\psi^*$, and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.
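
    One standard way of writing such a twisting, sketched here from the abstract's description (the paper's precise construction, e.g. the treatment of the initial distribution and normalising constants, may differ): with transition density $f_t$, observation density $g_t$ and positive functions $\psi_t$,

        \[
          f_t^{\psi}(x_t \mid x_{t-1})
            = \frac{f_t(x_t \mid x_{t-1})\,\psi_t(x_t)}{\tilde\psi_t(x_{t-1})},
          \qquad
          \tilde\psi_t(x_{t-1}) = \int f_t(x_t \mid x_{t-1})\,\psi_t(x_t)\,\mathrm{d}x_t,
        \]
        \[
          g_t^{\psi}(y_t \mid x_t)
            = g_t(y_t \mid x_t)\,\frac{\tilde\psi_{t+1}(x_t)}{\psi_t(x_t)},
        \]

    so that a particle filter run in the twisted model still estimates the same $L$, and taking each $\psi_t$ proportional to the conditional likelihood of the current and future observations given the current state removes all variance.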

    On blocks, tempering and particle MCMC for systems identification

    The widespread use of particle methods for addressing the filtering and smoothing problems in state-space models has, in recent years, been complemented by the development of particle Markov chain Monte Carlo (PMCMC) methods, which use particle filters within offline systems-identification settings. We develop a modified particle filter, based around block sampling and tempering, intended to improve the exploration of the state space and the associated estimation of the marginal likelihood. The aim is to develop particle methods with improved robustness properties, particularly for parameter values which are not able to explain observed data well, for use within PMCMC algorithms. The proposed strategies do not require a substantial analytic understanding of the model structure, unlike most techniques for improving particle-filter performance.
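
    A sketch of the tempering component alone, phrased as a resample-move pass through likelihood powers $0 = \beta_0 < \dots < \beta_M = 1$ within a single assimilation step. It assumes the log-density of the propagated particles (logprior below) is available, which is an idealisation, and it omits the block-sampling component entirely:

        import numpy as np

        def tempered_update(x, logprior, loglike, n_temps=5, step=0.5, rng=None):
            """Move particles x from the predictive density (logprior) towards
            its reweighting by the observation likelihood (loglike) through a
            ladder of tempered intermediate targets."""
            rng = np.random.default_rng() if rng is None else rng
            betas = np.linspace(0.0, 1.0, n_temps + 1)
            for b_prev, b in zip(betas[:-1], betas[1:]):
                # Reweight by the incremental likelihood power, then resample.
                logw = (b - b_prev) * loglike(x)
                w = np.exp(logw - logw.max())
                w /= w.sum()
                x = x[rng.choice(len(x), len(x), p=w)]
                # One random-walk Metropolis move per particle, targeting the
                # tempered density proportional to prior(x) * likelihood(x)**b.
                prop = x + step * rng.normal(size=x.shape)
                log_acc = (logprior(prop) + b * loglike(prop)
                           - logprior(x) - b * loglike(x))
                x = np.where(np.log(rng.uniform(size=x.shape)) < log_acc, prop, x)
            return x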

    Maximum likelihood parameter estimation for latent variable models using sequential Monte Carlo

    We present a sequential Monte Carlo (SMC) method for maximum likelihood (ML) parameter estimation in latent variable models. Standard methods rely on gradient algorithms such as the Expectation-Maximization (EM) algorithm and its Monte Carlo variants. Our approach is different and motivated by considerations similar to those behind simulated annealing (SA); that is, we propose to sample from a sequence of artificial distributions whose support concentrates itself on the set of ML estimates. To achieve this we use SMC methods. We conclude by presenting simulation results on a toy problem and a nonlinear non-Gaussian time series model.
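
    A minimal sketch of the annealing idea for a tractable toy likelihood: particles sample from distributions proportional to $L(\theta)^{\gamma}$ along an increasing ladder of $\gamma$ values and so concentrate on the ML estimate. The intractable latent-variable case treated in the paper requires an additional augmentation step not shown here:

        import numpy as np

        def smc_anneal_mle(loglike, theta0, gammas, step=0.3, rng=None):
            """Anneal particles theta0 through targets L(theta)**gamma; as
            gamma grows the particles concentrate around the ML estimates."""
            rng = np.random.default_rng() if rng is None else rng
            theta, g_prev = theta0.copy(), 0.0
            for g in gammas:
                # Reweight for the increased likelihood power and resample.
                logw = (g - g_prev) * loglike(theta)
                w = np.exp(logw - logw.max())
                w /= w.sum()
                theta = theta[rng.choice(len(theta), len(theta), p=w)]
                # Random-walk Metropolis move targeting L(theta)**g.
                prop = theta + step * rng.normal(size=theta.shape)
                log_acc = g * (loglike(prop) - loglike(theta))
                theta = np.where(np.log(rng.uniform(size=theta.shape)) < log_acc,
                                 prop, theta)
                g_prev = g
            return theta

        # Toy example: a Gaussian log-likelihood with maximiser theta = 2.
        rng = np.random.default_rng(2)
        particles = smc_anneal_mle(lambda th: -0.5 * (th - 2.0) ** 2,
                                   rng.uniform(-5, 5, 1000),
                                   gammas=np.linspace(0.5, 50, 40), rng=rng)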

    Convergence of the $k$-Means Minimization Problem using $\Gamma$-Convergence

    The $k$-means method is an iterative clustering algorithm which associates each observation with one of $k$ clusters. It traditionally employs cluster centers in the same space as the observed data. By relaxing this requirement, it is possible to apply the $k$-means method to infinite dimensional problems, for example multiple target tracking and smoothing problems in the presence of unknown data association. Via a $\Gamma$-convergence argument, the associated optimization problem is shown to converge in the sense that both the $k$-means minimum and minimizers converge in the large data limit to quantities which depend upon the observed data only through its distribution. The theory is supplemented with two examples to demonstrate the range of problems now accessible by the $k$-means method. The first example combines a non-parametric smoothing problem with unknown data association. The second addresses tracking using sparse data from a network of passive sensors.
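
    For concreteness, the quantities in question take roughly the following form (a sketch; in the paper the centers $\mu_j$ may live in a different space from the data $\xi_i$, with $d$ an appropriate dissimilarity):

        \[
          E_n(\mu_1,\dots,\mu_k)
            = \frac{1}{n}\sum_{i=1}^{n} \min_{1\le j\le k} d(\xi_i,\mu_j)^2
          \quad\xrightarrow[n\to\infty]{}\quad
          E_\infty(\mu_1,\dots,\mu_k)
            = \int \min_{1\le j\le k} d(\xi,\mu_j)^2 \, P(\mathrm{d}\xi),
        \]

    with both the minimum and the minimizers converging, and with the distribution $P$ of the observations entering only through the limit functional.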

    Bayesian model comparison for compartmental models with applications in positron emission tomography

    We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models, with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates, which can be considerable in applications such as PET.
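
    For readers unfamiliar with compartmental models, a one-tissue model of the general kind compared in such analyses evolves the tissue concentration as dC_T/dt = K1 Cp(t) - k2 C_T(t); the sketch below solves this by discrete convolution, with a toy plasma input function and illustrative parameter values rather than anything taken from the paper:

        import numpy as np

        def one_tissue_tac(t, cp, K1, k2):
            """Tissue time-activity curve of a one-tissue compartment model:
            C_T(t) = K1 * integral of cp(s) * exp(-k2 (t - s)) ds, computed
            by discrete convolution on a uniform time grid t."""
            dt = t[1] - t[0]
            kernel = K1 * np.exp(-k2 * t)
            return np.convolve(cp, kernel)[: len(t)] * dt

        t = np.linspace(0.0, 60.0, 601)        # minutes, uniform grid
        cp = t * np.exp(-t / 2.0)              # toy plasma input function
        tac = one_tissue_tac(t, cp, K1=0.1, k2=0.05)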