
    Unbiased and Consistent Nested Sampling via Sequential Monte Carlo

    We introduce a new class of sequential Monte Carlo methods called Nested Sampling via Sequential Monte Carlo (NS-SMC), which reframes the Nested Sampling method of Skilling (2006) in terms of sequential Monte Carlo techniques. This new framework allows convergence results to be obtained in the setting where Markov chain Monte Carlo (MCMC) is used to produce new samples. An additional benefit is that marginal likelihood estimates are unbiased. In contrast to NS, the analysis of NS-SMC does not require the (unrealistic) assumption that the simulated samples be independent. As the original NS algorithm is a special case of NS-SMC, this provides insight into why NS seems to produce accurate estimates despite its assumptions typically being violated. For applications of NS-SMC, we give advice on tuning MCMC kernels in an automated manner via a preliminary pilot run, and present a new method for appropriately choosing the number of MCMC repeats at each iteration. Finally, a numerical study compares the performance of NS-SMC and temperature-annealed SMC on several challenging and realistic problems. MATLAB code for our experiments is available at https://github.com/LeahPrice/SMC-NS. Comment: 45 pages; some minor typographical errors fixed since last version.
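    To make the mechanics concrete, here is a minimal sketch of classic nested sampling, the special case that NS-SMC generalizes. It is not the authors' NS-SMC algorithm or their MATLAB code: the names `sample_prior` and `log_likelihood` are placeholders, and new live points are drawn by naive rejection from the prior, which is exactly the step that practical NS performs with MCMC and that NS-SMC puts on a rigorous footing.

```python
import numpy as np

def nested_sampling(sample_prior, log_likelihood, n_live=100, n_iter=1000, rng=None):
    """Minimal nested sampling estimate of the log marginal likelihood.

    sample_prior(n, rng) -> (n, d) array of draws from the prior.
    log_likelihood(x)    -> scalar log-likelihood of a single point x.
    """
    rng = rng or np.random.default_rng()
    live = sample_prior(n_live, rng)
    log_l = np.array([log_likelihood(x) for x in live])
    log_z, log_x = -np.inf, 0.0          # running log-evidence, log prior volume left
    for i in range(n_iter):
        worst = int(np.argmin(log_l))
        # deterministic shrinkage: remaining prior volume X_i ~ exp(-i / n_live)
        log_x_new = -(i + 1) / n_live
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new)) + log_l[worst]
        log_z = np.logaddexp(log_z, log_w)
        log_x = log_x_new
        # replace the worst live point with a prior draw above the threshold;
        # naive rejection here, the step practical NS does with MCMC instead
        while True:
            cand = sample_prior(1, rng)[0]
            cand_ll = log_likelihood(cand)
            if cand_ll > log_l[worst]:
                live[worst], log_l[worst] = cand, cand_ll
                break
    # (the final contribution of the remaining live points is omitted for brevity)
    return log_z
```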

    Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals

    In this article we develop a new sequential Monte Carlo (SMC) method for multilevel (ML) Monte Carlo estimation. In particular, the method can be used to estimate expectations with respect to a target probability distribution over an infinite-dimensional and non-compact space, as given, for example, by a Bayesian inverse problem with a Gaussian random field prior. Under suitable assumptions, the MLSMC method has the optimal $O(\epsilon^{-2})$ bound on the cost to obtain a mean-square error of $O(\epsilon^2)$. The algorithm is accelerated by dimension-independent likelihood-informed (DILI) proposals designed for Gaussian priors, leveraging a novel variation which uses empirical sample covariance information in lieu of Hessian information, hence eliminating the requirement for gradient evaluations. The efficiency of the algorithm is illustrated on two examples: inversion of noisy pressure measurements in a PDE model of Darcy flow to recover the posterior distribution of the permeability field, and inversion of noisy measurements of the solution of an SDE to recover the posterior path measure.
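    The proposal design is the distinctive ingredient here. The sketch below illustrates only the underlying idea of a likelihood-informed move built from an empirical particle covariance rather than Hessian information, in a preconditioned Crank-Nicolson style; the rank-selection rule and all names are our own illustration, not the paper's DILI construction, and a Metropolis-Hastings accept/reject step against the actual posterior would still be required.

```python
import numpy as np

def empirical_li_proposal(x, particles, prior_cov, rho=0.9, rank=10, rng=None):
    """pCN-style proposal whose 'informed' directions come from an
    empirical particle covariance instead of Hessian information.

    x         : (d,) current state.
    particles : (n, d) current particle cloud, used to estimate the
                posterior covariance empirically (no gradients needed).
    prior_cov : (d, d) Gaussian prior covariance.
    Returns a proposal only; an MH correction is still required.
    """
    rng = rng or np.random.default_rng()
    d = x.shape[0]
    # whiten by the prior: in whitened coordinates the prior is N(0, I)
    L = np.linalg.cholesky(prior_cov)
    v = np.linalg.solve(L, x)
    vs = np.linalg.solve(L, particles.T).T
    # directions where the whitened posterior covariance differs most
    # from 1 are the likelihood-informed ones
    evals, evecs = np.linalg.eigh(np.cov(vs, rowvar=False))
    idx = np.argsort(np.abs(evals - 1.0))[::-1][:rank]
    U, lam = evecs[:, idx], np.clip(evals[idx], 1e-6, None)
    # informed block: autoregressive move matched to the empirical scales
    a = U.T @ v
    a_new = rho * a + np.sqrt(1.0 - rho**2) * np.sqrt(lam) * rng.standard_normal(rank)
    # complement: plain pCN, invariant for the whitened prior and hence
    # robust to the dimension
    xi = rng.standard_normal(d)
    xi_perp = xi - U @ (U.T @ xi)
    w_new = rho * (v - U @ a) + np.sqrt(1.0 - rho**2) * xi_perp
    return L @ (U @ a_new + w_new)
```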

    A Multilevel Approach for Stochastic Nonlinear Optimal Control

    We consider a class of finite time horizon nonlinear stochastic optimal control problems, where the control acts additively on the dynamics and the control cost is quadratic. This framework is flexible and has found applications in many domains. Although the optimal control admits a path integral representation for this class of control problems, efficient computation of the associated path integrals remains a challenging Monte Carlo task. The focus of this article is to propose a new Monte Carlo approach that significantly improves upon existing methodology. Our proposed methodology first tackles the issue of exponential growth in variance with the time horizon by casting optimal control estimation as a smoothing problem for a state space model associated with the control problem, and applying smoothing algorithms based on particle Markov chain Monte Carlo. To further reduce computational cost, we then develop a multilevel Monte Carlo method which allows us to obtain an estimator of the optimal control with $\mathcal{O}(\epsilon^2)$ mean squared error at a computational cost of $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$. In contrast, existing methodology requires a computational cost of $\mathcal{O}(\epsilon^{-3})$ to achieve the same mean squared error. Our approach is illustrated on two numerical examples, which validate our theory.
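    The quoted cost improvement rests on the standard multilevel telescoping decomposition $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L}\mathbb{E}[P_l - P_{l-1}]$, estimated with coupled samples at each level. The sketch below shows this generic structure only: in the paper each $P_l$ would be the control functional at discretization level $l$, estimated via the particle smoothing machinery, and `coupled_level` is a placeholder for that coupled estimator.

```python
import numpy as np

def mlmc_estimate(coupled_level, n_samples):
    """Multilevel Monte Carlo telescoping estimator of E[P_L].

    coupled_level(l, n) -> array of n samples of the difference
        P_l - P_{l-1} (with P_{-1} := 0), generated with coupled
        randomness so the variance of the difference decays with l.
    n_samples : samples per level, typically decreasing in l, chosen
        to balance variance against the per-sample cost at each level.
    """
    return sum(np.mean(coupled_level(l, n)) for l, n in enumerate(n_samples))
```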

    An invitation to sequential Monte Carlo samplers

    Sequential Monte Carlo samplers provide consistent approximations of sequences of probability distributions and of their normalizing constants, via particles obtained with a combination of importance weights and Markov transitions. This article presents this class of methods and a number of recent advances, with the goal of helping statisticians assess the applicability and usefulness of these methods for their purposes. Our presentation emphasizes the role of bridging distributions for computational and statistical purposes. Numerical experiments are provided on simple settings such as multivariate Normals, logistic regression and a basic susceptible-infected-recovered model, illustrating the impact of the dimension, the ability to perform inference sequentially, and the estimation of normalizing constants. Comment: review article, 34 pages, 10 figures.
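    As a concrete instance of the reweight/resample/move pattern with a tempering bridge, here is a hedged sketch of a basic SMC sampler. The bridging distributions are $\pi_t(x) \propto \mathrm{prior}(x)\,\mathrm{like}(x)^{\beta_t}$, resampling is plain multinomial, and `mcmc_move` stands in for any user-supplied $\pi_\beta$-invariant kernel; adaptive schedules and lower-variance resampling schemes, as discussed in the article, are omitted.

```python
import numpy as np

def smc_sampler(log_like, sample_prior, mcmc_move, temps, n_particles=1000, rng=None):
    """Tempered SMC sampler bridging the prior to the posterior.

    Bridging distributions: pi_t(x) propto prior(x) * like(x)**beta_t for
    an increasing schedule temps = [0.0, ..., 1.0]. Returns the final
    particle cloud and an estimate of the log normalizing constant.
    mcmc_move(x, beta, rng) must leave pi_beta invariant (e.g. a few
    random-walk Metropolis steps).
    """
    rng = rng or np.random.default_rng()
    x = sample_prior(n_particles, rng)
    ll = np.array([log_like(xi) for xi in x])
    log_z = 0.0
    for b_prev, b in zip(temps[:-1], temps[1:]):
        # reweight: incremental importance weight like(x)**(b - b_prev)
        logw = (b - b_prev) * ll
        log_z += np.logaddexp.reduce(logw) - np.log(n_particles)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # resample (multinomial, for simplicity)
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        # move: rejuvenate particle diversity with an MCMC kernel
        x = np.array([mcmc_move(xi, b, rng) for xi in x])
        ll = np.array([log_like(xi) for xi in x])
    return x, log_z
```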

    Estimation and uncertainty quantification for the output from quantum simulators

    The problem of estimating certain distributions over $\{0,1\}^d$ is considered here. The distribution represents a quantum system of $d$ qubits, where there are non-trivial dependencies between the qubits. A maximum entropy approach is adopted to reconstruct the distribution from exact moments or observed empirical moments. The Robbins-Monro algorithm is used to solve the intractable maximum entropy problem, by constructing an unbiased estimator of the un-normalized target with a sequential Monte Carlo sampler at each iteration. In the case of empirical moments, this coincides with a maximum likelihood estimator. A Bayesian formulation is also considered in order to quantify posterior uncertainty. Several approaches are proposed to tackle this challenging problem, based on recently developed methodologies. In particular, unbiased estimators of the gradient of the log posterior are constructed and used within a provably convergent Langevin-based Markov chain Monte Carlo method. The methods are illustrated on classically simulated output from quantum simulators.
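    The Robbins-Monro step itself is simple once an unbiased moment estimator is available. The sketch below fixes a maximum-entropy exponential family $p_\theta(x) \propto \exp(\theta \cdot \phi(x))$ and treats the SMC-based estimator as a black box; the name `estimate_model_moments` and the step-size rule are our illustration, not the paper's tuned algorithm.

```python
import numpy as np

def robbins_monro_maxent(target_moments, estimate_model_moments, theta0,
                         n_iter=500, a=1.0, rng=None):
    """Robbins-Monro fit of a max-entropy model p_theta(x) propto exp(theta . phi(x)).

    target_moments : vector of exact or empirical moments E[phi(X)].
    estimate_model_moments(theta, rng) -> unbiased Monte Carlo estimate
        of E_theta[phi(X)] (in the paper, obtained via an SMC sampler).
    At the fixed point the model moments match the targets; with
    empirical moments this is the maximum likelihood estimator.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        # noisy ascent direction for the log-likelihood theta.mu - log Z(theta)
        grad = np.asarray(target_moments) - estimate_model_moments(theta, rng)
        theta += (a / k) * grad   # steps sum to infinity, squares are summable
    return theta
```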

    A randomized Multi-index sequential Monte Carlo method

    We consider the problem of estimating expectations with respect to a target distribution with an unknown normalizing constant, and where even the unnormalized target needs to be approximated at finite resolution. Under such an assumption, this work builds upon a recently introduced multi-index Sequential Monte Carlo (SMC) ratio estimator, which provably enjoys the complexity improvements of multi-index Monte Carlo (MIMC) and the efficiency of SMC for inference. The present work leverages a randomization strategy to remove bias entirely, which simplifies estimation substantially, particularly in the MIMC context, where the choice of index set is otherwise important. Under reasonable assumptions, the proposed method provably achieves the same canonical complexity of $\mathcal{O}(\mathrm{MSE}^{-1})$ as the original method, but without discretization bias. It is illustrated on examples of Bayesian inverse problems. Comment: 26 pages, 6 figures.
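    The debiasing idea can be sketched in its simplest, single-index form: draw a random level from a distribution with known mass, and importance-weight the coupled level difference by that mass (a single-term estimator in the style of Rhee and Glynn). The paper randomizes over a multi-index set instead; the geometric level distribution and the name `coupled_diff` below are our own placeholders.

```python
import numpy as np

def single_term_estimator(coupled_diff, p=0.5, rng=None):
    """Unbiased single-term randomized multilevel estimator.

    Draws a random level L with P(L = l) = p * (1 - p)**l and returns
    coupled_diff(L, rng) / P(L = l), which is unbiased for
    sum_l E[P_l - P_{l-1}] = lim_l E[P_l], provided the coupled
    differences decay fast enough that variance and mean cost are finite.
    """
    rng = rng or np.random.default_rng()
    l = int(rng.geometric(p)) - 1           # level in {0, 1, 2, ...}
    return coupled_diff(l, rng) / (p * (1.0 - p) ** l)
```

    Averaging many independent replications of this estimator gives a bias-free approximation whose accuracy is limited only by the Monte Carlo variance, which is the sense in which the randomization removes the discretization bias.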