2,623 research outputs found

    Langevin and Hamiltonian based Sequential MCMC for Efficient Bayesian Filtering in High-dimensional Spaces

    Nonlinear non-Gaussian state-space models arise in numerous applications in statistics and signal processing. In this context, one of the most successful and popular approximation techniques is the Sequential Monte Carlo (SMC) algorithm, also known as particle filtering. Nevertheless, this method tends to be inefficient when applied to high-dimensional problems. In this paper, we focus on another class of sequential inference methods, namely the Sequential Markov Chain Monte Carlo (SMCMC) techniques, which represent a promising alternative to SMC methods. After providing a unifying framework for the class of SMCMC approaches, we propose novel efficient strategies based on the principle of Langevin diffusion and Hamiltonian dynamics in order to cope with the increasing number of high-dimensional applications. Simulation results show that the proposed algorithms achieve significantly better performance than existing algorithms.
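
    As a rough illustration of the kind of gradient-based proposal such a kernel uses, here is a minimal Metropolis-adjusted Langevin (MALA) step in Python/NumPy. The target `log_post`, step size `eps`, and toy Gaussian example are illustrative assumptions, not the paper's SMCMC algorithm.

```python
import numpy as np

def mala_step(x, log_post, grad_log_post, eps, rng):
    """One Metropolis-adjusted Langevin (MALA) move targeting log_post.
    Illustrative of a Langevin-type proposal inside an SMCMC kernel;
    not the paper's exact algorithm."""
    # Langevin drift: shift the proposal mean along the posterior gradient.
    mean_fwd = x + 0.5 * eps**2 * grad_log_post(x)
    prop = mean_fwd + eps * rng.standard_normal(x.shape[0])
    # Mean of the reverse move, needed for the Metropolis-Hastings correction.
    mean_bwd = prop + 0.5 * eps**2 * grad_log_post(prop)
    log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * eps**2)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * eps**2)
    log_alpha = log_post(prop) - log_post(x) + log_q_bwd - log_q_fwd
    return (prop, True) if np.log(rng.uniform()) < log_alpha else (x, False)

# Toy usage: a 50-dimensional standard-Gaussian target (assumed example).
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
for _ in range(200):
    x, accepted = mala_step(x, lambda u: -0.5 * np.sum(u**2),
                            lambda u: -u, eps=0.2, rng=rng)
```

    The gradient information in the drift term is what lets such proposals scale to higher dimensions than a plain random walk, which is the motivation the abstract describes.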

    Evaluating Structural Models for the U.S. Short Rate Using EMM and Particle Filters

    We combine the efficient method of moments with appropriate algorithms from the optimal filtering literature to study a collection of models for the U.S. short rate. Our models include two continuous-time stochastic volatility models and two regime switching models, which provided the best fit in previous work that examined a large collection of models. The continuous-time stochastic volatility models fall into the class of nonlinear, non-Gaussian state space models for which we apply particle filtering and smoothing algorithms. Our results demonstrate the effectiveness of the particle filter for continuous-time processes. Our analysis also provides an alternative and complementary approach to the reprojection technique of Gallant and Tauchen (1998) for studying the dynamics of volatility.
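
    For context on the filtering machinery involved, the sketch below is a minimal bootstrap particle filter for a toy discrete-time stochastic-volatility model in Python. The model, parameter values, and simulated series are assumptions for illustration, not the paper's short-rate specifications or EMM estimates.

```python
import numpy as np

def bootstrap_pf(y, phi=0.95, sigma=0.2, n_particles=1000, seed=0):
    """Bootstrap particle filter for a toy stochastic-volatility model:
        x_t = phi * x_{t-1} + sigma * eta_t,   y_t = exp(x_t / 2) * eps_t.
    Illustrative only; not the paper's continuous-time short-rate models."""
    rng = np.random.default_rng(seed)
    T = len(y)
    # Initialize particles from the stationary distribution of the AR(1) state.
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    filt_mean = np.empty(T)
    loglik = 0.0
    for t in range(T):
        # Propagate particles through the state transition (bootstrap proposal).
        x = phi * x + sigma * rng.standard_normal(n_particles)
        # Weight by the observation density y_t | x_t ~ N(0, exp(x_t)).
        logw = -0.5 * (x + y[t]**2 * np.exp(-x) + np.log(2 * np.pi))
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())
        w /= w.sum()
        filt_mean[t] = np.sum(w * x)
        # Multinomial resampling to avoid weight degeneracy.
        x = rng.choice(x, size=n_particles, p=w)
    return filt_mean, loglik

# Simulated data for a quick check (assumed, not the U.S. short-rate series).
rng = np.random.default_rng(1)
x_true = np.zeros(200)
for t in range(1, 200):
    x_true[t] = 0.95 * x_true[t - 1] + 0.2 * rng.standard_normal()
y = np.exp(x_true / 2) * rng.standard_normal(200)
mean_est, ll = bootstrap_pf(y)
```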

    Inverse Problems and Data Assimilation

    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov Chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that in the limit of an infinite number of samples they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high-dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprise partial and noisy observations of that (possibly stochastic) dynamical system. We will also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
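
    A minimal sketch of the importance-sampling idea mentioned above, applied to a toy Gaussian linear inverse problem where the exact posterior mean is available as a sanity check. The forward map G, dimensions, and noise level are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem (assumed): y = G u + noise, with a standard
# Gaussian prior on u and Gaussian observational noise of std gamma.
d, m = 5, 3
G = rng.standard_normal((m, d))
u_true = rng.standard_normal(d)
gamma = 0.1
y = G @ u_true + gamma * rng.standard_normal(m)

# Self-normalized importance sampling with the prior N(0, I) as proposal:
# the weights are proportional to the likelihood, and the weighted average
# converges to the posterior mean as the number of samples grows.
n = 50_000
samples = rng.standard_normal((n, d))
resid = y - samples @ G.T
logw = -0.5 * np.sum(resid**2, axis=1) / gamma**2
w = np.exp(logw - logw.max())
w /= w.sum()
posterior_mean_is = w @ samples

# The Gaussian linear case also has a closed-form posterior, which serves
# here as a check on the Monte Carlo estimate.
C_post = np.linalg.inv(G.T @ G / gamma**2 + np.eye(d))
posterior_mean_exact = C_post @ (G.T @ y / gamma**2)
print(np.round(posterior_mean_is, 2), np.round(posterior_mean_exact, 2))
```

    The notes' point about computational cost shows up directly here: in higher dimensions the prior rarely lands near the data-consistent region, so the weights degenerate and methods beyond plain importance sampling become necessary.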

    Inverse Modeling for MEG/EEG data

    We provide an overview of the state-of-the-art for mathematical methods that are used to reconstruct brain activity from neurophysiological data. After a brief introduction on the mathematics of the forward problem, we discuss standard and recently proposed regularization methods, as well as Monte Carlo techniques for Bayesian inference. We classify the inverse methods based on the underlying source model, and discuss advantages and disadvantages. Finally we describe an application to the pre-surgical evaluation of epileptic patients.Comment: 15 pages, 1 figur
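
    As one concrete instance of the regularization methods such a review covers, here is a minimal Tikhonov (minimum-norm-type) estimate for a linear forward model y = L x + noise in Python. The lead-field matrix, dimensions, and regularization parameter below are random/assumed placeholders, not a real MEG/EEG forward model.

```python
import numpy as np

def minimum_norm_estimate(L, y, lam):
    """Tikhonov-regularized (minimum-norm-type) source estimate for the
    linear forward model y = L x + noise; a standard baseline among
    regularization approaches. Inputs here are illustrative assumptions."""
    n_sensors = L.shape[0]
    # x_hat = L^T (L L^T + lam I)^{-1} y, i.e. ridge regression on the sources.
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# Assumed toy dimensions: 64 sensors, 500 candidate source locations.
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 500))
x_true = np.zeros(500)
x_true[rng.choice(500, 5, replace=False)] = 1.0   # a few active sources
y = L @ x_true + 0.5 * rng.standard_normal(64)
x_hat = minimum_norm_estimate(L, y, lam=10.0)
```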

    Dynamic Compressive Sensing of Time-Varying Signals via Approximate Message Passing

    In this work the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While a handful of Bayesian dynamic CS algorithms have previously been proposed in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the results of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems. Comment: 32 pages, 7 figures.
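
    For background on the message-passing component, the sketch below is a basic AMP loop with soft thresholding for a static sparse-recovery problem in Python. It is a simplified relative of DCS-AMP that ignores the temporal correlation structure and EM parameter learning described in the abstract; the threshold heuristic and problem sizes are assumptions.

```python
import numpy as np

def soft(u, thresh):
    """Soft-thresholding (shrinkage) denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

def amp_sparse(y, A, n_iter=30, alpha=1.5):
    """Basic AMP for y = A x + noise with a sparse x. A simplified cousin of
    DCS-AMP: no amplitude/support dynamics, no EM-learned parameters; the
    residual-based threshold rule is a common heuristic assumption."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # Effective noise level estimated from the residual.
        tau = alpha * np.sqrt(np.mean(z**2))
        x_new = soft(x + A.T @ z, tau)
        # Onsager correction keeps the effective noise approximately Gaussian.
        onsager = z * (np.count_nonzero(x_new) / m)
        z = y - A @ x_new + onsager
        x = x_new
    return x

# Toy problem (assumed sizes): 250 measurements of a 1000-dim, 30-sparse signal.
rng = np.random.default_rng(0)
m, n, k = 250, 1000, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp_sparse(y, A)
```

    The per-iteration cost is dominated by the two matrix-vector products, which is the sense in which this family of algorithms stays linear in the problem dimensions.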