
    Robust Covariance Adaptation in Adaptive Importance Sampling

    Importance sampling (IS) is a Monte Carlo methodology that allows a target distribution to be approximated using weighted samples generated from another, proposal distribution. Adaptive importance sampling (AIS) implements an iterative version of IS which adapts the parameters of the proposal distribution in order to improve estimation of the target. While adaptation of the location (mean) of the proposals has been widely studied, an important challenge in AIS is the difficulty of adapting the scale parameter (covariance matrix). Under weight degeneracy, adapting the covariance matrix using the empirical covariance yields a singular matrix, which leads to poor performance in subsequent iterations of the algorithm. In this paper, we propose a novel scheme which exploits recent advances in the IS literature to prevent this so-called weight degeneracy. The method efficiently adapts the covariance matrix of a population of proposal distributions and achieves a significant performance improvement in high-dimensional scenarios. We validate the new method through computer simulations.
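
    The weight-degeneracy issue that motivates this paper can be illustrated with a minimal sketch of a generic Gaussian AIS iteration (an illustration only, not the scheme proposed in the paper): the proposal covariance is re-estimated from the weighted samples, and when the effective sample size collapses the weighted empirical covariance becomes nearly singular, so a simple diagonal regularisation is added here as a placeholder safeguard. The function and argument names (ais_iteration, target_logpdf) are illustrative assumptions.

```python
import numpy as np

def ais_iteration(target_logpdf, mean, cov, n_samples, rng):
    """One generic Gaussian AIS iteration: sample, weight, and re-estimate
    the proposal mean and covariance from the weighted samples.
    target_logpdf takes an (n, d) array and returns an (n,) array."""
    d = mean.shape[0]
    samples = rng.multivariate_normal(mean, cov, size=n_samples)

    # Importance weights: target density over Gaussian proposal density.
    diff = samples - mean
    cov_inv = np.linalg.inv(cov)
    log_q = (-0.5 * np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
             - 0.5 * np.linalg.slogdet(2 * np.pi * cov)[1])
    log_w = target_logpdf(samples) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Weighted moment estimates used to adapt the proposal.
    new_mean = w @ samples
    centred = samples - new_mean
    new_cov = (w[:, None] * centred).T @ centred

    # Under weight degeneracy (one weight close to 1), new_cov is nearly
    # singular; a crude safeguard is diagonal regularisation.
    ess = 1.0 / np.sum(w ** 2)
    if ess < d + 1:
        new_cov += 1e-3 * np.eye(d)

    return new_mean, new_cov, samples, w
```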

    Efficient Sequential Monte-Carlo Samplers for Bayesian Inference

    In many problems, complex non-Gaussian and/or nonlinear models are required to accurately describe a physical system of interest. In such cases, Monte Carlo algorithms are remarkably flexible and extremely powerful approaches to solving such inference problems. However, in the presence of a high-dimensional and/or multimodal posterior distribution, it is widely documented that standard Monte Carlo techniques can lead to poor performance. In this paper, we focus on the Sequential Monte Carlo (SMC) sampler framework, a more robust and efficient class of Monte Carlo algorithms. Although this approach has many advantages over traditional Monte Carlo methods, its potential remains largely underexploited in signal processing. In this work, we propose novel strategies that improve the efficiency and facilitate the practical implementation of the SMC sampler for signal processing applications. Firstly, we propose an automatic and adaptive strategy that selects the sequence of distributions within the SMC sampler so as to minimize the asymptotic variance of the estimator of the posterior normalization constant. This is critical for performing model selection in Bayesian signal processing applications. The second contribution improves the global efficiency of the SMC sampler by introducing a novel correction mechanism that allows the use of particles generated throughout all iterations of the algorithm (instead of only those from the last iteration). This is significant because it removes the need to discard a large portion of the samples, as is standard practice in SMC methods. This improves estimation performance in practical settings where the computational budget matters.
    Comment: arXiv admin note: text overlap with arXiv:1303.3123 by other authors.
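
    For orientation, the sketch below shows a generic tempered SMC sampler with an ESS-based rule for choosing the next temperature, which is a common heuristic and not the variance-minimising criterion proposed in the paper; it also keeps only the final-iteration particles, i.e. it does not implement the recycling correction described above. All names (smc_sampler, log_like, ess_frac) are illustrative assumptions.

```python
import numpy as np

def smc_sampler(sample_prior, log_prior, log_like, n_particles, rng,
                ess_frac=0.5, step=0.1):
    """Generic tempered SMC sampler with ESS-based adaptive tempering.
    Returns the final-iteration particles and an estimate of log Z."""
    x = sample_prior(n_particles)              # particles, shape (N, d)
    beta, log_Z = 0.0, 0.0

    while beta < 1.0:
        ll = log_like(x)                       # log-likelihood per particle

        # Bisection: largest temperature increment keeping ESS above target.
        lo, hi = beta, 1.0
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lw = (mid - beta) * ll
            w = np.exp(lw - lw.max()); w /= w.sum()
            if 1.0 / np.sum(w ** 2) < ess_frac * n_particles:
                hi = mid
            else:
                lo = mid
        new_beta = 1.0 if hi > 1.0 - 1e-6 else lo

        # Incremental weights and running log normalising-constant estimate.
        lw = (new_beta - beta) * ll
        log_Z += lw.max() + np.log(np.mean(np.exp(lw - lw.max())))
        w = np.exp(lw - lw.max()); w /= w.sum()

        # Multinomial resampling, then one random-walk Metropolis move
        # targeting the tempered distribution prior(x) * like(x)**new_beta.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
        prop = x + step * rng.standard_normal(x.shape)
        log_acc = (log_prior(prop) + new_beta * log_like(prop)
                   - log_prior(x) - new_beta * log_like(x))
        accept = np.log(rng.uniform(size=n_particles)) < log_acc
        x = np.where(accept[:, None], prop, x)

        beta = new_beta

    return x, log_Z
```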