
    A generalization of the adaptive rejection sampling algorithm

    The original publication is available at www.springerlink.com.
    Rejection sampling is a well-known method to generate random samples from arbitrary target probability distributions. It demands the design of a suitable proposal probability density function (pdf) from which candidate samples can be drawn. These samples are either accepted or rejected depending on a test involving the ratio of the target and proposal densities. The adaptive rejection sampling method is an efficient algorithm for sampling from a log-concave target density that attains high acceptance rates by improving the proposal density whenever a sample is rejected. In this paper we introduce a generalized adaptive rejection sampling procedure that can be applied to a broad class of target probability distributions, possibly non-log-concave and exhibiting multiple modes. The proposed technique yields a sequence of proposal densities that converge toward the target pdf, thus achieving very high acceptance rates. We provide a simple numerical example to illustrate the basic use of the proposed technique, together with a more elaborate positioning application using real data.
    This work has been partially supported by the Ministry of Science and Innovation of Spain (project MONIN, ref. TEC-2006-13514-C02-01/TCM; project DEIPRO, ref. TEC-2009-14504-C02-01; and program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS) and the Autonomous Community of Madrid (project PROMULTIDIS-CM, ref. S-0505/TIC/0233).
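    As an illustration of the accept/reject test described above (this sketches the basic rejection sampler only, not the generalized adaptive construction of the paper), the snippet below draws samples from a bimodal target. The names rejection_sample, target_pdf, proposal_pdf, proposal_sample and the bound M are assumptions made for the example; M must satisfy target_pdf(x) <= M * proposal_pdf(x) for every x.

```python
import numpy as np

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n, rng=None):
    """Draw n samples from target_pdf by accept/reject against M * proposal_pdf.

    Requires target_pdf(x) <= M * proposal_pdf(x) for all x.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    while len(samples) < n:
        x = proposal_sample(rng)                      # candidate from the proposal
        u = rng.uniform()                             # uniform variate for the test
        if u * M * proposal_pdf(x) <= target_pdf(x):  # accept with prob. target/(M*proposal)
            samples.append(x)
    return np.array(samples)

# Example: a bimodal (non-log-concave) target under a wide Gaussian proposal.
if __name__ == "__main__":
    target = lambda x: 0.5 * np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 2) ** 2)
    prop_pdf = lambda x: np.exp(-0.5 * (x / 3) ** 2) / (3 * np.sqrt(2 * np.pi))
    prop_sample = lambda rng: rng.normal(0.0, 3.0)
    xs = rejection_sample(target, prop_sample, prop_pdf, M=12.0, n=5000)
    print(xs.mean(), xs.std())
```

    The adaptive methods discussed in the abstract replace the fixed proposal with one that is refined whenever a candidate is rejected, which is what drives the acceptance rate towards one.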

    Uniform convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models

    Deposited in the arXiv.org repository. Version: arXiv:1603.09005v1 [stat.CO].
    We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the algorithm to the recent sequential Monte Carlo square (SMC2) method, which provides a non-recursive solution to the same problem. In this paper, we investigate the approximation, via the proposed scheme, of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the Lp norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the uniform convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.
    The work of J. Míguez was partially supported by Ministerio de Economía y Competitividad of Spain (project TEC2012-38883-C02-01 COMPREHENSION) and the Office of Naval Research Global (award no. N62909-15-1-2011). Part of this work was carried out while J. M. was a visitor at the Department of Mathematics of Imperial College London, with partial support from an EPSRC Mathematics Platform grant. D. C. and J. M. would also like to acknowledge the support of the Isaac Newton Institute through the program "Monte Carlo Inference for High-Dimensional Statistical Models".
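    The "auxiliary devices" of the second layer are ordinary particle filters that, for one fixed parameter value, return an estimate of the likelihood of the observations. A minimal sketch of such an estimator is given below (a bootstrap filter; the callables transition_sample and log_obs_density, and the AR(1) example, are illustrative assumptions rather than the models used in the paper).

```python
import numpy as np

def pf_loglik(theta, y, transition_sample, log_obs_density, M, rng=None):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) with M particles."""
    rng = np.random.default_rng() if rng is None else rng
    x = transition_sample(theta, None, M, rng)          # draw from the prior at t = 1
    loglik = 0.0
    for t in range(len(y)):
        if t > 0:
            x = transition_sample(theta, x, M, rng)     # propagate the particles
        logw = log_obs_density(theta, y[t], x)          # weight by the observation density
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                  # accumulate log p(y_t | y_{1:t-1}, theta)
        x = x[rng.choice(M, size=M, p=w / w.sum())]     # multinomial resampling
    return loglik

# Example: AR(1) state with Gaussian observations, theta = (a, sigma_x, sigma_y).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, sx, sy, T = 0.9, 0.5, 1.0, 50
    x_true, y = np.zeros(T), np.zeros(T)
    for t in range(T):
        x_true[t] = (a * x_true[t - 1] if t else 0.0) + sx * rng.normal()
        y[t] = x_true[t] + sy * rng.normal()
    trans = lambda th, x, M, r: (th[1] * r.normal(size=M) if x is None
                                 else th[0] * x + th[1] * r.normal(size=M))
    lobs = lambda th, yt, x: -0.5 * ((yt - x) / th[2]) ** 2 - np.log(th[2])
    print(pf_loglik((a, sx, sy), y, trans, lobs, M=500))
```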

    Nested particle filters for online parameter estimation in discrete-time state-space Markov models

    Deposited in the arXiv.org repository. Version: arXiv:1308.1883v5 [stat.CO].
    We address the problem of approximating the posterior probability distribution of the fixed parameters of a state-space dynamical system using a sequential Monte Carlo method. The proposed approach relies on a nested structure that employs two layers of particle filters to approximate the posterior probability measure of the static parameters and the dynamic state variables of the system of interest, in a vein similar to the recent "sequential Monte Carlo square" (SMC2) algorithm. However, unlike the SMC2 scheme, the proposed technique operates in a purely recursive manner. In particular, the computational complexity of the recursive steps of the method introduced herein is constant over time. We analyse the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters computed via the proposed scheme. As a result, we prove, under regularity assumptions, that the approximation errors vanish asymptotically in Lp (p≥1) with convergence rate proportional to 1/√N + 1/√M, where N is the number of Monte Carlo samples in the parameter space and N×M is the number of samples in the state space. This result also holds for the approximation of the joint posterior distribution of the parameters and the state variables. We discuss the relationship between the SMC2 algorithm and the new recursive method and present a simple example in order to illustrate some of the theoretical findings with computer simulations.
    The work of D. Crisan has been partially supported by the EPSRC grant no. EP/N023781/1. The work of J. Míguez was partially supported by the Office of Naval Research Global (award no. N62909-15-1-2011), Ministerio de Economía y Competitividad of Spain (project TEC2015-69868-C2-1-R ADVENTURE) and Ministerio de Educación, Cultura y Deporte of Spain (Programa Nacional de Movilidad de Recursos Humanos PRX12/00690). Part of this work was carried out while J. M. was a visitor at the Department of Mathematics of Imperial College London, with partial support from an EPSRC Mathematics Platform grant. D. C. and J. M. would also like to acknowledge the support of the Isaac Newton Institute through the program "Monte Carlo Inference for High-Dimensional Statistical Models", as well as the constructive comments of an anonymous Reviewer, who helped improve the final version of this manuscript.
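    The recursive character can be seen from the structure of one time step: every parameter particle carries its own set of M state particles, so each update costs O(N×M) regardless of the time index. The sketch below illustrates this structure under assumed callables (jitter, transition_sample, log_obs_density); it is a simplified illustration of a nested particle filter, not the authors' exact algorithm.

```python
import numpy as np

def nested_pf_step(thetas, states, y_t, jitter, transition_sample, log_obs_density, rng):
    """One recursive update of a nested particle filter.

    thetas : (N, d) parameter particles; states : (N, M, ...) state particles,
    one inner particle set per parameter particle. Returns updated (thetas, states).
    """
    N, M = states.shape[0], states.shape[1]
    log_u = np.empty(N)
    for n in range(N):
        thetas[n] = jitter(thetas[n], rng)                     # jitter to avoid parameter degeneracy
        x = transition_sample(thetas[n], states[n], M, rng)    # propagate the inner particles
        logw = log_obs_density(thetas[n], y_t, x)              # inner importance weights
        m = logw.max()
        w = np.exp(logw - m)
        log_u[n] = m + np.log(w.mean())                        # estimate of p(y_t | y_{1:t-1}, theta_n)
        states[n] = x[rng.choice(M, size=M, p=w / w.sum())]    # inner resampling
    u = np.exp(log_u - log_u.max())
    idx = rng.choice(N, size=N, p=u / u.sum())                 # outer resampling of (theta, inner set) pairs
    return thetas[idx], states[idx]
```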

    Convergence rates for optimised adaptive importance samplers

    Adaptive importance samplers are adaptive Monte Carlo algorithms that estimate expectations with respect to a target distribution while adapting themselves to obtain better estimators over a sequence of iterations. Although it is straightforward to show that they have the same O(1/√N) convergence rate as standard importance samplers, where N is the number of Monte Carlo samples, the behaviour of adaptive importance samplers over the number of iterations has been left relatively unexplored. In this work, we investigate an adaptation strategy based on convex optimisation which leads to a class of adaptive importance samplers termed optimised adaptive importance samplers (OAIS). These samplers rely on the iterative minimisation of the χ²-divergence between an exponential family proposal and the target. The analysed algorithms are closely related to the class of adaptive importance samplers which minimise the variance of the weight function. We first prove non-asymptotic error bounds for the mean squared errors (MSEs) of these algorithms, which explicitly depend on the number of iterations and the number of samples together. The non-asymptotic bounds derived in this paper imply that, when the target belongs to the exponential family, the L2 errors of the optimised samplers converge at the optimal O(1/√N) rate, and the rate of convergence in the number of iterations is explicitly provided. When the target does not belong to the exponential family, the rate of convergence is the same but the asymptotic L2 error increases by a factor √ρ⋆ > 1, where ρ⋆ − 1 is the minimum χ²-divergence between the target and an exponential family proposal.
    This work was supported by The Alan Turing Institute for Data Science and AI under EPSRC Grant EP/N510129/1. J.M. acknowledges the support of the Spanish Agencia Estatal de Investigación (awards TEC2015-69868-C2-1-R ADVENTURE and RTI2018-099655-B-I00 CLARA) and the Office of Naval Research (Award No. N00014-19-1-2226).
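    A minimal sketch of the adaptation idea under simplifying assumptions: the proposal is a Gaussian with adaptable mean and fixed covariance, the (unnormalised) target can be evaluated, and ρ(μ) = E_q[(π/q)²] (equal to the χ²-divergence plus a constant) is minimised by stochastic gradient descent using the identity ∇ρ(μ) = −E_q[w(x)² ∇ log q(x)]. The function names, step size and example target are assumptions, not the paper's setup; the gradient estimate is computed only up to an iteration-dependent positive scale that is absorbed into the step size.

```python
import numpy as np

def oais_gaussian_mean(log_target, mu0, sigma, n_iters, N, step, rng=None):
    """Adapt the mean of a Gaussian proposal q = N(mu, sigma^2 I) by SGD on
    rho(mu) = E_q[(pi/q)^2], then return a self-normalised IS estimate of the target mean."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.asarray(mu0, dtype=float)
    d = mu.size
    for _ in range(n_iters):
        x = mu + sigma * rng.normal(size=(N, d))                    # sample from q_mu
        log_q = -0.5 * np.sum(((x - mu) / sigma) ** 2, axis=1) - d * np.log(sigma)
        log_w = log_target(x) - log_q                               # unnormalised log-weights
        w2 = np.exp(2 * (log_w - log_w.max()))                      # stabilised w^2 (up to a positive scale)
        grad = -(w2[:, None] * (x - mu) / sigma ** 2).mean(axis=0)  # estimate of grad rho (up to scale)
        mu = mu - step * grad                                       # gradient step on rho
    x = mu + sigma * rng.normal(size=(N, d))                        # one last IS pass with the adapted proposal
    log_q = -0.5 * np.sum(((x - mu) / sigma) ** 2, axis=1) - d * np.log(sigma)
    w = np.exp(log_target(x) - log_q)
    return (w[:, None] * x).sum(axis=0) / w.sum()

# Example: shifted Gaussian target known up to a constant.
if __name__ == "__main__":
    log_pi = lambda x: -0.5 * np.sum((x - 3.0) ** 2, axis=1)
    print(oais_gaussian_mean(log_pi, mu0=np.zeros(2), sigma=1.5, n_iters=50, N=200, step=0.5))
```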

    Nudging the particle filter

    Deposited in the arXiv.org repository. Version: arXiv:1708.07801v2 [stat.CO].
    We investigate a new sampling scheme to improve the performance of particle filters in scenarios where either (a) there is a significant mismatch between the assumed model dynamics and the actual system producing the available observations, or (b) the system of interest is high dimensional and the posterior probability tends to concentrate in relatively small regions of the state space. The proposed scheme generates nudged particles, i.e., subsets of particles which are deterministically pushed towards specific areas of the state space where the likelihood is expected to be high, an operation known as nudging in the geophysics literature. This is a device that can be plugged into any particle filtering scheme, as it does not involve modifications in the classical algorithmic steps of sampling, computation of weights, and resampling. Since the particles are modified, but the importance weights do not account for this modification, the use of nudging leads to additional bias in the resulting estimators. However, we prove analytically that particle filters equipped with the proposed device still attain asymptotic convergence (with the same error rates as conventional particle methods) as long as the nudged particles are generated according to simple and easy-to-implement rules. Finally, we show numerical results that illustrate the improvement in performance and robustness that can be attained using the proposed scheme. In particular, we show the results of computer experiments involving a misspecified Lorenz 63 model, object tracking with misspecified models, and a high-dimensional Lorenz 96 chaotic model. For the examples we have investigated, the new particle filter outperforms conventional algorithms empirically, while it has only negligible computational overhead.
    This work was partially supported by Ministerio de Economía y Competitividad of Spain (TEC2015-69868-C2-1-R ADVENTURE), the Office of Naval Research Global (N62909-15-1-2011), and the regional government of Madrid (program CASI-CAM-CM S2013/ICE-2845).
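    A sketch of the device in a bootstrap filter: after propagation, a subset of particles is pushed uphill on the log-likelihood (a gradient nudge is one example of the simple rules covered by the analysis), while the importance weights are computed exactly as if no nudging had taken place. The function names below are illustrative assumptions.

```python
import numpy as np

def nudged_bpf_step(x, y_t, transition_sample, log_lik, grad_log_lik, n_nudge, gamma, rng):
    """One step of a bootstrap particle filter with nudging.

    x : (N, d) particles at time t-1. A subset of n_nudge particles is moved
    uphill on log p(y_t | x) before weighting; the weights deliberately ignore
    the nudge (this is the source of the small bias discussed above).
    """
    N = x.shape[0]
    x = transition_sample(x, rng)                        # standard propagation step
    idx = rng.choice(N, size=n_nudge, replace=False)     # particles selected for nudging
    x[idx] = x[idx] + gamma * grad_log_lik(y_t, x[idx])  # push them towards high likelihood
    logw = log_lik(y_t, x)                               # ordinary bootstrap weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return x[rng.choice(N, size=N, p=w)]                 # multinomial resampling
```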

    A comparison of nonlinear population Monte Carlo and particle Markov chain Monte Carlo algorithms for Bayesian inference in stochastic kinetic models

    Deposited in the arXiv.org repository. Version: arXiv:1404.5218v1 [stat.ME].
    In this paper we address the problem of Monte Carlo approximation of posterior probability distributions in stochastic kinetic models (SKMs). SKMs are multivariate Markov jump processes that model the interactions among species in biochemical systems according to a set of uncertain parameters. Markov chain Monte Carlo (MCMC) methods have typically been preferred for this Bayesian inference problem. Specifically, the particle MCMC (pMCMC) method has recently been shown to be an effective, though computationally demanding, method applicable to this problem. Within the pMCMC framework, importance sampling (IS) has been used only as the basis of the sequential Monte Carlo (SMC) approximation of the acceptance ratio in the Metropolis-Hastings kernel. However, the recently proposed nonlinear population Monte Carlo (NPMC) algorithm, based on an iterative IS scheme, has also been shown to be effective as a Bayesian inference tool for low-dimensional (predator-prey) SKMs. In this paper, we provide an extensive performance comparison of pMCMC versus NPMC when applied to the challenging prokaryotic autoregulatory network. We show how the NPMC method can greatly outperform the pMCMC algorithm in this scenario, with an overall moderate computational effort. We complement the numerical comparison of the two techniques with an asymptotic convergence analysis of the nonlinear IS scheme at the core of the proposed method when the importance weights can only be computed approximately.
    E. K. acknowledges the support of Ministerio de Educación of Spain (Programa de Formación de Profesorado Universitario, ref. AP2008-00469). This work has been partially supported by Ministerio de Economía y Competitividad of Spain (program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS and project COMPREHENSION TEC2012-38883-C02-01).
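    The nonlinear IS idea at the core of NPMC can be sketched as an iterative importance sampler in which the raw importance weights are passed through a nonlinear transformation, here clipping at the value of the M_clip-th largest weight, before normalisation and proposal update. The sketch below refits a Gaussian proposal from the weighted sample; the function names are illustrative, and in the SKM setting the weights themselves would only be computed approximately (e.g., via particle filters), which is precisely the situation covered by the convergence analysis mentioned above.

```python
import numpy as np

def npmc(log_target, mu0, cov0, n_iters, N, M_clip, rng=None):
    """Nonlinear population Monte Carlo with clipped importance weights."""
    rng = np.random.default_rng() if rng is None else rng
    mu, cov = np.asarray(mu0, float), np.asarray(cov0, float)
    for _ in range(n_iters):
        x = rng.multivariate_normal(mu, cov, size=N)                # draw the population
        d = x.shape[1]
        diff = x - mu
        log_q = -0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
        log_w = log_target(x) - log_q                               # raw log importance weights
        cap = np.sort(log_w)[-M_clip]                               # M_clip-th largest weight
        log_w = np.minimum(log_w, cap)                              # nonlinear (clipping) transformation
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = w @ x                                                  # refit the Gaussian proposal
        diff = x - mu
        cov = (w[:, None] * diff).T @ diff + 1e-6 * np.eye(d)       # weighted covariance + jitter
    return mu, cov
```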

    Importance sampling with transformed weights

    The importance sampling (IS) method lies at the core of many Monte Carlo-based techniques. IS allows the approximation of a target probability distribution by drawing samples from a proposal (or importance) distribution, different from the target, and computing importance weights (IWs) that account for the discrepancy between these two distributions. The main drawback of IS schemes is the degeneracy of the IWs, which significantly reduces the efficiency of the method. It has recently been proposed to use transformed IWs (TIWs) to alleviate the degeneracy problem in the context of population Monte Carlo, which is an iterative version of IS. However, the effectiveness of this technique for standard IS is yet to be investigated. We numerically assess the performance of IS with TIWs and show that the method can attain robustness to weight degeneracy thanks to a bias/variance trade-off.
    This work was supported by the Ministerio de Economía y Competitividad of Spain (projects TEC2012-38883-C02-01 and TEC2015-69868-C2-1-R) and the Office of Naval Research Global (award no. N62909-15-1-2011).
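    A minimal sketch of a transformed-weight estimator, using clipping as the (illustrative) transformation: the largest raw weights are capped at the k-th largest value before normalisation, trading a small bias for a reduction in variance when the weights are degenerate. The function names and the example target/proposal are assumptions for illustration.

```python
import numpy as np

def is_estimate(f, log_target, sample_q, log_q, N, clip_k=None, rng=None):
    """Self-normalised IS estimate of E_target[f(X)], optionally with clipped weights.

    clip_k=None gives standard IS; clip_k=k caps the weights at the k-th largest value.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = sample_q(N, rng)
    log_w = log_target(x) - log_q(x)
    if clip_k is not None:
        log_w = np.minimum(log_w, np.sort(log_w)[-clip_k])   # transform (clip) the weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return w @ f(x)

# Example: heavy weight degeneracy from a narrow proposal far from the target mean.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    log_pi = lambda x: -0.5 * (x - 4.0) ** 2
    sample = lambda n, r: r.normal(0.0, 1.0, size=n)
    log_q = lambda x: -0.5 * x ** 2
    print(is_estimate(lambda x: x, log_pi, sample, log_q, N=2000, rng=rng))             # standard IWs
    print(is_estimate(lambda x: x, log_pi, sample, log_q, N=2000, clip_k=50, rng=rng))  # TIWs
```

    In this example the proposal is deliberately mismatched with the target, so the raw weights are typically dominated by a handful of samples; the clipped estimator is somewhat biased towards the proposal but tends to be far less erratic across runs, which is the bias/variance trade-off referred to in the abstract.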

    A proof of uniform convergence over time for a distributed particle filter

    Distributed signal processing algorithms have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolic et al. (2005) and prove rigorously that, under certain stability assumptions, its asymptotic convergence is guaranteed uniformly over time, in such a way that approximation errors can be kept bounded with a fixed computational budget. To illustrate the theoretical findings, we carry out computer simulations for a target tracking problem. The numerical results show that the distributed PF has a negligible performance loss (compared to a centralized filter) for this problem and enable us to empirically validate the key assumptions of the analysis.
    This work was supported by Ministerio de Economía y Competitividad of Spain (project COMPREHENSION TEC2012-38883-C02-01), Comunidad de Madrid (project CASI-CAM-CM S2013/ICE-2845) and the Office of Naval Research Global (award no. N62909-15-1-2011).
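    The flavour of such a scheme can be sketched as follows: the particles are split into K groups that propagate, weight and resample locally and independently, while each group keeps an aggregate weight used to combine the local estimators into a global one; particles are occasionally exchanged between groups to keep the aggregate weights balanced. This is a heavily simplified illustration with assumed function names, not the exact scheme of Bolic et al. (2005) analysed in the paper.

```python
import numpy as np

def distributed_pf_step(x, logW, y_t, transition_sample, log_lik, exchange, rng):
    """One step of a simplified distributed PF with K groups and local resampling.

    x : (K, M, d) particles (K groups of M particles), logW : (K,) log aggregate group weights.
    """
    K, M = x.shape[0], x.shape[1]
    estimates = np.zeros((K,) + x.shape[2:])
    for k in range(K):
        x[k] = transition_sample(x[k], rng)                  # local propagation
        logw = log_lik(y_t, x[k])
        m = logw.max()
        w = np.exp(logw - m)
        logW[k] += m + np.log(w.mean())                      # update the (log) aggregate group weight
        w /= w.sum()
        estimates[k] = w @ x[k]                              # local (within-group) estimator
        x[k] = x[k][rng.choice(M, size=M, p=w)]              # local resampling only
    logW -= logW.max()                                       # renormalise for numerical stability
    W = np.exp(logW)
    W /= W.sum()
    global_estimate = np.einsum('k,k...->...', W, estimates)  # weight-allocated combination
    if exchange:
        # Crude particle exchange: a random reshuffle across groups. In the actual DRNA
        # scheme only subsets of particles (and portions of weight) are exchanged between
        # neighbouring processing elements.
        perm = rng.permutation(K * M)
        x = x.reshape(K * M, -1)[perm].reshape(K, M, -1)
    return x, logW, global_estimate
```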

    On the use of the channel second-order statistics in MMSE receivers for time- and frequency-selective MIMO transmission systems

    Equalization of unknown frequency- and time-selective multiple-input multiple-output (MIMO) channels is often carried out by means of decision feedback receivers. These consist of a channel estimator and a linear filter (for the estimation of the transmitted symbols), interconnected by a feedback loop through a symbol-wise threshold detector. The linear filter is often a minimum mean square error (MMSE) filter, and its mathematical expression involves second-order statistics (SOS) of the channel, which are usually ignored by simply assuming that the channel is a known (deterministic) parameter given by an estimate thereof. This appears to be suboptimal, and in this work we investigate the kind of performance gains that can be expected when the MMSE equalizer is obtained using SOS of the channel process. As a result, we demonstrate that improvements of several dBs in the signal-to-noise ratio needed to achieve a prescribed symbol error rate are possible.
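    To illustrate the distinction in a much simpler setting (flat fading, a single block, Gaussian statistics assumed for the example): for y = H s + n with a channel estimate H_bar and i.i.d. estimation-error variance sigma_e2, the MMSE filter that uses the channel SOS keeps the error-covariance term, while setting sigma_e2 = 0 recovers the usual filter that treats the estimate as the true channel. This is only a sketch of the principle, not the decision-feedback receiver of the paper.

```python
import numpy as np

def mmse_equalizer(H_bar, sigma_s2, sigma_n2, sigma_e2=0.0):
    """MMSE filter W such that s_hat = W @ y, for y = H s + n with channel
    estimate H_bar and i.i.d. channel-estimation-error variance sigma_e2.

    sigma_e2 = 0 recovers the usual 'channel treated as known' MMSE filter;
    sigma_e2 > 0 incorporates the channel second-order statistics.
    """
    Nr, Nt = H_bar.shape
    C_sy = sigma_s2 * H_bar.conj().T                                   # cross-covariance E[s y^H]
    C_y = (sigma_s2 * (H_bar @ H_bar.conj().T + Nt * sigma_e2 * np.eye(Nr))
           + sigma_n2 * np.eye(Nr))                                    # covariance E[y y^H]
    return C_sy @ np.linalg.inv(C_y)

# Example: compare symbol MSE with and without the error-covariance term.
if __name__ == "__main__":
    rng = np.random.default_rng(2)
    Nr, Nt, sigma_s2, sigma_n2, sigma_e2 = 4, 4, 1.0, 0.1, 0.2
    mse = {"with SOS": 0.0, "without SOS": 0.0}
    for _ in range(2000):
        H_bar = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
        H = H_bar + np.sqrt(sigma_e2 / 2) * (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt)))
        s = np.sign(rng.normal(size=Nt)) + 0j                          # BPSK symbols
        y = H @ s + np.sqrt(sigma_n2 / 2) * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))
        for label, se2 in (("with SOS", sigma_e2), ("without SOS", 0.0)):
            mse[label] += np.mean(np.abs(mmse_equalizer(H_bar, sigma_s2, sigma_n2, se2) @ y - s) ** 2)
    print({k: v / 2000 for k, v in mse.items()})
```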

    On the Generalized Ratio of Uniforms as a Combination of Transformed Rejection and Extended Inverse of Density Sampling

    Deposited in the arXiv.org repository. Version: arXiv:1205.0482v6 [stat.CO].
    In this work we investigate the relationship among three classical sampling techniques: the inverse of density (Khintchine's theorem), the transformed rejection (TR) and the generalized ratio of uniforms (GRoU). Given a monotonic probability density function (PDF), we show that the transformed area obtained using the generalized ratio of uniforms method can be found equivalently by applying the transformed rejection sampling approach to the inverse function of the target density. Then we provide an extension of the classical inverse-of-density idea, showing that it is completely equivalent to the GRoU method for monotonic densities. Although we concentrate on monotonic probability density functions (PDFs), we also discuss how the results presented here can be extended to any non-monotonic PDF that can be decomposed into a collection of intervals where it is monotonically increasing or decreasing. In this general case, we show the connections of the GRoU technique with transformations of certain random variables and with the generalized inverse PDF. Finally, we also introduce a GRoU technique to handle unbounded target densities.
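    For reference, a minimal sketch of the classical ratio-of-uniforms construction that the GRoU method generalises: if (u, v) is uniform on the region A = {(u, v) : 0 < u ≤ sqrt(p(v/u))}, then x = v/u has density proportional to p. In practice A is enclosed in a bounding rectangle and sampled by rejection. The bounds in the example follow the standard formulas for an unnormalised Gaussian target; everything here is illustrative rather than taken from the paper.

```python
import numpy as np

def ratio_of_uniforms(p, u_max, v_min, v_max, n, rng=None):
    """Sample from the density proportional to p via the classical ratio-of-uniforms method.

    Requires u_max >= sup sqrt(p(x)), v_min <= inf x*sqrt(p(x)), v_max >= sup x*sqrt(p(x)).
    """
    rng = np.random.default_rng() if rng is None else rng
    out = []
    while len(out) < n:
        u = rng.uniform(0.0, u_max)
        v = rng.uniform(v_min, v_max)
        if u > 0 and u * u <= p(v / u):          # accept iff (u, v) falls inside the region A
            out.append(v / u)
    return np.array(out)

# Example: standard Gaussian (unnormalised), for which u_max = 1 and |v| <= sqrt(2/e).
if __name__ == "__main__":
    p = lambda x: np.exp(-0.5 * x * x)
    b = np.sqrt(2.0 / np.e)
    xs = ratio_of_uniforms(p, u_max=1.0, v_min=-b, v_max=b, n=5000)
    print(xs.mean(), xs.std())
```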