    Particle approximations of the score and observed information matrix for parameter estimation in state space models with linear computational cost

    Poyiadjis et al. (2011) show how particle methods can be used to estimate both the score and the observed information matrix for state space models. These methods either suffer from a computational cost that is quadratic in the number of particles, or produce estimates whose variance increases quadratically with the amount of data. This paper introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles. The method is derived using a combination of kernel density estimation, to avoid the particle degeneracy that causes the quadratically increasing variance, and Rao-Blackwellisation. Crucially, we show the method is robust to the choice of bandwidth within the kernel density estimation, as it has good asymptotic properties regardless of this choice. Our estimates of the score and observed information matrix can be used within both online and batch procedures for estimating parameters of state space models. Empirical results show improved parameter estimates compared to existing methods at a significantly reduced computational cost. Supplementary materials, including code, are available. Comment: Accepted to the Journal of Computational and Graphical Statistics.
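
    For orientation, the sketch below shows the O(N) path-based score estimator obtained from Fisher's identity, i.e. the linear-cost baseline whose variance grows quadratically in time and which the paper's kernel-density-plus-Rao-Blackwellisation step improves on. The AR(1)-plus-noise model, parameter names, and particle count are illustrative assumptions, not the paper's actual algorithm.

        import numpy as np

        def path_score_estimate(y, phi, sv, sw, n_particles=2000, rng=None):
            # Toy model (assumed): x_t = phi*x_{t-1} + N(0, sv^2), y_t = x_t + N(0, sw^2).
            rng = np.random.default_rng() if rng is None else rng
            N = n_particles
            x = rng.normal(0.0, sv, size=N)           # initial particles
            alpha = np.zeros(N)                       # per-path accumulated d/dphi log-density
            score = 0.0
            for y_t in y:
                x_prev = x
                x = phi * x_prev + sv * rng.normal(size=N)            # propagate
                alpha = alpha + (x - phi * x_prev) * x_prev / sv**2   # transition gradient
                logw = -0.5 * ((y_t - x) / sw) ** 2                   # observation log-weight
                W = np.exp(logw - logw.max())
                W /= W.sum()
                score = float(np.sum(W * alpha))      # Fisher-identity score estimate
                idx = rng.choice(N, size=N, p=W)      # multinomial resampling
                x, alpha = x[idx], alpha[idx]
            return score                              # estimate of d/dphi log p(y_{1:T}; phi)

    Because each resampling step copies whole paths, the alpha values degenerate over time, which is exactly the quadratic variance growth the abstract refers to.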

    A particle filtering approach for joint detection/estimation of multipath effects on GPS measurements

    Multipath propagation causes major impairments to Global Positioning System (GPS) based navigation. Multipath results in biased GPS measurements, and hence inaccurate position estimates. In this work, multipath effects are treated as abrupt changes affecting the navigation system. A multiple-model formulation is proposed whereby the changes are represented by a discrete-valued process. The detection of the errors induced by multipath is handled by a Rao-Blackwellized particle filter (RBPF). The RBPF estimates the indicator process jointly with the navigation states and multipath biases. A key advantage of this approach is its ability to integrate a priori constraints about the propagation environment. Detection is improved by using information from near-future GPS measurements at the particle filter (PF) sampling step: a computationally modest delayed sampling scheme is developed, based on a minimum-duration assumption for multipath effects. Finally, the standard PF resampling stage is modified to include a hypothesis-test-based decision step.
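
    To make the Rao-Blackwellisation concrete, here is a minimal RBPF in the same spirit: the discrete multipath indicator is sampled with particles while the conditionally linear-Gaussian bias is handled by a per-particle Kalman filter. The scalar observation model, transition probability, and noise levels below are illustrative assumptions, not the paper's GPS measurement model, and the delayed-sampling and hypothesis-test refinements are omitted.

        import numpy as np

        def rbpf_multipath(y, p_stay=0.95, q=0.01, r=0.25, n_particles=500, rng=None):
            # Assumed toy model: y_t = c_t * b_t + N(0, r), where the indicator c_t is a
            # two-state Markov chain (multipath on/off) and the bias b_t is a Gaussian
            # random walk with variance q; y_t plays the role of a measurement residual.
            rng = np.random.default_rng() if rng is None else rng
            N = n_particles
            c = np.zeros(N, dtype=int)                 # sampled indicator particles
            m = np.zeros(N)                            # per-particle Kalman mean of the bias
            P = np.ones(N)                             # per-particle Kalman variance
            prob_mp = []
            for y_t in y:
                flip = rng.random(N) >= p_stay
                c = np.where(flip, 1 - c, c)           # Markov transition of the indicator
                P = P + q                              # Kalman prediction for the bias
                H = c.astype(float)                    # observation "matrix" given c_t
                S = H * P * H + r                      # innovation variance
                logw = -0.5 * np.log(2 * np.pi * S) - 0.5 * (y_t - H * m) ** 2 / S
                w = np.exp(logw - logw.max())
                w /= w.sum()
                prob_mp.append(float(np.sum(w * c)))   # P(multipath at t | y_{1:t})
                K = P * H / S                          # Kalman gain
                m = m + K * (y_t - H * m)              # per-particle Kalman update
                P = (1.0 - K * H) * P
                idx = rng.choice(N, size=N, p=w)       # resample
                c, m, P = c[idx], m[idx], P[idx]
            return np.array(prob_mp)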

    Approximate Bayesian Computation for a Class of Time Series Models

    In the following article we consider approximate Bayesian computation (ABC) for certain classes of time series models. In particular, we focus on scenarios where the likelihood of the observations and parameters is intractable, by which we mean that one cannot evaluate the likelihood even up to a positive unbiased estimate. This paper reviews and develops a class of approximation procedures based upon the idea of ABC that, importantly, maintains the probabilistic structure of the original statistical model. This is useful in that it facilitates an analysis of the bias of the approximation and the adaptation of established computational methods for parameter inference. Several existing results in the literature are surveyed, and novel developments with regard to computation are given.
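
    For readers new to ABC, a plain rejection sampler conveys the basic idea the article builds on; the article's contribution, preserving the model's probabilistic structure, goes beyond this sketch. The moving-average toy model, summary statistics, and tolerance are illustrative assumptions.

        import numpy as np

        def abc_rejection(y_obs, simulate, prior_sample, summary, eps, n_draws=20000, rng=None):
            # Keep prior draws whose simulated series matches the data to within eps
            # in summary-statistic distance.
            rng = np.random.default_rng() if rng is None else rng
            s_obs = summary(y_obs)
            accepted = []
            for _ in range(n_draws):
                theta = prior_sample(rng)
                s_sim = summary(simulate(theta, len(y_obs), rng))
                if np.linalg.norm(s_sim - s_obs) < eps:
                    accepted.append(theta)
            return np.array(accepted)

        # Illustrative awkward-likelihood toy: an MA(1)-type series with
        # heavy-tailed (Student-t) innovations.
        def simulate(theta, T, rng):
            e = rng.standard_t(df=3, size=T + 1)
            return e[1:] + theta * e[:-1]

        summary = lambda y: np.array([np.var(y), np.mean(y[1:] * y[:-1])])
        y_obs = simulate(0.6, 200, np.random.default_rng(0))
        draws = abc_rejection(y_obs, simulate, lambda rng: rng.uniform(-1.0, 1.0),
                              summary, eps=0.3)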

    Methods Studies on System Identification from Transient Rotor Tests

    Some of the more important methods that have been used or proposed for aircraft parameter identification are discussed. The methods fall into two groups: equation-error (regression) estimates, and Bayesian estimates and their derivatives, which are based on probabilistic concepts. In both groups the cost function can be optimized either globally, over the entire time span of the transient, or sequentially, which leads to the formulation of optimum filters. Identifiability problems and the validation of the estimates are briefly outlined, and applications to lifting rotors are discussed.
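
    A minimal illustration of the equation-error (regression) idea: differentiate the measured transient numerically and regress the derivative on the states and inputs. The scalar single-state model below is an illustrative assumption; rotor and aircraft applications involve multivariate state equations.

        import numpy as np

        def equation_error_fit(x, u, dt):
            # Fit xdot = a*x + b*u by ordinary least squares on the measured transient.
            xdot = np.gradient(x, dt)               # numerical differentiation of the record
            X = np.column_stack([x, u])             # regressor matrix
            coef, *_ = np.linalg.lstsq(X, xdot, rcond=None)
            return coef                             # [a_hat, b_hat]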

    Marginal likelihoods in phylogenetics: a review of methods and applications

    By providing a framework for accounting for the shared ancestry inherent to all life, phylogenetics is becoming the statistical foundation of biology. The importance of model choice continues to grow as phylogenetic models increase in complexity to better capture micro- and macroevolutionary processes. In a Bayesian framework, the marginal likelihood is how data update our prior beliefs about models, which gives us an intuitive measure for comparing model fit that is grounded in probability theory. Given the rapid increase in the number and complexity of phylogenetic models, methods for approximating marginal likelihoods are increasingly important. Here we aim to provide an intuitive description of marginal likelihoods and why they are important in Bayesian model testing. We also categorize and review methods for estimating the marginal likelihoods of phylogenetic models, highlighting several recent methods that provide well-behaved estimates. Furthermore, we review some empirical studies that demonstrate how marginal likelihoods can be used to learn about models of evolution from biological data. We discuss promising alternatives that can complement marginal likelihoods for Bayesian model choice, including posterior-predictive methods. Using simulations, we find one alternative method based on approximate Bayesian computation (ABC) to be biased. We conclude by discussing the challenges of Bayesian model choice and future directions that promise to improve the approximation of marginal likelihoods and Bayesian phylogenetics as a whole. Comment: 33 pages, 3 figures.
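
    As a concrete instance of one well-behaved estimator class covered by such reviews, the sketch below implements stepping-stone sampling on a conjugate normal-normal toy model, where each power posterior can be sampled exactly. Phylogenetic implementations instead run MCMC at every "stone"; the model, temperature ladder, and sample sizes here are illustrative assumptions.

        import numpy as np

        def stepping_stone_logZ(y, sigma=1.0, tau=2.0, n_stones=20, n_samp=5000, rng=None):
            # Conjugate toy model: theta ~ N(0, tau^2), y_i | theta ~ N(theta, sigma^2),
            # so every power posterior is normal and can be sampled exactly.
            rng = np.random.default_rng() if rng is None else rng
            y = np.asarray(y, dtype=float)
            n, s = len(y), float(np.sum(y))
            betas = (np.arange(n_stones + 1) / n_stones) ** 3     # temperature ladder
            logZ = 0.0
            for bk, bk1 in zip(betas[:-1], betas[1:]):
                prec = 1.0 / tau**2 + bk * n / sigma**2           # power-posterior precision
                theta = rng.normal(bk * s / sigma**2 / prec, np.sqrt(1.0 / prec), n_samp)
                loglik = (-0.5 * n * np.log(2 * np.pi * sigma**2)
                          - 0.5 * np.sum((y[:, None] - theta) ** 2, axis=0) / sigma**2)
                logr = (bk1 - bk) * loglik                        # consecutive-stone log-ratio
                logZ += np.logaddexp.reduce(logr) - np.log(n_samp)
            return logZ                                           # estimate of log p(y)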

    Resampling: an improvement of Importance Sampling in varying population size models

    Sequential importance sampling algorithms have been devised to estimate likelihoods in models of ancestral population processes. However, these algorithms are based on features of the models with constant population size, and become inefficient when the population size varies in time, making likelihood-based inference difficult in many demographic situations. In this work, we modify a previous sequential importance sampling algorithm to improve the efficiency of the likelihood estimation. Our procedure is still based on features of the constant-size model, but uses a resampling technique with a new resampling probability distribution that depends on the pairwise composite likelihood. We tested our algorithm, called sequential importance sampling with resampling (SISR), on simulated data sets under different demographic scenarios. In most cases, it halved the computational cost for the same inferential accuracy, and in some cases reduced the cost a hundredfold. This study provides the first assessment of the impact of such resampling techniques on parameter inference using sequential importance sampling, and extends the range of situations where likelihood inference can be performed easily.
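
    The guided-resampling idea can be illustrated with a generic auxiliary-particle-filter-style skeleton: resampling probabilities are tilted by a guide function, standing in here for the paper's pairwise composite likelihood, and the importance weights are corrected so the likelihood estimate remains valid. The Gaussian random-walk toy model below is an illustrative assumption, not the coalescent models the paper addresses.

        import numpy as np

        def guided_sis_loglik(y, sigma_x=1.0, sigma_y=0.5, n_particles=1000, rng=None):
            # Toy model (assumed): Gaussian random-walk state observed in Gaussian noise.
            rng = np.random.default_rng() if rng is None else rng
            N = n_particles
            npdf = lambda z, s: np.exp(-0.5 * (z / s) ** 2) / (np.sqrt(2 * np.pi) * s)
            x = rng.normal(0.0, sigma_x, size=N)
            w = npdf(y[0] - x, sigma_y)                        # weight the first observation
            loglik = np.log(w.mean())
            W = w / w.sum()
            for y_next in y[1:]:
                # Guide: one-step predictive fit of the next observation; this is the
                # stand-in for the composite-likelihood term of the SISR paper.
                g = npdf(y_next - x, np.sqrt(sigma_x**2 + sigma_y**2)) + 1e-300
                r = W * g
                r /= r.sum()
                a = rng.choice(N, size=N, p=r)                 # guided resampling
                x = x[a] + sigma_x * rng.normal(size=N)        # propagate
                w = npdf(y_next - x, sigma_y) * W[a] / r[a]    # importance-weight correction
                loglik += np.log(w.mean())
                W = w / w.sum()
            return loglik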