
    Recursive Monte Carlo filters: Algorithms and theoretical analysis

    Recursive Monte Carlo filters, also called particle filters, are a powerful tool to perform computations in general state space models. We discuss and compare the accept-reject version with the more common sampling importance resampling version of the algorithm. In particular, we show how auxiliary variable methods and stratification can be used in the accept-reject version, and we compare different resampling techniques. In a second part, we show laws of large numbers and a central limit theorem for these Monte Carlo filters by simple induction arguments that need only weak conditions. We also show that, under stronger conditions, the required sample size is independent of the length of the observed series.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000426 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
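
    A minimal sketch of the sampling importance resampling (bootstrap) variant discussed above, assuming user-supplied functions for the prior, transition kernel and observation likelihood; the accept-reject variant and the auxiliary variable devices analysed in the paper are not shown, but a stratified resampling step is included as one of the resampling techniques being compared.

        import numpy as np

        def stratified_resample(w, rng):
            """Stratified resampling: one uniform draw per equal-probability stratum."""
            n = len(w)
            u = (rng.random(n) + np.arange(n)) / n
            return np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)

        def bootstrap_particle_filter(y, n_particles, sample_prior, sample_transition, likelihood, rng=None):
            """Sampling importance resampling (SIR) particle filter for a generic state space model.

            sample_prior(n, rng)      -> (n, d) initial particles
            sample_transition(x, rng) -> (n, d) particles propagated one step
            likelihood(y_t, x)        -> (n,) observation densities p(y_t | x_t)
            """
            rng = np.random.default_rng() if rng is None else rng
            particles = sample_prior(n_particles, rng)
            filtering_means = []
            for y_t in y:
                # Prediction: propagate each particle through the state transition kernel.
                particles = sample_transition(particles, rng)
                # Update: weight by the observation likelihood and normalize.
                w = likelihood(y_t, particles)
                w = w / w.sum()
                filtering_means.append(w @ particles)
                # Resampling: stratified resampling has lower variance than multinomial.
                particles = particles[stratified_resample(w, rng)]
            return np.array(filtering_means)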

    Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Comment on "The 2005 Neyman Lecture: Dynamic Indeterminism in Science" [arXiv:0808.0620].
    Comment: Published at http://dx.doi.org/10.1214/07-STS246B in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Bridging the ensemble Kalman and particle filter

    In many applications of Monte Carlo nonlinear filtering, the propagation step is computationally expensive, and hence, the sample size is limited. With small sample sizes, the update step becomes crucial. Particle filtering suffers from the well-known problem of sample degeneracy. Ensemble Kalman filtering avoids this, at the expense of treating non-Gaussian features of the forecast distribution incorrectly. Here we introduce a procedure which makes a continuous transition, indexed by gamma in [0,1], between the ensemble Kalman and the particle filter update. We propose automatic choices of the parameter gamma such that the update stays as close as possible to the particle filter update subject to avoiding degeneracy. In various examples, we show that this procedure leads to updates which are able to handle non-Gaussian features of the prediction sample even in high-dimensional situations.
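
    The actual ensemble Kalman particle filter update in the paper splits the analysis between a Kalman-type step and a particle-type step; the sketch below is only a simplified stand-in, assuming tempered particle weights with exponent (1 - gamma), to illustrate the automatic choice of gamma as the smallest value that keeps the effective sample size above a user-chosen threshold and hence avoids degeneracy.

        import numpy as np

        def effective_sample_size(w):
            """ESS = 1 / sum(w_i^2) for normalized weights; a small ESS signals degeneracy."""
            return 1.0 / np.sum(w ** 2)

        def choose_gamma(log_lik, ess_threshold, grid=np.linspace(0.0, 1.0, 101)):
            """Smallest gamma whose tempered weights keep the ESS above the threshold.

            gamma = 0 gives the pure particle filter weighting; gamma = 1 gives equal
            weights (the Kalman-like limit in this simplified stand-in).
            """
            for gamma in grid:
                w = np.exp((1.0 - gamma) * (log_lik - log_lik.max()))
                w = w / w.sum()
                if effective_sample_size(w) >= ess_threshold:
                    return gamma, w
            return 1.0, np.full(len(log_lik), 1.0 / len(log_lik))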

    A dynamic nonstationary spatio-temporal model for short term prediction of precipitation

    Precipitation is a complex physical process that varies in space and time. Predictions and interpolations at unobserved times and/or locations help to solve important problems in many areas. In this paper, we present a hierarchical Bayesian model for spatio-temporal data and apply it to obtain short term predictions of rainfall. The model incorporates physical knowledge about the underlying processes that determine rainfall, such as advection, diffusion and convection. It is based on a temporal autoregressive convolution with spatially colored and temporally white innovations. By linking the advection parameter of the convolution kernel to an external wind vector, the model is temporally nonstationary. Further, it allows for nonseparable and anisotropic covariance structures. With the help of the Voronoi tessellation, we construct a natural parametrization that is consistent across space and time resolutions for data lying on irregular grid points. In the application, the statistical model combines forecasts of three other meteorological variables obtained from a numerical weather prediction model with past precipitation observations. The model is then used to predict three-hourly precipitation over 24 hours. It performs better than a separable, stationary and isotropic version, performs comparably to a deterministic numerical weather prediction model for precipitation, and has the advantage that it quantifies prediction uncertainty.
    Comment: Published at http://dx.doi.org/10.1214/12-AOAS564 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
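
    A toy, regular-grid sketch of the temporal autoregressive convolution idea, assuming placeholder parameters (phi, diffusion, innovation_scale) and using an external wind vector to shift the convolution kernel; the paper's actual model is hierarchical Bayesian, lives on a Voronoi tessellation of irregular station locations and is combined with numerical weather prediction output, none of which is reproduced here.

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift

        def simulate_step(field, wind, phi=0.9, diffusion=1.0, innovation_scale=0.5, rng=None):
            """One step of a toy autoregressive convolution with advection and diffusion.

            field : 2-D array, current latent precipitation field on a regular grid
            wind  : (dy, dx) displacement per time step, linking advection to the wind vector
            """
            rng = np.random.default_rng() if rng is None else rng
            # Advection: translate the field along the external wind vector.
            advected = shift(field, shift=wind, order=1, mode="nearest")
            # Diffusion: smooth with a Gaussian convolution kernel.
            diffused = gaussian_filter(advected, sigma=diffusion)
            # Spatially colored, temporally white innovations (smoothed white noise).
            innovation = gaussian_filter(rng.standard_normal(field.shape), sigma=diffusion)
            return phi * diffused + innovation_scale * innovation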

    On hidden Markov chains and finite stochastic systems

    In this paper we study various properties of finite stochastic systems, or hidden Markov chains as they are alternatively called. We discuss their construction following different approaches, and we also derive recursive filtering formulas for the different systems that we consider. The key tool is a simple lemma on conditional expectations.
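
    For a finite hidden Markov chain the recursive filtering formulas reduce to the familiar normalized forward recursion; a minimal sketch, assuming discrete observations and known transition and emission matrices.

        import numpy as np

        def hmm_filter(transition, emission, init, observations):
            """Recursive filtering P(X_t | Y_1,...,Y_t) for a finite hidden Markov chain.

            transition : (K, K) matrix, transition[i, j] = P(X_{t+1} = j | X_t = i)
            emission   : (K, M) matrix, emission[i, y]   = P(Y_t = y | X_t = i)
            init       : (K,) initial state distribution
            observations : sequence of observed symbols in {0, ..., M-1}
            """
            alpha = init * emission[:, observations[0]]
            alpha = alpha / alpha.sum()
            filtered = [alpha]
            for y in observations[1:]:
                # Predict with the transition matrix, then condition on the new observation.
                alpha = (alpha @ transition) * emission[:, y]
                alpha = alpha / alpha.sum()
                filtered.append(alpha)
            return np.array(filtered)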

    Robust Methods for Credibility

    Excess claims lead to an unsatisfactory behavior of standard linear credibility estimators. In this paper we suggest using robust methods in order to obtain better estimators. Our first proposal is the linear credibility estimator with the claims replaced by a robust M-estimator of scale calculated from the claims. This corresponds to a truncation of the claims with a truncation point that depends on the data and differs for each contract. We discuss the properties of the robust M-estimator and present several examples. In order to improve the performance for a very small number of years, we propose a second estimator, which incorporates information from other claims into the M-estimator.
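
    A minimal illustration of the truncation idea, assuming known structural parameters sigma2 (within-contract variance) and tau2 (between-contract variance) and using the median of the claims as a crude stand-in for the robust M-estimator of scale proposed in the paper.

        import numpy as np

        def robust_credibility(claims, collective_mean, sigma2, tau2, c=3.0):
            """Toy robust linear credibility estimator with data-dependent truncation.

            Claims are capped at c times a robust scale estimate (a contract-specific,
            data-dependent truncation point), then plugged into the usual credibility
            formula Z * individual mean + (1 - Z) * collective mean.
            """
            claims = np.asarray(claims, dtype=float)
            scale = np.median(claims)                  # stand-in for an M-estimator of scale
            truncated = np.minimum(claims, c * scale)  # excess claims are truncated
            n = len(claims)
            z = n / (n + sigma2 / tau2)                # credibility weight
            return z * truncated.mean() + (1.0 - z) * collective_mean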

    Intrinsic autoregressions and related models on the two-dimensional lattice

    Stationary autoregressions on a two-dimensional lattice are generalized to intrinsic models where only increments are assumed to be stationary. Prediction formulae and the asymptotic behaviour of the semivariogram are derived. For parameter estimation we propose an approximate maximum likelihood estimator, a generalization of Whittle's estimator; it is also derived for general intrinsic models.
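
    A sketch of the classical Whittle approximation on a two-dimensional lattice that the paper generalizes, assuming a user-supplied spectral density; the zero frequency is excluded, which is also what makes the approximation usable for an intrinsic model whose spectral density has a pole at the origin. The first-order conditional autoregression spectrum below is only an illustrative choice.

        import numpy as np

        def whittle_neg_loglik(field, spectral_density, params):
            """Whittle approximation to the negative Gaussian log likelihood on a 2-D lattice."""
            n1, n2 = field.shape
            # Periodogram of the mean-corrected data at the Fourier frequencies.
            periodogram = np.abs(np.fft.fft2(field - field.mean())) ** 2 / (n1 * n2)
            w1 = 2.0 * np.pi * np.fft.fftfreq(n1)
            w2 = 2.0 * np.pi * np.fft.fftfreq(n2)
            omega1, omega2 = np.meshgrid(w1, w2, indexing="ij")
            f = spectral_density(omega1, omega2, params)
            # Drop the zero frequency, where an intrinsic model has a pole.
            mask = (omega1 != 0.0) | (omega2 != 0.0)
            return np.sum(np.log(f[mask]) + periodogram[mask] / f[mask])

        def car_spectrum(w1, w2, params):
            """Spectral density of a first-order conditional autoregression (illustrative)."""
            sigma2, b1, b2 = params
            return sigma2 / (1.0 - 2.0 * b1 * np.cos(w1) - 2.0 * b2 * np.cos(w2))

    Minimizing whittle_neg_loglik over params gives the approximate maximum likelihood estimate; in the intrinsic first-order case the coefficients satisfy b1 + b2 = 1/2, producing the pole at the origin that the mask removes.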

    Edge effects and efficient parameter estimation for stationary random fields

    We consider the estimation of the parameters of a stationary random field on a d-dimensional lattice by minimizing the classical Whittle approximation to the Gaussian log likelihood. If the usual biased sample covariances are used, the estimate is efficient only in one dimension. To remove this edge effect, we introduce data tapers and show that the resulting modified estimate is efficient also in two and three dimensions. This avoids the use of the unbiased sample covariances, which are in general not positive definite.
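
    A minimal sketch of the tapering device, assuming a separable cosine (Tukey) taper as one common choice; the tapered periodogram can be plugged into a Whittle-type objective in place of the raw periodogram, down-weighting observations near the lattice edges and thereby reducing the edge-effect bias in two and three dimensions.

        import numpy as np
        from scipy.signal.windows import tukey

        def tapered_periodogram(field, rho=0.5):
            """2-D periodogram of the data multiplied by a separable Tukey taper.

            rho is the fraction of the window devoted to the cosine roll-off;
            rho = 0 recovers the untapered periodogram.
            """
            n1, n2 = field.shape
            taper = np.outer(tukey(n1, rho), tukey(n2, rho))
            tapered = (field - field.mean()) * taper
            # Normalize by the taper's energy so the periodogram keeps the right scale.
            return np.abs(np.fft.fft2(tapered)) ** 2 / np.sum(taper ** 2)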