
    Cutset Sampling for Bayesian Networks

    The paper presents a new sampling methodology for Bayesian networks that samples only a subset of the variables and applies exact inference to the rest. Cutset sampling is a network-structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
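    As a rough illustration of the Rao-Blackwellisation idea behind cutset sampling, the sketch below (not the paper's algorithm) uses a hypothetical binary "diamond" network with made-up CPTs. Conditioning on the loop-cutset {A} makes the remaining network a polytree, so only A is sampled and everything else is computed by exact summation; with a single cutset variable the Gibbs step reduces to exact sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "diamond" network A -> B, A -> C, B -> D, C -> D (all binary).
# All CPTs are invented for illustration.
p_a = np.array([0.6, 0.4])                    # P(A)
p_b_a = np.array([[0.7, 0.3], [0.2, 0.8]])    # P(B | A), rows indexed by A
p_c_a = np.array([[0.9, 0.1], [0.4, 0.6]])    # P(C | A)
p_d_bc = np.array([[[0.95, 0.05], [0.5, 0.5]],
                   [[0.5, 0.5], [0.1, 0.9]]]) # P(D | B, C), indices [b, c, d]

evidence_d = 1

def p_evidence_given_a(a):
    # Exact inference on the conditioned polytree: sum over B and C.
    return sum(p_b_a[a, b] * p_c_a[a, c] * p_d_bc[b, c, evidence_d]
               for b in range(2) for c in range(2))

# Sample only the cutset variable A from P(A | D=d).
weights = np.array([p_a[a] * p_evidence_given_a(a) for a in range(2)])
post_a = weights / weights.sum()
a_samples = rng.choice(2, size=5000, p=post_a)

# Rao-Blackwellised estimate of P(B | D=d): average the *exact*
# conditional P(B | a, D=d) over the sampled cutset values.
def p_b_given_a_and_evidence(a):
    unnorm = np.array([p_b_a[a, b] *
                       sum(p_c_a[a, c] * p_d_bc[b, c, evidence_d]
                           for c in range(2))
                       for b in range(2)])
    return unnorm / unnorm.sum()

estimate = np.mean([p_b_given_a_and_evidence(a) for a in a_samples], axis=0)
print("Estimated P(B | D=1):", estimate)
```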

    First and second order semi-Markov chains for wind speed modeling

    The increasing interest in renewable energy, particularly wind, has given rise to the need for accurate models that generate good synthetic wind speed data. Markov chains are often used for this purpose, but better models are needed to reproduce the statistical properties of wind speed data. We downloaded a database, freely available from the web, containing wind speed data recorded by the L.S.I.-Lastem station (Italy) and sampled every 10 minutes. With the aim of reproducing the statistical properties of these data, we propose three semi-Markov models. We generate synthetic wind speed time series by means of Monte Carlo simulations. The time-lagged autocorrelation is then used to compare the statistical properties of the proposed models with those of the real data, and also with a synthetic time series generated through a simple Markov chain. Comment: accepted for publication in Physica.
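    A minimal sketch of how a synthetic series can be generated from a first-order semi-Markov chain and compared via lagged autocorrelation. The states, representative speeds, embedded transition matrix, and sojourn-time distributions below are invented for illustration; they are not the L.S.I.-Lastem data or the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretisation of wind speed into 3 states (low/mid/high).
speeds = np.array([2.0, 6.0, 11.0])      # representative speed per state (m/s)
P = np.array([[0.0, 0.8, 0.2],           # embedded transition matrix:
              [0.5, 0.0, 0.5],           # zero diagonal, since transitions
              [0.3, 0.7, 0.0]])          # occur only at state changes
max_sojourn = 12                         # sojourn measured in 10-min steps
# Illustrative sojourn-time pmfs over {1, ..., max_sojourn}, one per state.
sojourn_pmf = np.array([np.arange(max_sojourn, 0, -1.0)] * 3)
sojourn_pmf /= sojourn_pmf.sum(axis=1, keepdims=True)

def simulate_semi_markov(n_steps, state=0):
    """Generate a synthetic wind speed series of length n_steps."""
    series = []
    while len(series) < n_steps:
        # Stay in the current state for a random sojourn time ...
        duration = rng.choice(max_sojourn, p=sojourn_pmf[state]) + 1
        series.extend([speeds[state]] * duration)
        # ... then jump according to the embedded transition matrix.
        state = rng.choice(3, p=P[state])
    return np.array(series[:n_steps])

def lagged_autocorrelation(x, max_lag):
    """Autocorrelation at lags 1..max_lag, used to compare models to data."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])

synthetic = simulate_semi_markov(10_000)
print(lagged_autocorrelation(synthetic, max_lag=5))
```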

    Path storage in the particle filter

    This article considers the problem of storing the paths generated by a particle filter and, more generally, by a sequential Monte Carlo algorithm. It provides a theoretical result bounding the expected memory cost by T + C N log N, where T is the time horizon, N is the number of particles, and C is a constant, as well as an efficient algorithm to realise this bound. The theoretical result and the algorithm are illustrated with numerical experiments. Comment: 9 pages, 5 figures. To appear in Statistics and Computing.
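    A hedged sketch of the kind of reference-counted ancestry tree that makes such a bound possible: because resampling collapses old branches, pruning nodes with no surviving descendants keeps storage far below the naive N*T. The class and the toy random-walk filter below are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

class AncestryTree:
    """Particle paths stored as a tree with reference counting; branches
    with no surviving descendants are pruned as soon as they die out."""

    def __init__(self):
        self.parent = {}    # node id -> parent id (None for roots)
        self.children = {}  # node id -> number of child nodes kept
        self.value = {}     # node id -> particle value
        self.next_id = 0

    def insert(self, value, parent_id):
        node = self.next_id
        self.next_id += 1
        self.parent[node] = parent_id
        self.children[node] = 0
        self.value[node] = value
        if parent_id is not None:
            self.children[parent_id] += 1
        return node

    def release(self, node):
        # Prune upwards while nodes have no remaining children.
        while node is not None and self.children[node] == 0:
            parent = self.parent[node]
            del self.parent[node], self.children[node], self.value[node]
            if parent is not None:
                self.children[parent] -= 1
            node = parent

# Bootstrap-style filter skeleton for a random-walk state (illustrative).
N, T = 100, 50
tree = AncestryTree()
leaves = [tree.insert(rng.normal(), None) for _ in range(N)]
for t in range(1, T):
    weights = np.ones(N) / N                      # placeholder weights
    ancestors = rng.choice(N, size=N, p=weights)  # multinomial resampling
    new_leaves = [tree.insert(tree.value[leaves[a]] + rng.normal(), leaves[a])
                  for a in ancestors]
    for leaf in leaves:            # drop references to the old generation;
        tree.release(leaf)         # dead branches are pruned recursively
    leaves = new_leaves
print("stored nodes:", len(tree.parent), "vs naive N*T =", N * T)
```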