
    Intra-hour cloud index forecasting with data assimilation

    We introduce a computational framework to forecast cloud index (CI) fields for up to one hour on a spatial domain that covers a city. Such intra-hour CI forecasts are important for forecasting the power output of utility-scale and distributed rooftop solar. Our method combines a 2D advection model with cloud motion vectors (CMVs) derived from a mesoscale numerical weather prediction (NWP) model and from sparse optical flow applied to successive geostationary satellite images. We use ensemble data assimilation to combine these sources of cloud motion information according to the uncertainty of each data source. Our technique produces forecasts with similar or lower root mean square error than reference techniques that use only optical flow, NWP CMV fields, or persistence. We describe how the method operates on three representative case studies and present results from 39 cloudy days.
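
    A minimal sketch of the advection component alone, assuming a regular grid, a fixed time step, and nearest-neighbour interpolation; the function name and parameters are illustrative, not the authors' implementation:

    import numpy as np

    def advect_ci(ci, u, v, dt, dx):
        """Advance a cloud index field one step along CMV components (u, v).

        ci     : 2D array, cloud index field on a regular grid
        u, v   : 2D arrays of CMV components, same shape as ci
        dt, dx : time step and grid spacing in consistent units
        """
        ny, nx = ci.shape
        jj, ii = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        # Semi-Lagrangian step: trace each grid point backward along the flow.
        jd = np.clip(jj - v * dt / dx, 0, ny - 1).round().astype(int)
        id_ = np.clip(ii - u * dt / dx, 0, nx - 1).round().astype(int)
        # Nearest-neighbour lookup keeps the sketch short; a production code
        # would interpolate (e.g. bilinearly) at the departure points.
        return ci[jd, id_]

    # Usage: a one-hour forecast in twelve 5-minute steps, uniform eastward flow.
    ci = np.random.rand(64, 64)
    u = np.full_like(ci, 10.0)          # ~10 m/s eastward CMV
    v = np.zeros_like(ci)
    for _ in range(12):
        ci = advect_ci(ci, u, v, dt=300.0, dx=1000.0)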

    MALA-within-Gibbs samplers for high-dimensional distributions with sparse conditional structure

    Markov chain Monte Carlo (MCMC) samplers are numerical methods for drawing samples from a given target probability distribution. We discuss one particular MCMC sampler, the MALA-within-Gibbs sampler, from both theoretical and practical perspectives. We first show that the acceptance ratio and step size of this sampler are independent of the overall problem dimension when (i) the target distribution has sparse conditional structure, and (ii) this structure is reflected in the partial updating strategy of MALA-within-Gibbs. If, in addition, the target density is blockwise log-concave, then the sampler's convergence rate is independent of dimension. From a practical perspective, we expect that MALA-within-Gibbs is useful for solving high-dimensional Bayesian inference problems where the posterior exhibits sparse conditional structure, at least approximately. In this context, a partitioning of the state that correctly reflects the sparse conditional structure must be found, and we illustrate this process in two numerical examples. We also discuss trade-offs between the block size used for partial updating and computational requirements that may increase with the number of blocks.
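
    A minimal sketch of one MALA-within-Gibbs sweep, assuming a Gaussian target with tridiagonal precision as a stand-in for sparse conditional structure; the block partition and step size h are illustrative choices, not the paper's settings:

    import numpy as np

    rng = np.random.default_rng(0)
    d, nb = 100, 10                                    # dimension, number of blocks
    P = (np.diag(np.full(d, 2.0))
         - np.diag(np.full(d - 1, 0.9), 1)
         - np.diag(np.full(d - 1, 0.9), -1))           # tridiagonal precision

    def log_pi(x):                                     # log target, up to a constant
        return -0.5 * x @ P @ x

    def grad_log_pi(x):
        return -P @ x

    def sweep(x, h):
        """One MALA-within-Gibbs sweep: a MALA proposal per block of x."""
        for idx in np.array_split(np.arange(d), nb):
            g = grad_log_pi(x)[idx]
            prop = x.copy()
            prop[idx] += 0.5 * h * g + np.sqrt(h) * rng.standard_normal(idx.size)
            gp = grad_log_pi(prop)[idx]
            # Metropolis-Hastings correction for the asymmetric block proposal.
            fwd = -np.sum((prop[idx] - x[idx] - 0.5 * h * g) ** 2) / (2 * h)
            bwd = -np.sum((x[idx] - prop[idx] - 0.5 * h * gp) ** 2) / (2 * h)
            if np.log(rng.random()) < log_pi(prop) - log_pi(x) + bwd - fwd:
                x = prop
        return x

    x = rng.standard_normal(d)
    for _ in range(1000):
        x = sweep(x, h=0.5)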

    Performance bounds for particle filters using the optimal proposal

    Particle filters may suffer from degeneracy of the particle weights. For the simplest "bootstrap" filter, it is known that avoiding degeneracy in large systems requires the ensemble size to increase exponentially with the variance of the observation log-likelihood. The present article shows, first, that a similar result applies to particle filters using sequential importance sampling and the optimal proposal distribution and, second, that the optimal proposal yields minimal degeneracy compared to any other proposal distribution that depends only on the previous state and the most recent observations. Thus, the optimal proposal provides performance bounds for filters using sequential importance sampling and any such proposal. An example with independent and identically distributed degrees of freedom illustrates both the need for an exponentially large ensemble size with the optimal proposal as the system dimension increases and the potentially dramatic advantages of the optimal proposal relative to simpler proposals. Those advantages depend crucially on the magnitude of the system noise.
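
    The contrast is easy to reproduce numerically. A hedged sketch, assuming identity dynamics and direct observations with d i.i.d. components (all parameter values below are arbitrary): with the optimal proposal the incremental weight depends only on p(y | x_prev), so the effective sample size decays far more slowly with d than under the bootstrap proposal.

    import numpy as np

    rng = np.random.default_rng(1)
    N, q2, r2 = 10_000, 1.0, 0.5        # particles; system and observation noise variances

    def ess(logw):
        """Effective sample size of normalized importance weights."""
        w = np.exp(logw - logw.max())
        w /= w.sum()
        return 1.0 / np.sum(w ** 2)

    for d in (1, 10, 50, 100):
        xprev = rng.standard_normal((N, d))     # previous-step particles
        y = rng.standard_normal(d)              # a synthetic observation
        # Bootstrap proposal: sample from the model, weight by the likelihood.
        xb = xprev + np.sqrt(q2) * rng.standard_normal((N, d))
        logw_b = -0.5 * np.sum((y - xb) ** 2, axis=1) / r2
        # Optimal proposal: weights are p(y | x_prev) = N(y; x_prev, q2 + r2).
        logw_o = -0.5 * np.sum((y - xprev) ** 2, axis=1) / (q2 + r2)
        print(f"d={d:4d}  ESS bootstrap={ess(logw_b):8.1f}  optimal={ess(logw_o):8.1f}")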

    Localization for MCMC: sampling high-dimensional posterior distributions with local structure

    We investigate how ideas from covariance localization in numerical weather prediction can be used in Markov chain Monte Carlo (MCMC) sampling of high-dimensional posterior distributions arising in Bayesian inverse problems. To localize an inverse problem is to enforce an anticipated "local" structure by (i) neglecting small off-diagonal elements of the prior precision and covariance matrices; and (ii) restricting the influence of observations to their neighborhood. For linear problems, we specify the conditions under which posterior moments of the localized problem are close to those of the original problem. We explain physical interpretations of our assumptions about local structure and discuss the notion of high dimensionality in local problems, which is different from the usual notion of high dimensionality in function-space MCMC. The Gibbs sampler is a natural choice of MCMC algorithm for localized inverse problems, and we demonstrate that its convergence rate is independent of dimension for localized linear problems. Nonlinear problems can also be tackled efficiently by localization and, as a simple illustration of these ideas, we present a localized Metropolis-within-Gibbs sampler. Several linear and nonlinear numerical examples illustrate localization in the context of MCMC samplers for inverse problems.
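
    A toy version of a localized Metropolis-within-Gibbs sampler, assuming a banded (tridiagonal) prior precision and one pointwise observation per component, so each single-site acceptance ratio touches only a component's immediate neighbours; the model and tuning below are illustrative assumptions, not the paper's exact construction:

    import numpy as np

    rng = np.random.default_rng(2)
    d = 50
    y = rng.standard_normal(d)          # one noisy observation per component
    r2 = 0.5                            # observation noise variance

    def local_logpost(x, i):
        """Log-posterior terms involving component i (neighbours i-1, i+1 only)."""
        lp = -0.5 * (y[i] - x[i]) ** 2 / r2 - x[i] ** 2    # likelihood + prior diagonal
        if i > 0:
            lp += 0.9 * x[i - 1] * x[i]                    # prior coupling to the left
        if i < d - 1:
            lp += 0.9 * x[i] * x[i + 1]                    # prior coupling to the right
        return lp

    x = np.zeros(d)
    for _ in range(2000):               # sweeps over all components
        for i in range(d):
            old, lp_old = x[i], local_logpost(x, i)
            x[i] = old + 0.5 * rng.standard_normal()       # symmetric random-walk proposal
            if np.log(rng.random()) >= local_logpost(x, i) - lp_old:
                x[i] = old                                 # reject: restore the old value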