
    Loop corrections in spin models through density consistency

    Computing marginal distributions of discrete or semidiscrete Markov random fields (MRFs) is a fundamental, generally intractable problem with a vast number of applications in virtually all fields of science. We present a new family of computational schemes to approximately calculate the marginals of discrete MRFs. This method shares some desirable properties with belief propagation, in particular providing exact marginals on acyclic graphs, but it differs from the latter in that it includes some loop corrections; i.e., it takes into account correlations coming from all cycles in the factor graph. It is also similar to the adaptive Thouless-Anderson-Palmer method, but it differs from the latter in that the consistency is not on the first two moments of the distribution but rather on the value of its density on a subset of values. The results on finite-dimensional Ising-like models show a significant improvement with respect to the Bethe-Peierls (tree) approximation in all cases, and with respect to the plaquette cluster variational method approximation in many cases. In particular, for the critical inverse temperature $\beta_c$ of the homogeneous hypercubic lattice, the expansion of $(d\beta_c)^{-1}$ around $d=\infty$ of the proposed scheme is exact up to order $d^{-4}$, whereas the latter two are exact only up to order $d^{-2}$.
    Comment: 12 pages, 3 figures, 1 table
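
    As a point of reference for the acyclic case mentioned above, the following is a minimal sketch of sum-product belief propagation on a small Ising chain, where both belief propagation and the proposed scheme yield exact marginals. The inverse temperature, coupling, field, and chain length are illustrative assumptions, and the density-consistency loop corrections themselves are not reproduced here.

```python
# Sum-product belief propagation on an Ising chain (exact on acyclic graphs).
import numpy as np

beta, J, h, n = 0.5, 1.0, 0.2, 5                    # illustrative parameters
states = np.array([-1.0, 1.0])
psi = np.exp(beta * J * np.outer(states, states))   # pairwise factor exp(beta*J*s*s')
phi = np.exp(beta * h * states)                     # single-site factor exp(beta*h*s)

# Forward messages: fwd[i] is the message flowing into site i from the left.
fwd = [np.ones(2)]
for i in range(n - 1):
    m = psi.T @ (fwd[-1] * phi)
    fwd.append(m / m.sum())                         # normalise for stability

# Backward messages: after reversal, bwd[i] flows into site i from the right.
bwd = [np.ones(2)]
for i in range(n - 1):
    m = psi @ (bwd[-1] * phi)
    bwd.append(m / m.sum())
bwd = bwd[::-1]

# Single-site beliefs are the product of the local factor and both messages.
for i in range(n):
    b = phi * fwd[i] * bwd[i]
    print(f"site {i}: P(s=+1) = {b[1] / b.sum():.4f}")
```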

    Reservoir characterization using seismic inversion data

    Reservoir architecture may be inferred from analogs and geologic concepts, seismic surveys, and well data. Stochastically inverted seismic data are uninformative about meter-scale features, but aid downscaling by constraining coarse-scale interval properties such as total thickness and average porosity. Well data reveal detailed facies and vertical trends (and may indicate lateral trends), but cannot specify intrawell stratal geometry. Consistent geomodels can be generated for flow simulation by systematically considering the precision and density of different data. Because seismic inversion, conceptual stacking, and lateral variability of the facies are uncertain, stochastic ensembles of geomodels are needed to capture variability. In this research, geomodels integrate stochastic seismic inversions. At each trace, constraints represent means and variances for the inexact constraint algorithms, or can be posed as exact constraints. These models also include stratigraphy (a stacking framework from prior geomodels), well data (core and wireline logs to constrain meter-scale structure at the wells), and geostatistics (for correlated variability). These elements are combined in a Bayesian framework. This geomodeling process creates prior models with plausible bedding geometries and facies successions. These prior models of stacking are updated, using well and seismic data to generate the posterior model. Markov Chain Monte Carlo methods sample the posteriors. Plausible subseismic features are introduced into flow models, whilst avoiding overtuning to seismic data or conceptual geologic models. Fully integrated cornerpoint flow models are created, and methods for screening and simulation studies are discussed. The updating constraints on total thickness and average porosity need not be from a seismic survey: any spatially dense estimates of these properties may be used.
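
    A minimal sketch of the inexact-constraint idea, assuming a single trace where the seismic inversion supplies a mean and variance for average porosity: a conjugate Gaussian step updates the geologic prior. All numbers are illustrative assumptions, not values from the study.

```python
# Conjugate Gaussian update of average porosity at one trace.
prior_mean, prior_var = 0.18, 0.02**2   # geologic prior on average porosity
seis_mean, seis_var = 0.21, 0.015**2    # seismic constraint (mean, variance)

post_var = 1.0 / (1.0 / prior_var + 1.0 / seis_var)
post_mean = post_var * (prior_mean / prior_var + seis_mean / seis_var)
print(f"posterior porosity: {post_mean:.3f} +/- {post_var**0.5:.3f}")

# An exact constraint corresponds to the limit seis_var -> 0, where the
# posterior collapses onto the seismic value.
```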

    Approximations to ruin probability in the presence of an absorbing upper barrier

    Abstract unavailable; please refer to the PDF.

    Estimating Cropland Use in a Multi-County Region of the Southeastern United States

    In this thesis, a model to analyze land use in a multi-county region of the Southeastern United States is presented. Farmer planting decisions are assumed to follow a non-stationary first-order Markov decision process. The non-stationary transition probabilities are estimated as a function of the prior year's land usage and a set of exogenous variables, using annual county-level data from 1981 to 2005 and the maximum entropy method suggested by Golan et al. (1996). The transition probabilities are applied to each county's prior-period crop production to estimate crop production in the current period. The model is graphically validated. A discussion is included on difficulties encountered in estimation of the model. Acreage elasticities are estimated and used to analyze the marginal effects of the explanatory variables on crop land use.
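
    A minimal sketch of the first-order Markov step described above: a county's crop shares in the current year are the prior year's shares pushed through a transition matrix. The three land uses and the transition probabilities are illustrative assumptions; in the thesis the probabilities are non-stationary, varying with the year and the exogenous variables.

```python
# One step of a first-order Markov land-use model.
import numpy as np

uses = ["corn", "soybeans", "other"]
shares_prev = np.array([0.40, 0.35, 0.25])   # prior-year acreage shares
P = np.array([[0.70, 0.20, 0.10],            # P[i, j] = Pr(use j this year | use i last year)
              [0.25, 0.65, 0.10],
              [0.15, 0.15, 0.70]])           # each row sums to one

shares_now = shares_prev @ P                 # current-year acreage shares
for use, share in zip(uses, shares_now):
    print(f"{use}: {share:.3f}")
```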

    Solving the chemical master equation using sliding windows

    Background: The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible, as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species.
    Results: In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy.
    Conclusions: The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori.
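
    A minimal sketch of one local analysis step, assuming a simple birth-death system (production at rate k, degradation at rate g per molecule): the CME is truncated to a window of states and integrated over a short interval, and the probability mass that leaves the window measures the truncation error. In the full method the window would then be re-centred where the mass has moved; rates, window bounds, and time horizon here are illustrative assumptions.

```python
# One truncated-CME step for a birth-death process on a state window.
import numpy as np
from scipy.linalg import expm

k, g = 10.0, 0.1                    # birth rate, per-molecule death rate
lo, hi = 0, 60                      # current window of molecule counts
xs = np.arange(lo, hi + 1)
n = xs.size

Q = np.zeros((n, n))                # truncated CME generator, dp/dt = p Q
for i, x in enumerate(xs):
    Q[i, i] = -(k + g * x)          # total outflow rate from state x
    if i + 1 < n:
        Q[i, i + 1] = k             # birth: x -> x+1 (mass leaks if it exits the window)
    if i - 1 >= 0:
        Q[i, i - 1] = g * x         # death: x -> x-1

p0 = np.zeros(n)
p0[0] = 1.0                         # all probability mass at x = 0 initially
p = p0 @ expm(Q * 5.0)              # probability vector after t = 5

print("mass kept in window:", p.sum())              # 1 - sum = truncation error
print("mean copy number:", (p @ xs) / p.sum())      # conditional on staying in window
```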

    Accelerating Bayesian computation in imaging

    The dimensionality and ill-posedness often encountered in imaging inverse problems are a challenge for Bayesian computational methods, particularly for state-of-the-art sampling alternatives based on the Euler-Maruyama discretisation of the Langevin diffusion process. In this thesis, we address this difficulty and propose alternatives to accelerate Bayesian computation in imaging inverse problems, focusing on its computational aspects. We introduce, as our first contribution, a highly efficient proximal Markov chain Monte Carlo (MCMC) methodology, based on a state-of-the-art approximation known as the proximal stochastic orthogonal Runge-Kutta-Chebyshev (SK-ROCK) method. It has the advantage of cleverly combining multiple gradient evaluations to significantly speed up convergence, similar to accelerated gradient optimisation techniques. We rigorously demonstrate the acceleration of the Markov chains in the 2-Wasserstein distance for Gaussian models as a function of the condition number κ. In our second contribution, we propose a more sophisticated MCMC sampler, based on the careful integration of two advanced proximal Langevin MCMC methods, SK-ROCK and split Gibbs sampling (SGS), each of which uses a unique approach to accelerate convergence. More precisely, we show how to integrate the proximal SK-ROCK sampler with the model augmentation and relaxation method used by SGS at the level of the Langevin diffusion process, to speed up Bayesian computation at the expense of asymptotic bias. This leads to a new, faster proximal SK-ROCK sampler that combines the acceleration of the original sampler with the computational advantages of augmentation and relaxation. Additionally, we propose that the augmented and relaxed model be considered a generalisation of the target model rather than an approximation that situates relaxation in a bias-variance trade-off. As a result, we can carefully calibrate the amount of relaxation to boost both model accuracy (as determined by model evidence) and sampler convergence speed. To achieve this, we derive an empirical Bayesian method that automatically estimates the appropriate level of relaxation via maximum marginal likelihood estimation. The proposed methodologies are demonstrated in several numerical experiments related to image deblurring, hyperspectral unmixing, tomographic reconstruction and inpainting. Comparisons with Euler-type proximal Monte Carlo approaches confirm that the Markov chains generated with our methods exhibit significantly faster convergence speeds, achieve larger effective sample sizes, and produce lower mean square estimation errors with the same computational budget.
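
    For context, a minimal sketch of the Euler-Maruyama baseline (the unadjusted Langevin algorithm) on a toy ill-conditioned Gaussian target; the step size, precision matrix, and iteration count are illustrative assumptions, and the SK-ROCK and split-Gibbs machinery is not reproduced here.

```python
# Unadjusted Langevin algorithm: Euler-Maruyama discretisation of the
# Langevin diffusion dX = grad log pi(X) dt + sqrt(2) dW.
import numpy as np

rng = np.random.default_rng(0)
Sigma_inv = np.array([[1.0, 0.0],
                      [0.0, 25.0]])   # ill-conditioned precision (kappa = 25)

def grad_logpi(x):
    """Gradient of the Gaussian log-density log pi(x) = -x.T @ Sigma_inv @ x / 2."""
    return -Sigma_inv @ x

delta = 0.01                          # step size; must satisfy delta < 2 / lambda_max
x = np.zeros(2)
samples = []
for _ in range(50000):
    x = x + delta * grad_logpi(x) + np.sqrt(2 * delta) * rng.standard_normal(2)
    samples.append(x)
samples = np.array(samples)

# Targets are [1.0, 0.04]; a small step-size bias is expected, and the slow
# direction mixes at rate ~ delta * lambda_min, motivating accelerated samplers.
print("empirical variances:", samples.var(axis=0))
```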