A new approach to spatial data interpolation using higher-order statistics
Interpolation techniques for spatial data have been applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that first- and second-order statistics suffice to characterize random fields, researchers have now realized that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new approach to the interpolation of spatial data, which can be applied with great flexibility. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function is approximated via polynomial expansions and then used to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of the cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
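The kriging methods used as the comparison baseline rely only on second-order statistics (the covariance). A minimal simple-kriging sketch for a zero-mean field may make the contrast concrete; the covariance function and data below are illustrative assumptions, not the paper's dataset:

```python
import numpy as np

def simple_kriging(coords, values, target, cov):
    """Simple kriging of a zero-mean field: interpolate at `target` from
    sampled `coords`/`values`, given a covariance function of distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = cov(d)                                          # sample-to-sample covariances
    k = cov(np.linalg.norm(coords - target, axis=-1))   # sample-to-target covariances
    w = np.linalg.solve(K, k)                           # kriging weights
    est = w @ values                                    # interpolated value (expectation)
    var = cov(0.0) - w @ k                              # kriging variance (uncertainty)
    return est, var
```

The kriging variance is the second-order uncertainty measure that the paper's prediction intervals, built from the higher-order conditional density, are designed to improve on.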
Minimum Conditional Description Length Estimation for Markov Random Fields
In this paper we discuss a method, which we call Minimum Conditional Description Length (MCDL), for estimating the parameters of a subset of sites within a Markov random field. We assume that the edges are known for the entire graph. Then, for a given subset of sites, we estimate the parameters for the nodes and edges within that subset, as well as for the edges incident to it, by finding the exponential parameter for the subset that yields the best compression conditioned on the values on its boundary. Our estimate is derived from a temporally stationary sequence of observations on the subset. We discuss how this method can also be applied to estimate a spatially invariant parameter from a single configuration, and in so doing, derive the Maximum Pseudo-Likelihood (MPL) estimate.
Comment: Information Theory and Applications (ITA) workshop, February 201
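The MPL estimate recovered as a special case maximizes the sum of per-site conditional log-likelihoods over a single configuration. A minimal sketch for a one-parameter Ising model on a toroidal grid (the Ising model and the grid search are illustrative choices, not the paper's construction):

```python
import numpy as np

def pseudo_loglik(theta, x):
    """Sum of log conditional likelihoods log P(x_ij | neighbours) for an
    Ising configuration x with spins in {-1, +1}, coupling theta, toroidal grid."""
    s = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
         np.roll(x, 1, 1) + np.roll(x, -1, 1))          # sum of the 4 neighbour spins
    # P(x_ij | s_ij) = exp(theta * x_ij * s_ij) / (2 * cosh(theta * s_ij))
    return float(np.sum(theta * x * s - np.logaddexp(theta * s, -theta * s)))

def mpl_estimate(x, grid=np.linspace(-2.0, 2.0, 401)):
    """Maximum pseudo-likelihood estimate of theta by 1-D grid search."""
    return float(grid[np.argmax([pseudo_loglik(t, x) for t in grid])])
```

On a fully aligned configuration the pseudo-likelihood is increasing in theta, so the estimate hits the top of the search grid; on a perfect checkerboard it hits the bottom, as expected for an antiferromagnetic pattern.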
Efficient Monte Carlo for high excursions of Gaussian random fields
Our focus is on the design and analysis of efficient Monte Carlo methods for computing tail probabilities for the suprema of Gaussian random fields, along with conditional expectations of functionals of the fields given the existence of excursions above high levels, b. Naïve Monte Carlo takes an exponential, in b, computational cost to estimate these probabilities and conditional expectations for a prescribed relative accuracy. In contrast, our Monte Carlo procedures achieve, at worst, polynomial complexity in b, assuming only that the mean and covariance functions are Hölder continuous. We also explain how to fine-tune the construction of our procedures in the presence of additional regularity, such as homogeneity and smoothness, in order to further improve the efficiency.
Comment: Published in the Annals of Applied Probability (http://www.imstat.org/aap/) at http://dx.doi.org/10.1214/11-AAP792 by the Institute of Mathematical Statistics (http://www.imstat.org)
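The exponential cost of naïve Monte Carlo comes from the rarity of the event {sup X > b}; importance sampling with a mixture of single-point exceedance proposals removes that rarity. The sketch below, for a field discretized to a unit-variance Gaussian vector, is one standard construction of this kind and not necessarily the paper's exact procedure:

```python
import numpy as np
from scipy.stats import norm

def exceedance_prob(K, b, n_samples=5000, seed=0):
    """Estimate P(max_i X_i > b) for X ~ N(0, K) with unit diagonal, using a
    uniform mixture over i of proposals conditioned on {X_i > b}."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    tail = norm.sf(b)                          # marginal tail probability P(X_i > b)
    total = 0.0
    for _ in range(n_samples):
        i = rng.integers(n)
        # draw X_i from N(0,1) conditioned on X_i > b (inverse-CDF trick)
        xi = norm.isf(rng.uniform() * tail)
        # conditional law of the whole vector given X_i = xi
        mean = K[:, i] * xi
        cov = K - np.outer(K[:, i], K[:, i])   # singular but PSD; svd sampling handles it
        x = rng.multivariate_normal(mean, cov)
        x[i] = xi
        # likelihood ratio p(x)/q(x) = n * tail / #{j : x_j > b}
        total += n * tail / np.count_nonzero(x > b)
    return total / n_samples
```

Every proposal lands in the rare event by construction, and each per-sample weight lies between tail and n·tail, so the relative error stays controlled as b grows instead of blowing up exponentially.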