
    Bayesian and fuzzy kriging interpolation techniques for spatial estimation in mining field

    This research addresses spatial estimation, with the aim of improving spatial prediction performance. Many prediction techniques are based on regionalized variables, and changing the surface trend from linear to quadratic or cubic can produce inaccurate predictions. In this thesis, Bayesian and fuzzy kriging methods are proposed to handle uncertainty while minimizing the prediction error. The study aims to improve mixed approaches that combine spatial prediction methods for evaluating predictions, and it assesses the performance of different interpolation methods for minerals in order to relate Bayesian techniques to fuzzy kriging and apply the results to further spatial modeling. The spatial prediction assumes stationarity. The findings of this study are mathematical models of covariance functions: the variogram and cross-variogram functions are computed in all compass directions for the phenomena under study, and their parameters are estimated. The Bayesian predictor, the kriging predictor, and the Bayesian kriging variance, which represents the minimum prediction variance, are then derived, and the constrained weights of the linear predictor are computed. On the practical side, the Bayesian and fuzzy kriging techniques are applied to real spatial data, with their locations, from mining fields in Australia, Canada, and Colombia; all computations were carried out in Matlab. In conclusion, this study uses two different methods (Bayesian and fuzzy kriging) to incorporate spatial autocorrelation, improving uncertainty quantification and estimation with minimum error. The approach combines more than one prediction method and selects, by cross-validation, the model that yields the best prediction.
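    As a concrete illustration of the kriging machinery the abstract describes (a variogram model, constrained linear weights, and the minimum prediction variance), the sketch below assumes an exponential semivariogram with placeholder parameters and synthetic sample locations, then solves the ordinary kriging system at one target point. The thesis itself used Matlab on real mining data; this is a minimal Python/NumPy sketch, and every data value and parameter in it is a made-up assumption.

```python
import numpy as np

# Hypothetical data: sample locations and values (e.g., ore grades).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(30, 2))
values = rng.normal(2.5, 0.4, size=30)

def exp_variogram(h, nugget, sill, rng_):
    """Exponential semivariogram: gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h/range))."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng_))

# Placeholder variogram parameters (in practice fitted to the empirical variogram).
nugget, sill, rng_par = 0.01, 0.16, 40.0

# Ordinary kriging at x0: minimize prediction variance subject to sum(w) = 1,
# which gives the bordered system  [Gamma 1; 1' 0] [w; mu] = [gamma0; 1].
x0 = np.array([50.0, 50.0])
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
Gamma = exp_variogram(d, nugget, sill, rng_par)
np.fill_diagonal(Gamma, 0.0)  # gamma(0) = 0 by definition
g0 = exp_variogram(np.linalg.norm(coords - x0, axis=1), nugget, sill, rng_par)

n = len(values)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = Gamma
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.append(g0, 1.0)

sol = np.linalg.solve(A, b)
w, mu = sol[:n], sol[n]        # weights sum to 1 (unbiasedness constraint)
z_hat = w @ values             # kriging prediction at x0
krig_var = w @ g0 + mu         # kriging (minimum prediction) variance
print(f"prediction: {z_hat:.3f}, kriging variance: {krig_var:.4f}")
```

    The Lagrange multiplier mu enforces the unbiasedness constraint on the weights; the same bordered-system structure carries over to the Bayesian and fuzzy variants the thesis studies.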

    Composite Gaussian process models for emulating expensive functions

    A new type of nonstationary Gaussian process model is developed for approximating computationally expensive functions. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. Compared to the commonly used stationary Gaussian process model, the new predictor is numerically more stable and can more accurately approximate complex surfaces when the experimental design is sparse. In addition, the new model can also improve the prediction intervals by quantifying the change of local variability associated with the response. Advantages of the new predictor are demonstrated using several examples. Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/), http://dx.doi.org/10.1214/12-AOAS570, by the Institute of Mathematical Statistics (http://www.imstat.org).
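    The two-stage idea is straightforward to prototype: fit one Gaussian process with a long length-scale to capture the smooth global trend, then fit a second, short-length-scale Gaussian process to the residuals to capture local detail. The sketch below does this with scikit-learn; it is only a loose approximation of the paper's composite model (which couples the two processes and uses a flexible variance model, not the crude variance sum used here), and the test function, kernels, and length-scale bounds are all assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical "expensive" function, sampled on a sparse design.
def f(x):
    return np.sin(2 * x) + 0.05 * np.sin(25 * x)  # smooth trend + local detail

X = np.linspace(0, 3, 20).reshape(-1, 1)
y = f(X).ravel()

# Stage 1: global-trend GP, restricted to long length-scales (smooth component).
gp_global = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=1.0, length_scale_bounds=(0.5, 10.0)),
    alpha=1e-8, normalize_y=True)
gp_global.fit(X, y)

# Stage 2: local-detail GP fitted to the residuals, short length-scales.
resid = y - gp_global.predict(X)
gp_local = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=0.05, length_scale_bounds=(0.01, 0.5)),
    alpha=1e-8)
gp_local.fit(X, resid)

# Composite predictor: sum of the two stages.
Xs = np.linspace(0, 3, 200).reshape(-1, 1)
m_g, s_g = gp_global.predict(Xs, return_std=True)
m_l, s_l = gp_local.predict(Xs, return_std=True)
y_hat = m_g + m_l
s_hat = np.sqrt(s_g**2 + s_l**2)  # crude combined uncertainty (assumes independence)
```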

    Untenable nonstationarity: An assessment of the fitness for purpose of trend tests in hydrology

    The detection and attribution of long-term patterns in hydrological time series have been important research topics for decades. A significant portion of the literature regards such patterns as ‘deterministic components’ or ‘trends’, even though the complexity of hydrological systems rarely allows easy deterministic explanations and attributions. Consequently, trend estimation techniques have been developed to make and justify statements about tendencies in historical data, which are often used to predict future events. Testing trend hypotheses on observed time series is widespread in the hydro-meteorological literature, mainly because of the interest in detecting consequences of human activities on the hydrological cycle. This analysis usually relies on applying null hypothesis significance tests (NHSTs) for slowly varying and/or abrupt changes, such as Mann-Kendall, Pettitt, or similar, to summary statistics of hydrological time series (e.g., annual averages, maxima, or minima). However, the reliability of this practice has seldom been explored in detail. This paper discusses the misuse, misinterpretation, and logical flaws of NHSTs for trends in the analysis of hydrological data from three points of view: historic-logical, semantic-epistemological, and practical. Based on a review of the NHST rationale and the basic statistical definitions of stationarity, nonstationarity, and ergodicity, we show that even though the empirical estimation of trends in hydrological time series is always feasible numerically, it is uninformative and does not allow the inference of nonstationarity without a priori assuming additional information on the underlying stochastic process, in accordance with deductive reasoning. This prevents trend NHST outcomes from being used to support nonstationary frequency analysis and modeling. We also show that the correlation structures characterizing hydrological time series can easily be underestimated, further compromising attempts to draw conclusions about trends spanning the period of record. Moreover, even though adjustment procedures accounting for correlation have been developed, some are insufficient or applied only to certain tests, while others are theoretically flawed yet still widely used. In particular, using 250 unimpacted streamflow time series across the conterminous United States (CONUS), we show that the test results can change dramatically when the sequences of annual values are reproduced starting from daily streamflow records, whose larger sample sizes enable a more reliable assessment of the correlation structures.
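    To make the correlation point concrete, the sketch below implements the standard Mann-Kendall test (the S statistic, its variance under independence, and the normal approximation with continuity correction) and applies it to simulated trend-free AR(1) series. With positive autocorrelation the test rejects the no-trend null far more often than the nominal 5%, which is exactly the inflation of apparent ‘trend detections’ that underestimated correlation structures produce. The test formulas are standard; the AR(1) coefficient, series length, and simulation count are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_z(x):
    """Mann-Kendall trend test: S statistic and z-score under the
    null of independent, identically distributed data (no ties)."""
    n = len(x)
    diffs = np.sign(x[None, :] - x[:, None])       # sign(x_j - x_i)
    s = diffs[np.triu_indices(n, k=1)].sum()       # sum over pairs i < j
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

# Trend-free AR(1) series: x_t = phi * x_{t-1} + eps_t. Any 'trend' the
# test flags is a false positive caused by serial correlation alone.
rng = np.random.default_rng(42)
phi, n, n_sim, alpha = 0.6, 60, 2000, 0.05
z_crit = norm.ppf(1 - alpha / 2)
rejections = 0
for _ in range(n_sim):
    eps = rng.normal(size=n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    if abs(mann_kendall_z(x)) > z_crit:
        rejections += 1

# Nominal rate is 5%; with phi = 0.6 the empirical rate is substantially higher.
print(f"false-positive rate: {rejections / n_sim:.2%}")
```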