Fractal relationships and spatial distribution of ore body modelling
The nature of the spatial distribution of geological variables such as ore grades is of primary concern when modelling ore bodies and mineral resources. The aim of any mineral resource evaluation process is to determine the location, extent, volume and average grade of that resource through a trade-off between maximum confidence in the results and minimum sampling effort. The principal aim of almost every geostatistical modelling process is to predict the spatial variation of one or more geological variables in order to estimate values of those variables at locations that have not been sampled. From the spatial analysis of these variables, in conjunction with the physical geology of the region of interest, one can determine the location, extent and volume (or series of discrete volumes) of material whose average ore grade exceeds a specific cut-off value set by economic parameters. Of interest are not only the volume and average grade of the material but also the degree of uncertainty associated with each. Geostatistics currently provides many methods of assessing spatial variability. Fractal dimensions also give a measure of spatial variability and have been found to model many natural phenomena successfully (Mandelbrot 1983, Burrough 1981), but until now fractal modelling techniques have not been able to match the versatility and accuracy of geostatistical methods. Fractal ideas and use of the fractal dimension may in certain cases provide a better understanding of the way in which spatial variability manifests itself in geostatistical situations. This research proposes and investigates a new application of fractal simulation methods to spatial variability and spatial interpolation techniques as they relate to ore body modelling. The results show some advantages over existing techniques of geostatistical simulation.
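The abstract does not give an estimator, but the standard bridge between geostatistics and fractals (the subject of the cited Burrough 1981) is the variogram: for a self-affine profile the variogram grows as γ(h) ∝ h^(2H), and the fractal dimension of a 1-D profile is D = 2 − H, where H is the Hurst exponent. A minimal sketch of that estimator, assuming a regularly sampled 1-D grade profile; the function name and synthetic data are illustrative, not from the thesis:

```python
import numpy as np

def variogram_fractal_dimension(grades, max_lag=20):
    """Estimate the fractal dimension of a 1-D grade profile from its
    experimental variogram. For a self-affine profile gamma(h) ~ h^(2H),
    and the fractal dimension of the profile is D = 2 - H."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([
        0.5 * np.mean((grades[h:] - grades[:-h]) ** 2) for h in lags
    ])
    # Slope of the log-log variogram is 2H; D = 2 - H for a 1-D profile.
    slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
    return 2.0 - slope / 2.0

# Synthetic Brownian-like profile (hypothetical data): H = 0.5, so D ~ 1.5.
rng = np.random.default_rng(0)
profile = np.cumsum(rng.normal(size=500))
print(variogram_fractal_dimension(profile))
```

A rougher profile (smaller H) yields a dimension closer to 2, matching the intuition that higher fractal dimension means greater short-range spatial variability.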
A fuzzy hybrid sequential design strategy for global surrogate modeling of high-dimensional computer experiments
Complex real-world systems can be modeled accurately by simulations. However, evaluating a high-fidelity simulator can take several days, making it impractical for use in optimization, design space exploration, and analysis. These simulators are therefore often approximated by a relatively simple mathematical model known as a surrogate model. The data points used to construct this model are simulator evaluations, so the choice of these points is crucial: each additional data point can be very expensive in terms of computing time. Sequential design strategies offer a large advantage over one-shot experimental design because information gathered from previous data points can be used when determining new ones. Previously, LOLA-Voronoi was presented as a hybrid sequential design method that balances exploration and exploitation: the former selects data points in unexplored regions of the design space, while the latter adds data points in interesting regions that were previously discovered. Although this approach is very successful in terms of the number of data points required to build an accurate surrogate model, it is computationally intensive. This paper presents a new approach to the exploitation component of the algorithm based on fuzzy logic. The new approach has the same desirable properties as the old method but is less complex, especially when applied to high-dimensional problems. Experiments on several test problems show that the new approach is considerably faster, without losing robustness or requiring additional samples to obtain similar model accuracy.
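The published LOLA-Voronoi and its fuzzy variant are more involved than can be shown here; the sketch below only illustrates the exploration/exploitation split the abstract describes, using Monte Carlo estimation of Voronoi cell volumes for exploration and a crude neighbour-deviation proxy for exploitation. The function name, unit-cube domain, and equal-weight blend are all assumptions:

```python
import numpy as np

def hybrid_sequential_score(X, y, candidates, n_mc=2000, alpha=0.5):
    """Score candidates for a hybrid sequential design (simplified sketch,
    not the published LOLA-Voronoi algorithm).

    X          : (n, d) evaluated sample locations in the unit cube
    y          : (n,)   simulator responses at X
    candidates : (m, d) candidate locations to score
    """
    rng = np.random.default_rng(1)
    # Exploration: Monte Carlo estimate of each sample's Voronoi cell volume.
    mc = rng.uniform(0.0, 1.0, size=(n_mc, X.shape[1]))
    owner = np.argmin(((mc[:, None, :] - X[None]) ** 2).sum(-1), axis=1)
    volume = np.bincount(owner, minlength=len(X)) / n_mc

    # Exploitation: deviation of each response from its neighbours' mean,
    # a rough stand-in for the local-nonlinearity measure in LOLA.
    nonlin = np.empty(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nb = np.argsort(d)[1:4]          # 3 nearest neighbours (skip self)
        nonlin[i] = abs(y[i] - y[nb].mean())
    nonlin /= nonlin.max() + 1e-12

    score = alpha * volume / volume.max() + (1 - alpha) * nonlin
    # Each candidate inherits the score of the sample whose cell it falls in.
    nearest = np.argmin(((candidates[:, None, :] - X[None]) ** 2).sum(-1), axis=1)
    return score[nearest]
```

A sequential loop would evaluate the simulator at the highest-scoring candidate, append the result to X and y, and repeat until the surrogate reaches the target accuracy or the evaluation budget runs out.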
Spatial snow water equivalent estimation for mountainous areas using wireless-sensor networks and remote-sensing products
We developed an approach to estimate snow water equivalent (SWE) through interpolation of spatially representative point measurements, using a k-nearest neighbors (k-NN) algorithm and historical spatial SWE data. The approach accurately reproduced measured SWE using different data sources for training and evaluation. In the central-Sierra American River basin, we used a k-NN algorithm to interpolate data from continuous snow-depth measurements in 10 sensor clusters by fusing them with 14 years of daily 500 m resolution SWE-reconstruction maps. Accurate SWE estimation over the melt season shows the potential for providing daily, near real-time distributed snowmelt estimates. Further south, in the Merced and Tuolumne basins, we evaluated the potential of the k-NN approach to improve real-time SWE estimates. Lacking dense ground-measurement networks there, we simulated k-NN interpolation of sensor data using selected pixels of a bi-weekly Lidar-derived SWE product. The k-NN extrapolations underestimate the Lidar-derived SWE, with a maximum bias of −10 cm at elevations below 3000 m and +15 cm above 3000 m. This bias was reduced by using a Gaussian-process regression model to spatially distribute the residuals. Using as few as 10 scenes of Lidar-derived SWE from 2014 as k-NN training data to estimate the 2016 spatial SWE, both RMSEs and MAEs were reduced from roughly 20–25 cm to 10–15 cm compared to using SWE reconstructions as training data. We found that the spatial accuracy of the historical data is more important for learning the spatial distribution of SWE than the number of historical scenes available. Blending continuous, spatially representative ground-based sensors with a historical library of SWE reconstructions over the same basin can provide real-time spatial SWE maps that accurately represent Lidar-measured snow depth, and the estimates can be improved by using historical Lidar scans instead of SWE reconstructions.
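A minimal sketch of the scene-matching idea as we read it from the abstract (the function name, inverse-distance weighting, and array layout are assumptions, not the paper's implementation): find the historical scenes whose values at the sensor pixels best match today's readings, then blend their full maps. The Gaussian-process residual correction mentioned above would be layered on top of this estimate.

```python
import numpy as np

def knn_swe_estimate(sensor_swe, hist_at_sensors, hist_maps, k=5):
    """Estimate a spatial SWE map from point sensor readings by k-NN
    matching against a library of historical spatial SWE scenes.

    sensor_swe      : (n_sensors,) current sensor readings
    hist_at_sensors : (n_scenes, n_sensors) historical values at sensor pixels
    hist_maps       : (n_scenes, ny, nx) full historical SWE maps
    """
    # Distance from today's readings to each historical scene's values
    # at the same pixels; the k closest scenes are the "neighbors".
    dist = np.linalg.norm(hist_at_sensors - sensor_swe, axis=1)
    nearest = np.argsort(dist)[:k]
    # Inverse-distance weighted blend of the k full historical maps.
    w = 1.0 / (dist[nearest] + 1e-9)
    w /= w.sum()
    return np.tensordot(w, hist_maps[nearest], axes=1)
```

This reflects the abstract's finding that accurate historical scenes (e.g., Lidar-derived SWE) matter more than the number of scenes: the estimate can only be as good as the spatial patterns in the library it blends.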
MODEL-BASED PREDICTIVE ANALYTICS FOR ADDITIVE AND SMART MANUFACTURING
Qualification and certification for additive and smart manufacturing systems can be uncertain and very costly. Using available historical data can mitigate some of the costs of producing and testing sample parts. However, such data lack the flexibility to represent specific new problems, which decreases predictive accuracy and efficiency. To address these needs, this dissertation introduces modeling techniques that can proactively estimate the results expected from additive and smart manufacturing processes, swiftly and with practical levels of accuracy and reliability. More specifically, this research addresses the current challenges and limitations posed by the use of available data and the high cost of new data by tailoring statistics-based metamodeling techniques to enable affordable prediction for these systems.
The result is an integrated approach to customizing and building predictive metamodels for the unique features of additive and smart manufacturing systems. This integrated approach is composed of five main parts that cover the broad spectrum of requirements. A domain-driven metamodeling approach uses physics-based knowledge to select the most appropriate metamodeling algorithm without reliance upon statistical data. A maximum-predictive-error updating method iteratively improves predictability from a given dataset. A grey-box metamodeling approach combines statistics-based black-box and physics-based white-box models to significantly increase predictive accuracy with less expensive data overall; a sketch of this idea follows below. To improve computational efficiency for large datasets, a dynamic metamodeling method modifies the traditional Kriging technique to improve its efficiency and predictability for smart manufacturing systems. Finally, a super-metamodeling method optimizes results regardless of problem conditions by avoiding the challenge of selecting the most appropriate metamodeling algorithm.
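One concrete reading of the grey-box idea (a sketch under assumptions, not the dissertation's implementation) is to fit the statistics-based black-box, here a Gaussian-process/Kriging model, to the residuals the physics-based white-box cannot explain, so that expensive data is spent only on the discrepancy. The white-box function and data below are toy stand-ins:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def white_box(x):
    """Hypothetical physics-based model (a stand-in; the dissertation's
    actual white-box models are process-specific)."""
    return 3.0 * x[:, 0] + np.sin(2.0 * x[:, 1])

# Training data from an expensive simulator or experiment (synthetic here).
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = white_box(X) + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.05, 30)

# Grey-box: a GP (Kriging-type) model learns only the residual, i.e. the
# part of the response the physics model misses.
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y - white_box(X))

def grey_box(x):
    return white_box(x) + gp.predict(x)

print(grey_box(np.array([[0.3, 0.7]])))
```

Because the physics model already captures the dominant trend, the GP needs far fewer expensive training points than a pure black-box model of the full response would, which is the cost advantage the paragraph above describes.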
To realize the benefits of all five approaches, an integrated metamodeling process was developed and implemented in a tool package to systematically select the suitable algorithm, sampling method, and combination of models. All the functions of this tool package were validated and demonstrated using two empirical datasets from additive manufacturing processes.