276 research outputs found

    Predictive Performance Of Machine Learning Algorithms For Ore Reserve Estimation In Sparse And Imprecise Data

    Get PDF
    Thesis (Ph.D.), University of Alaska Fairbanks, 2006.
    Traditional geostatistical estimation techniques have been used predominantly in the mining industry for ore reserve estimation. Determining mineral reserves has always posed a considerable challenge to mining engineers because of the geological complexities generally associated with ore body formation. Research over the years has produced a number of state-of-the-art methods for predictive spatial mapping tasks such as ore reserve estimation, and recent advances in machine learning algorithms (MLA) provide a new approach to this age-old problem. This thesis therefore focuses on two MLA, the neural network (NN) and the support vector machine (SVM), for ore reserve estimation. Application of the MLA is illustrated with two complex drill hole datasets: the first is placer gold drill hole data characterized by a high degree of spatial variability, sparseness, and noise, while the second comes from a continuous lode deposit. The success of models developed with these MLA depends largely on the data subsets on which they are trained and, subsequently, on the selection of appropriate model parameters. Data subsets obtained by random division are undesirable under sparse data conditions, since random division usually produces statistically dissimilar subsets and thereby reduces their applicability. The thesis therefore proposes an improved technique for data subdivision and also discusses issues of optimal model development. To investigate the accuracy and applicability of the MLA for ore reserve estimation, their generalization ability was compared with that of the geostatistical ordinary kriging (OK) method. Analysis of the mean square error (MSE), mean absolute error (MAE), mean error (ME), and coefficient of determination (R2) as model performance indices indicated that the MLA may significantly improve predictive ability and thereby reduce the inherent risk in ore reserve estimation.
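
    The thesis's point about random data division invites a small illustration. The Python sketch below is not the thesis's actual subdivision technique, only a naive stand-in: draw many random splits of a sparse grade dataset and keep the statistically most similar one, then score predictions with the four indices cited (MSE, MAE, ME, R2). The function names, the similarity score, and the toy data are all assumptions.

```python
import numpy as np

def similar_split(grades, test_frac=0.3, n_trials=1000, seed=0):
    """Keep the random split whose subsets have the closest mean and std.

    A naive stand-in (assumption), not the thesis's subdivision technique.
    """
    rng = np.random.default_rng(seed)
    n = len(grades)
    n_test = int(round(test_frac * n))
    best, best_score = None, np.inf
    for _ in range(n_trials):
        idx = rng.permutation(n)
        test, train = idx[:n_test], idx[n_test:]
        # Dissimilarity: gaps in mean and std, normalised by the overall std.
        score = (abs(grades[train].mean() - grades[test].mean())
                 + abs(grades[train].std() - grades[test].std())) / grades.std()
        if score < best_score:
            best, best_score = (train, test), score
    return best

def performance_indices(y_true, y_pred):
    """The four indices cited in the abstract."""
    err = y_pred - y_true
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": float(np.mean(err ** 2)),
            "MAE": float(np.mean(np.abs(err))),
            "ME": float(np.mean(err)),                   # signed bias
            "R2": float(1.0 - np.sum(err ** 2) / ss_tot)}

grades = np.random.default_rng(1).lognormal(size=40)     # sparse, skewed toy data
train, test = similar_split(grades)
```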

    Optimisation of Mobile Communication Networks - OMCO NET

    Get PDF
    The mini-conference "Optimisation of Mobile Communication Networks" focuses on advanced search and optimisation methods applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks, and aims to provide a forum for the exchange of recent knowledge, new ideas, and trends in this progressive and challenging area. The conference will popularise new, successful approaches to hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    Insect phenology: a geographical perspective

    Get PDF

    MULTI CRITERION PRIORITY ON KRIGING OF GOLD RESOURCES PREDICTION

    Get PDF
    This paper addresses three things. First, kriging estimation of gold grades distributed in a vein: empirical variograms are computed with both the classical Matheron estimator and the robust Cressie-Hawkins estimator, and two theoretical variogram models, spherical and exponential, are fitted by weighted least squares and ordinary least squares. Block kriging predictions are then made for six block sizes (15×15, 25×25, 35×35, 50×50, 75×75, and 100×100) based on the four variographic models. Second, the 24 resulting prediction combinations are prioritised with the TOPSIS multiple-criteria decision-making method. Finally, 15×15 block kriging based on a robust empirical variogram with an exponential model fitted by weighted least squares emerges as the best result.
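
    As a concrete illustration of the variogram fitting step described above, here is a minimal Python sketch of the spherical and exponential models and a weighted-least-squares fit to an empirical semivariogram, using pair counts as weights. The lag values, semivariances, and pair counts are made up, and the block kriging and TOPSIS ranking steps are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    """Spherical model: reaches the sill at range a, flat beyond."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, sill)

def exponential(h, nugget, sill, a):
    """Exponential model: approaches the sill asymptotically (practical range a)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * np.asarray(h) / a))

# Hypothetical empirical semivariogram: lags, semivariances, and pair counts.
lags = np.array([10., 20., 30., 40., 60., 80., 100.])
gamma = np.array([0.20, 0.45, 0.60, 0.70, 0.78, 0.80, 0.81])
npairs = np.array([120, 240, 300, 280, 220, 160, 90])

for name, model in [("spherical", spherical), ("exponential", exponential)]:
    # WLS fit: curve_fit weights residuals by 1/sigma^2, so sigma = 1/sqrt(pairs)
    # weights each lag in proportion to its number of point pairs.
    (nugget, sill, a), _ = curve_fit(model, lags, gamma, p0=[0.1, 0.8, 50.0],
                                     sigma=1.0 / np.sqrt(npairs))
    print(f"{name}: nugget={nugget:.3f} sill={sill:.3f} range={a:.1f}")
```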

    Bayesian Saltwater Intrusion Prediction and Remediation Design under Uncertainty

    Get PDF
    Groundwater resources are vital for sustainable economic and demographic development. Reliable prediction of groundwater head and contaminant transport is necessary for sustainable management of groundwater resources; however, groundwater simulation models are subject to uncertainty in their predictions. The goals of this research are to: (1) quantify the uncertainty in groundwater model predictions, and (2) investigate the impact of the quantified uncertainty on aquifer remediation designs. To pursue the first goal, this study generalizes the Bayesian model averaging (BMA) method and introduces the hierarchical Bayesian model averaging (HBMA) method, which segregates and prioritizes sources of uncertainty in a hierarchical structure and conducts BMA for saltwater intrusion prediction. A BMA tree of models is developed to understand the impact of individual sources of uncertainty, and of uncertainty propagation, on model predictions. The uncertainty analysis using HBMA leads to the best modeling proposition and to the relative and absolute model weights. To pursue the second goal, chance-constrained (CC) programming is proposed to deal with uncertainty in the remediation design. Prior studies of CC programming for groundwater remediation design are limited to parameter estimation uncertainty. This study combines CC programming with the BMA and HBMA methods, proposing the BMA-CC and HBMA-CC frameworks to also include model structure uncertainty in the CC programming. The results show that the prediction variances from parameter estimation uncertainty are much smaller than those from model structure uncertainty. Ignoring model structure uncertainty in the remediation design may therefore overestimate the design reliability, which can cause design failure.
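
    The central finding, that model structure uncertainty dominates parameter uncertainty, maps onto the standard BMA variance decomposition. The sketch below is a minimal illustration of that decomposition, not the study's HBMA implementation: model weights are approximated from BIC scores (a common shortcut), and the three models' predictions and BICs are invented.

```python
import numpy as np

def bma_combine(means, variances, bics):
    """Combine per-model predictions by Bayesian model averaging.

    means/variances: each model's prediction and its parameter-uncertainty
    variance; bics: each model's BIC, used to approximate posterior weights.
    """
    d = np.asarray(bics) - np.min(bics)
    w = np.exp(-0.5 * d)
    w /= w.sum()                                  # posterior model weights
    mu = np.sum(w * means)                        # BMA mean prediction
    within = np.sum(w * variances)                # parameter uncertainty
    between = np.sum(w * (means - mu) ** 2)       # model structure uncertainty
    return mu, within, between

# Hypothetical saltwater-interface predictions from three rival models.
mu, within, between = bma_combine(
    means=np.array([1.20, 1.45, 1.10]),
    variances=np.array([0.02, 0.03, 0.02]),
    bics=np.array([210.0, 212.5, 215.0]))
print(f"BMA mean={mu:.2f}, within-model var={within:.3f}, "
      f"between-model var={between:.3f}")
```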

    Geodetic Network Design for Low-Cost GNSS

    Get PDF
    Global Navigation Satellite System (GNSS) stations are a cornerstone of modern geodetic surveys, providing high temporal-frequency, sub-centimetric three-component measurements of surface displacement at fixed locations. However, the high cost of each instrument limits both spatial resolution and access for small-scale users. Low-cost GNSS stations, in particular single-frequency instruments, provide a cheaper alternative to conventional systems, enabling the deployment of larger GNSS networks. Increased observation density around continental fault zones would improve our ability to recover distributed aseismic slip, in particular afterslip, on continental faults, which may be poorly constrained by other geodetic techniques such as InSAR. To best recover aseismic slip using low-cost GNSS stations, a method for estimating optimal network layouts is required. For single-frequency GNSS stations, which present the greatest potential for increased GNSS network density, the reduced positional accuracy caused by ionospheric delay must also be addressed. In this work, a method for the automated design of single- and dual-frequency GNSS networks to recover distributed aseismic slip on continental faults is presented. Network layouts are generated using particle swarm optimisation and a criterion matrix technique to minimise the uncertainties on modelled slip values, relative to "best possible" values. These are estimated through non-uniform fault discretisation, in which a multi-objective genetic algorithm is used to explore the trade-off between the complexity of the discretisation and the associated model uncertainties. The reduced positional accuracy of single-frequency GNSS stations is mitigated through the network design and an understanding of the spatial structure of the ionospheric delay. Initial results demonstrate the potential of low-cost GNSS stations, in particular single-frequency GNSS stations, to recover distributed aseismic slip on continental faults. Future work should expand the methodology to include slip across multiple faults and the generation of mixed GNSS networks.
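
    Particle swarm optimisation over station positions can be illustrated compactly. The Python sketch below is a generic PSO, not the thesis's method: the criterion matrix, fault discretisation, and ionospheric terms are all replaced by an invented toy objective that rewards stations near a hypothetical east-west fault while penalising clustering.

```python
import numpy as np

def pso_layout(objective, n_stations, lo=-50.0, hi=50.0,
               n_particles=30, n_iter=200, seed=0):
    """Minimise objective(stations) over n_stations (x, y) positions."""
    rng = np.random.default_rng(seed)
    dim = 2 * n_stations
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p.reshape(-1, 2)) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p.reshape(-1, 2)) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest.reshape(-1, 2), pbest_f.min()

def toy_objective(stations):
    """Stand-in fitness: hug a hypothetical E-W fault at y = 0, avoid clustering."""
    pairwise = np.linalg.norm(stations[:, None] - stations[None, :], axis=-1)
    nearest = (pairwise + 1e9 * np.eye(len(stations))).min()
    return np.mean(stations[:, 1] ** 2) + 1.0 / (1e-6 + nearest)

layout, score = pso_layout(toy_objective, n_stations=8)
```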

    Advanced of Mathematics-Statistics Methods to Radar Calibration for Rainfall Estimation; A Review

    Get PDF
    Ground-based radar is one of the most important systems for precipitation measurement at high spatial and temporal resolution. Radar data are recorded digitally and are readily ingested into statistical analyses. These measurements require specific calibration to eliminate systematic errors and to minimize random errors. Because statistical methods are grounded in mathematics, they offer precise results and clear interpretation from relatively little data detail; although their mathematical structure can make them challenging to interpret, the accuracy of their conclusions and the interpretation of their output are appropriate. This article reviews advanced methods for calibrating ground-based radar for forecasting meteorological events, covering two aspects: statistical techniques and data mining. Statistical techniques refer to empirical analyses such as regression, while data mining includes artificial neural networks (ANN), kriging, nearest neighbour (NN), decision trees (DT), and fuzzy logic. The results show that kriging is the most applicable for interpolation, regression methods are simple to use, and data mining based on artificial intelligence is very precise. This review thus explores the characteristics of statistical parameters in the field of radar applications and shows which parameters give the best results for undefined cases. DOI: 10.17762/ijritcc2321-8169.15012
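
    As one concrete example of the regression-based calibration the review mentions, the sketch below fits the classic Z = aR^b reflectivity/rain-rate power law by ordinary least squares in log space; this choice of example, and the gauge/radar pairs, are assumptions and not drawn from the article.

```python
import numpy as np

# Hypothetical matched observations: radar reflectivity Z (mm^6/m^3)
# and co-located rain gauge rate R (mm/h).
Z = np.array([200., 450., 900., 1600., 2800., 5200.])
R = np.array([1.1, 2.3, 4.0, 6.2, 9.5, 15.0])

# Fit log Z = log a + b log R by ordinary least squares.
b, log_a = np.polyfit(np.log(R), np.log(Z), 1)
a = np.exp(log_a)
print(f"Z = {a:.1f} * R^{b:.2f}")    # cf. the well-known Z = 200 R^1.6 form

# Invert the fitted law to estimate rain rate from new reflectivity values.
R_hat = (np.array([700., 3000.]) / a) ** (1.0 / b)
```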

    Outils d'aide à l'optimisation des campagnes d'essais non destructifs sur ouvrages en béton armé [Decision-support tools for optimising non-destructive testing campaigns on reinforced concrete structures]

    Get PDF
    Non-destructive testing (NDT) methods are essential for estimating concrete properties (mechanical or physical) and their spatial variability. They are also a useful tool for reducing the budget for inspecting a structure. The proposed approach is part of an ANR project (EvaDéOS) whose objective is to optimize the monitoring of civil engineering structures by implementing preventive maintenance to reduce diagnosis costs. The objective of this thesis was to best characterize a given property of concrete (e.g., mechanical strength, porosity, degree of saturation) using NDT techniques sensitive to that same property. To this end, it is imperative to develop objective tools that rationalize a test campaign on reinforced concrete structures. First, an optimal spatial sampling tool is proposed to reduce the number of inspection points. The most commonly used algorithm is spatial simulated annealing (SSA). This procedure is regularly used in geostatistical applications, and in other areas, but remains almost unexploited for civil engineering structures. In this thesis, an original optimized spatial sampling method (OSSM), inspired by SSA and based on spatial correlation, was developed and tested for on-site inspection with two complementary fitness functions: the mean prediction error and the error on the estimate of global variability. The method has three parts. First, the spatial correlation of the NDT measurements is modeled by a variogram. Then, the relationship between the number of measurements arranged in a regular grid and the objective function is determined using a spatial interpolation method called kriging. Finally, the OSSM algorithm minimizes the objective function by changing the positions of a smaller number of NDT measurements, yielding an optimal irregular grid. Destructive tests (DT) are needed to corroborate the information obtained from the NDT measurements. Because of their cost and the possible damage to the structure, an optimal sampling plan for collecting a limited number of cores is important. To this end, a previously developed data fusion procedure based on possibility theory is used to estimate concrete properties from the NDT; it is calibrated through a bias adjustment that requires DTs performed on cores. Since the DT results on cores are themselves uncertain, it is proposed to take this uncertainty into account and propagate it through the calibration to the fused results, yielding mean fused values with a standard deviation per point. A methodology is thus provided for positioning and minimizing the number of cores required to inspect a structure, by two methods: first, applying the OSSM to the fused property values at each measurement point, and second, minimizing the average standard deviation over all fused points obtained after propagating the DT uncertainties. Finally, as an alternative to possibility theory, neural networks are also tested for their relevance and ease of use.
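
    The three-part OSSM loop (variogram, kriging, annealed repositioning) can be caricatured in a few lines. The sketch below is a generic spatial simulated annealing, not the thesis's OSSM: instead of the kriging-based fitness functions, it minimises a cheap proxy (the mean distance from a prediction grid to the nearest measurement point), and all numerical settings are invented.

```python
import numpy as np

def mean_nearest_distance(samples, grid):
    """Proxy objective: mean distance from prediction nodes to nearest sample."""
    d = np.linalg.norm(grid[:, None] - samples[None, :], axis=-1)
    return d.min(axis=1).mean()

def ssa(n_points, extent=10.0, n_iter=5000, t0=1.0, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.linspace(0, extent, 25), np.linspace(0, extent, 25))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    pts = rng.uniform(0, extent, (n_points, 2))       # random initial layout
    f = mean_nearest_distance(pts, grid)
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter)                   # linear cooling schedule
        cand = pts.copy()
        i = rng.integers(n_points)                    # move one point at a time
        cand[i] = np.clip(cand[i] + rng.normal(0.0, step, 2), 0.0, extent)
        fc = mean_nearest_distance(cand, grid)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if fc < f or rng.random() < np.exp((f - fc) / max(t, 1e-9)):
            pts, f = cand, fc
    return pts, f

layout, fitness = ssa(n_points=15)
```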

    Path Loss Predictions in the VHF and UHF Bands Within Urban Environments: Experimental Investigation of Empirical, Heuristics and Geospatial Models

    Full text link
    Deep knowledge of how radio waves behave in a practical wireless channel is required for effective planning and deployment of radio access networks in urban environments. Empirical propagation models are popular for their simplicity, but they are prone to high prediction errors. Different heuristic methods and geospatial approaches have been developed to further reduce path loss prediction error; however, the efficacy of these newer techniques in built-up areas should be verified experimentally. In this paper, the efficiencies of empirical, heuristic, and geospatial methods for signal fading prediction in the very high frequency (VHF) and ultra-high frequency (UHF) bands in typical urban environments are evaluated and analyzed. Electromagnetic field strength measurements were performed at different test locations within four selected cities in Nigeria. The collected data are used to develop path loss models based on artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and kriging techniques. The prediction results of the developed models are compared with those of selected empirical models and with the field measurements. Apart from Egli and ECC-33, the root mean squared errors (RMSE) produced by all other models under investigation are considered acceptable. Specifically, the ANN and ANFIS models yielded the lowest prediction errors; the empirical models, however, had the lowest standard deviation of error across all bands. The findings of this study will help radio network engineers to achieve efficient radio coverage estimation, determine the optimal base station location, make a proper frequency allocation, select the most suitable antenna, and perform interference feasibility studies.
    This work was supported jointly by funding from the IoT-Enabled Smart and Connected Communities (SmartCU) Research Cluster and the Center for Research, Innovation and Discovery (CUCRID) of Covenant University, Ota, Nigeria.
    Faruk, N.; Popoola, S. I.; Surajudeen-Bakinde, N. T.; Oloyede, A. A.; Abdulkarim, A.; Olawoyin, L. A.; Ali, M.; ... (2019). Path Loss Predictions in the VHF and UHF Bands Within Urban Environments: Experimental Investigation of Empirical, Heuristics and Geospatial Models. IEEE Access, 7, 77293-77307. https://doi.org/10.1109/ACCESS.2019.2921411
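
    For readers unfamiliar with the empirical baseline, the sketch below fits a generic log-distance path loss model to measured data and scores it with RMSE, the paper's headline metric. This is not one of the paper's named models (Egli, ECC-33, ANN, ANFIS, kriging), and the distances and losses are illustrative only.

```python
import numpy as np

# Hypothetical drive-test data: Tx-Rx distance (km) and measured path loss (dB).
d = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0])
pl = np.array([78., 86., 97., 105., 114., 121.])

# Log-distance model: PL(d) = PL0 + 10 * n * log10(d / d0).
d0 = 0.1                                       # reference distance (km)
n, pl0 = np.polyfit(10 * np.log10(d / d0), pl, 1)

def predict(dist):
    """Predicted path loss (dB) at distance dist (km)."""
    return pl0 + n * 10 * np.log10(dist / d0)

rmse = np.sqrt(np.mean((predict(d) - pl) ** 2))
print(f"path loss exponent n={n:.2f}, PL0={pl0:.1f} dB, RMSE={rmse:.2f} dB")
```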