4 research outputs found

    Geostatistical simulations with heterotopic hard and soft data without modeling the linear model of coregionalization

    Most mining decisions are based on models estimated or simulated from sample information. During the exploration stage, samples are commonly taken from diamond drill holes, which are accurate and precise; these samples are considered hard data. In the production stage, new samples are added. These are cheaper and more abundant than the drill hole samples, but imprecise, and are here termed soft data. Hard and soft data are usually not sampled at the same locations, so together they form a heterotopic dataset. This article proposes a framework for geostatistical simulation with completely heterotopic soft data. The simulation proceeds in two steps. First, the variable of interest is simulated at the locations where soft data are available; the local conditional distributions built at these locations consider both hard and soft data and are obtained by simple cokriging with the intrinsic coregionalization model. Second, the variable of interest is simulated over the entire grid, conditioned on the original hard data and the values previously simulated at the soft data locations. The results show that the information from soft data improved both the accuracy and the precision of the simulated models. The proposed framework is illustrated by a case study with data from an underground copper mine.
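    The step-1 conditional distribution can be sketched in a few lines of linear algebra. Below is a minimal, hypothetical illustration of a simple-cokriging estimate at one soft-data location under an intrinsic coregionalization model; the spherical correlogram, the coregionalization matrix B, the means, and all data values are invented for the example and are not from the paper.

```python
# Minimal sketch of step 1: simple cokriging of the hard variable Z at one
# soft-data location, under an intrinsic coregionalization model (ICM).
# All names, values, and the spherical correlogram are illustrative
# assumptions, not the authors' exact implementation.
import numpy as np

def spherical(h, a=100.0):
    """Spherical correlogram rho(h) with range a (assumed model)."""
    h = np.asarray(h, dtype=float)
    return np.where(h < a, 1.0 - 1.5 * h / a + 0.5 * (h / a) ** 3, 0.0)

# ICM: all direct and cross covariances share one correlogram rho(h),
# scaled by a positive-definite coregionalization matrix B.
B = np.array([[1.0, 0.7],   # C_ZZ, C_ZY  (Z = hard variable)
              [0.7, 0.8]])  # C_YZ, C_YY  (Y = soft variable)

# toy 1-D configuration: hard data Z, one soft datum Y
x_hard, z_hard = np.array([0.0, 80.0]), np.array([1.2, 0.4])
x_soft, y_soft = np.array([30.0]), np.array([0.9])
x0 = 30.0   # estimate Z at the soft-data location itself

# stack data as (location, variable-index) pairs
locs = np.concatenate([x_hard, x_soft])
var = np.array([0, 0, 1])            # 0 = Z, 1 = Y
vals = np.concatenate([z_hard, y_soft])
means = np.array([0.8, 0.7])         # assumed known means (simple cokriging)

# left-hand covariance matrix and right-hand covariance vector
H = np.abs(locs[:, None] - locs[None, :])
K = B[var[:, None], var[None, :]] * spherical(H)
k = B[var, 0] * spherical(np.abs(locs - x0))

w = np.linalg.solve(K, k)
sck_mean = means[0] + w @ (vals - means[var])
sck_var = B[0, 0] - w @ k
# drawing from N(sck_mean, sck_var) gives one simulated Z value at the
# soft-data location; repeating over all soft locations completes step 1.
print(sck_mean, sck_var)
```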

    Spatio-temporal modeling of groundwater quality deterioration and resource depletion

    In hydrogeology, the analysis of groundwater features is based on multiple data related to correlated variables recorded over a spatio-temporal domain. Multivariate geostatistical tools are therefore fundamental for assessing data variability in space and time, as well as for parametric and nonparametric modeling. In this work, three key hydrological indicators of groundwater quality (sodium adsorption ratio, chloride, and electrical conductivity), together with the phreatic level, are investigated and modeled for prediction purposes in the unconfined aquifer of the central area of the Veneto Region (Italy). Using a new geostatistical approach, probability maps of groundwater resource deterioration are computed, and some areas where the aquifer needs strong attention are identified in the north-east part of the study region. The proposed analytical methodology and the findings can support policy makers in planning actions aimed at sustainable water management, which should enable better monitoring of groundwater used for drinking and ensure high quality of water for irrigation purposes.
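    The probability maps mentioned above can be understood as simple ensemble post-processing of geostatistical realizations. The sketch below, with invented array shapes, synthetic stand-in fields, and an assumed 250 mg/L chloride limit, computes a per-cell exceedance frequency; it stands in for, and is not, the authors' method.

```python
# Hypothetical post-processing: turn an ensemble of simulated fields into a
# probability map of deterioration, defined here as the frequency with which
# chloride exceeds a threshold. Shapes and the limit are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_real, ny, nx = 200, 50, 50
# stand-in for geostatistical realizations of chloride concentration (mg/L)
realizations = rng.lognormal(mean=5.0, sigma=0.4, size=(n_real, ny, nx))

threshold = 250.0  # assumed chloride limit, not from the paper
prob_map = (realizations > threshold).mean(axis=0)  # per-cell exceedance frequency
flagged = prob_map > 0.8  # cells where the aquifer would "need strong attention"
print(prob_map.shape, prob_map.max(), flagged.sum())
```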

    Geostatistical integration of core and well log data for high-resolution reservoir modeling

    Thesis (M.S.), Department of Geosciences, University of Missouri--Kansas City, 2012. Thesis advisor: Jejung Lee. Includes bibliographic references (p. 106-110).
    Analyzing data derived from well logging and core plugs to understand the heterogeneity of porosity in geologic formations is paramount in petrological studies. Well-log data and core-plug data are integrated to generate an accurate model of the porosity distribution; however, these data exist at different scales and resolutions, so one or both datasets must be scaled to enable integration. The present study established a geostatistical scaling (GS) model combining mean, variance, skewness, kurtosis, and standard deviation with a misfit algorithm and sequential Gaussian simulation to integrate porosity data, while correlating the depth of core-plug data within the well-log data through a scaling process. The GS model examined well-log porosity data from a Permian-age formation in the Hugoton Embayment in Kansas and from a Cretaceous-age formation in the GyeongSang Basin in the Republic of Korea. Synthetic core-plug porosity data were generated from well-log data with random number generation. The GS model requires basic histograms and variogram models for scaling the computerized tomography (CT) plug data to well-log scale and for integrating the data in a sequential Gaussian simulation. Variance-based statistics were calculated within specific intervals, based on the CT plug size, and a best fit for depth correlation was then determined. A new correlation algorithm, named the multiplicative inverse misfit correlation (MIMC) method, was formulated for accurate depth correlation. The correlated depth then constrained the well-log porosity data at reservoir or field scale to interpolate higher-resolution porosity distributions. Results for all the wells showed that the MIMC method accurately identified the depth from which the CT plug data originated. The porosity from the CT plug data was applied in a sequential Gaussian co-simulation, after kriging the well-log data. This yielded a greater refinement in determining the higher-porosity distributions than interpolation of the well-log data alone. These results validate the proposed high-resolution model for integrating data and correlating depths in reservoir characterization.
    Contents: Introduction -- Geostatistical framework -- Formulation of the geostatistical scaling model -- Applications of the model: case studies -- Discussion of results -- Conclusion -- Appendix
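    The abstract does not give the MIMC formula, but the idea of scoring candidate depths by inverse moment misfits can be sketched as follows. The sliding-window statistics, the 1/(misfit + eps) scoring rule, and all data below are assumptions for illustration, not the thesis's implementation.

```python
# Hedged sketch of a moment-matching depth search in the spirit of MIMC:
# slide a window over the well log, compare its moments to the core/CT-plug
# statistics, and score each depth by the product of inverse misfits.
import numpy as np
from scipy import stats

def moments(x):
    """Mean, variance, skewness, kurtosis of a 1-D sample."""
    return np.array([x.mean(), x.var(), stats.skew(x), stats.kurtosis(x)])

def mimc_depth(log_por, plug_por, window):
    """Return the log index whose window best matches the plug moments."""
    target = moments(plug_por)
    best_i, best_score = 0, -np.inf
    for i in range(len(log_por) - window + 1):
        misfit = np.abs(moments(log_por[i:i + window]) - target)
        score = np.prod(1.0 / (misfit + 1e-6))  # multiplicative inverse of misfits
        if score > best_score:
            best_i, best_score = i, score
    return best_i

rng = np.random.default_rng(1)
log_por = rng.normal(0.12, 0.03, size=500)       # synthetic well-log porosity
true_top = 240
plug_por = log_por[true_top:true_top + 20] + rng.normal(0, 0.002, 20)
print(mimc_depth(log_por, plug_por, window=20))  # should recover ~240
```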

    Geoestatística na ausência de hard data: lidando com o erro amostral (Geostatistics in the absence of hard data: dealing with sampling error)

    In geostatistics, observations of the phenomenon of interest that are free of error, or assumed to be so, are called hard data. However, such data cannot be obtained experimentally, because sampling error is intrinsic to any sampling process. Even in real datasets regarded as good practice or benchmarks, sampling errors with variance corresponding to 10% to 40% of the total variance occur, yet the data are routinely treated as error-free in geostatistical workflows. This thesis investigates the hypothesis that assuming real data to be hard data in geostatistical problems is incorrect and degrades the quality of the simulated models. The spatial correlation, the distributions, and the correlation structure measured from observations combine the behavior of the underlying true phenomenon with that of the errors; therefore, stochastic realizations conditioned to honor the parameters of the observations are not equiprobable with the real phenomenon. Whereas conventional workflows generate realizations conditioned to honor the parameters and values of the data, this thesis presents a series of methods that use error-affected observations to generate realizations equiprobable with the real phenomenon.
    The thesis is separated into five parts: (i) a generalized error model is developed for both univariate and multivariate data; in the multivariate case, the errors associated with observations of different variables can be correlated; (ii) alternatives are presented for estimating the error associated with each measurement; (iii) the covariogram and the distribution of the underlying true phenomenon are inferred from the sampled values, their estimated errors, and the covariogram and distribution fitted to the sampled values; in the multivariate case, the correlation structure between variables is also inferred; (iv) hard data cannot be sampled, but equiprobable hard-data values can be simulated: datasets are generated by replacing the initial observations with simulations of possible values of the real phenomenon, each dataset is used to simulate the phenomenon of interest over the whole domain, and both the datasets and the model realizations are conditioned to reproduce the inferred statistics of the real phenomenon; and (v) conclusions and proposals for future work are presented. Several short examples are included to elucidate the method and demonstrate the impact and relevance of each step. The proposed method yields models that are genuinely equiprobable with the real phenomenon and uncertainty spaces that better reproduce the true distance between model and reality.
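    Step (iv) can be illustrated with the simplest possible error model. The sketch below assumes additive Gaussian error with known variance and a Gaussian prior on the underlying true value, then replaces each observation with posterior draws; the thesis's generalized and multivariate error model is not reproduced here, and all numbers are invented.

```python
# Minimal sketch of step (iv), assuming the additive Gaussian error model
# z_obs = z + e, e ~ N(0, err_var), and a Gaussian prior on the underlying
# true value z. Each observation is replaced by draws from the posterior of
# z, yielding one equiprobable "hard data" set per draw.
import numpy as np

rng = np.random.default_rng(42)

def simulate_hard_data(z_obs, err_var, prior_mean, prior_var, n_sets=100):
    """Draw n_sets plausible error-free datasets from the per-datum posterior."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / err_var)           # Gaussian update
    post_mean = post_var * (prior_mean / prior_var + z_obs / err_var)
    return rng.normal(post_mean, np.sqrt(post_var), size=(n_sets, len(z_obs)))

z_obs = np.array([1.10, 0.85, 1.40, 0.95])   # noisy observations
err_var = np.full(4, 0.04)                   # estimated error variances (step ii)
hard_sets = simulate_hard_data(z_obs, err_var, prior_mean=1.0, prior_var=0.25)
# each row of hard_sets would condition one full-domain simulation (step iv)
print(hard_sets.shape)
```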