4,213 research outputs found

    History Matching Using Principal Component Analysis


    4D Seismic History Matching Incorporating Unsupervised Learning

    The work presented in this paper focuses on the history matching of reservoirs by integrating 4D seismic data into the inversion process using machine learning techniques. A new integrated scheme for the reconstruction of petrophysical properties with a modified Ensemble Smoother with Multiple Data Assimilation (ES-MDA) in a synthetic reservoir is proposed. The permeability field inside the reservoir is parametrised with an unsupervised learning approach, namely K-SVD dictionary learning (a generalisation of k-means clustering based on the singular value decomposition), combined with the Orthogonal Matching Pursuit (OMP) technique commonly used in sparsity-promoting regularisation schemes. Moreover, seismic attributes, in particular acoustic impedance, are parametrised with the Discrete Cosine Transform (DCT). This novel combination of techniques from machine learning, sparsity regularisation, seismic imaging and history matching aims to address the ill-posedness of the inversion of historical production data efficiently using ES-MDA. In the numerical experiments provided, I demonstrate that these sparse representations of the petrophysical properties and the seismic attributes yield better matches to the true production data and quantify the propagating waterfront better than more traditional methods that do not use comparable parametrisation techniques.
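
    A minimal sketch of the DCT parametrisation mentioned above, assuming a simple 2-D property field and keeping only the lowest-frequency coefficients; the field, grid size and n_keep are illustrative stand-ins rather than values from the paper, and the K-SVD/OMP part of the scheme is not reproduced here.

    # Sketch: DCT-based parametrisation of a 2-D property field, retaining only
    # the lowest-frequency coefficients as a sparse representation.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_parametrise(field, n_keep=16):
        """Return the n_keep x n_keep block of low-frequency DCT coefficients."""
        coeffs = dctn(field, norm="ortho")
        return coeffs[:n_keep, :n_keep]

    def dct_reconstruct(kept, shape):
        """Rebuild a field of the given shape from the retained coefficients."""
        coeffs = np.zeros(shape)
        coeffs[:kept.shape[0], :kept.shape[1]] = kept
        return idctn(coeffs, norm="ortho")

    rng = np.random.default_rng(0)
    field = rng.normal(size=(64, 64))             # stand-in for an impedance map
    kept = dct_parametrise(field, n_keep=16)      # 256 parameters instead of 4096
    approx = dct_reconstruct(kept, field.shape)   # smoothed reconstruction
    print(kept.size, approx.shape)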

    Generating Subsurface Earth Models using Discrete Representation Learning and Deep Autoregressive Network

    Subsurface earth models (referred to as geo-models) are crucial for characterizing complex subsurface systems. Multiple-point statistics are commonly used to generate geo-models. In this paper, a deep-learning-based generative method is developed as an alternative to the traditional geo-model generation procedure. The generative method comprises two deep-learning models, namely the hierarchical vector-quantized variational autoencoder (VQ-VAE-2) and the PixelSNAIL autoregressive model. Based on the principle of neural discrete representation learning, the VQ-VAE-2 learns to massively compress the geo-models, extracting the low-dimensional, discrete latent representation corresponding to each geo-model. PixelSNAIL then uses a deep autoregressive network to learn the prior distribution of these latent codes. To generate a geo-model, PixelSNAIL samples from the learned prior distribution of latent codes, and the decoder of the VQ-VAE-2 converts the newly sampled latent code into a newly constructed geo-model. PixelSNAIL can be used for unconditional or conditional geo-model generation. In unconditional generation, the workflow produces an ensemble of geo-models without any constraint. In conditional generation, it produces an ensemble of geo-models similar to a user-defined source image, which ultimately facilitates control and manipulation of the generated geo-models. To better reconstruct the fluvial channels in the geo-models, a perceptual loss is used in the VQ-VAE-2 model instead of the traditional mean squared error loss. At a given compression ratio, the quality of multi-attribute geo-model generation is better than that of single-attribute geo-model generation.
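
    As a rough illustration of the discrete bottleneck that neural discrete representation learning relies on, the sketch below quantises latent vectors against a fixed codebook; the encoder, decoder, codebook training and PixelSNAIL prior are omitted, and every dimension is an assumed placeholder.

    # Sketch of a VQ-VAE-style discrete bottleneck: each latent vector is
    # replaced by the index of its nearest codebook entry.
    import numpy as np

    def vector_quantise(latents, codebook):
        """latents: (N, D) encoder outputs; codebook: (K, D) embeddings.
        Returns integer codes and the corresponding quantised vectors."""
        # Squared distance between every latent and every codebook entry.
        d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        codes = d2.argmin(axis=1)           # discrete latent representation
        return codes, codebook[codes]       # quantised latents for the decoder

    rng = np.random.default_rng(0)
    latents = rng.normal(size=(8, 4))       # stand-in for encoder outputs
    codebook = rng.normal(size=(16, 4))     # K = 16 embeddings of dimension 4
    codes, quantised = vector_quantise(latents, codebook)
    print(codes)                            # the codes an autoregressive prior would model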

    Making the most of data: An information selection and assessment framework to improve water systems operations

    Advances in environmental monitoring systems are making a wide range of data available at increasingly high temporal and spatial resolution. This creates an opportunity to enhance real-time understanding of water system conditions and to improve prediction of their future evolution, ultimately increasing our ability to make better decisions. Yet many water systems are still operated using very simple information systems, typically based on basic statistical analysis and the operator’s experience. In this work, we propose a framework to automatically select the most valuable information for informing water system operations, supported by quantitative metrics that assess the operational and economic value of this information. The Hoa Binh reservoir in Vietnam is used to demonstrate the proposed framework in a multiobjective context, accounting for hydropower production and flood control. First, we quantify the expected value of perfect information, meaning the potential space for improvement under the assumption of exact knowledge of future system conditions. Second, we automatically select the most valuable information that could actually be used to improve the Hoa Binh operations. Finally, we assess the economic value of sample information on the basis of the resulting policy performance. Results show that our framework successfully selects information that enhances the performance of the operating policies with respect to both competing objectives, attaining a 40% improvement close to the target trade-off selected as a potentially good compromise between hydropower production and flood control.
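
    To make the expected value of perfect information concrete, here is a toy calculation under a wholly assumed payoff matrix and scenario probabilities (none of the numbers come from the Hoa Binh study): EVPI is the gap between deciding with foresight of each scenario and committing in advance to the single best policy.

    # Toy expected value of perfect information (EVPI) calculation.
    import numpy as np

    # payoff[i, j]: performance of policy i under future scenario j (hypothetical)
    payoff = np.array([[5.0, 1.0, 2.0],
                       [3.0, 3.0, 3.0],
                       [1.0, 4.0, 2.5]])
    scenario_prob = np.array([0.3, 0.5, 0.2])

    best_fixed = (payoff @ scenario_prob).max()                   # best policy chosen in advance
    with_foresight = (payoff.max(axis=0) * scenario_prob).sum()   # best policy per scenario
    print(f"EVPI = {with_foresight - best_fixed:.3f}")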

    Data-Driven Model Reduction for the Bayesian Solution of Inverse Problems

    One of the major challenges in the Bayesian solution of inverse problems governed by partial differential equations (PDEs) is the computational cost of repeatedly evaluating numerical PDE models, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. This paper proposes a data-driven projection-based model reduction technique to reduce this computational cost. The proposed technique has two distinctive features. First, the model reduction strategy is tailored to inverse problems: the snapshots used to construct the reduced-order model are computed adaptively from the posterior distribution. Posterior exploration and model reduction are thus pursued simultaneously. Second, to avoid repeated evaluations of the full-scale numerical model as in a standard MCMC method, we couple the full-scale model and the reduced-order model together in the MCMC algorithm. This maintains accurate inference while reducing the overall computational cost. In numerical experiments considering steady-state flow in a porous medium, the data-driven reduced-order model achieves better accuracy than a reduced-order model constructed using the classical approach, and it improves posterior sampling efficiency by several orders of magnitude compared to a standard MCMC method.
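
    A minimal sketch of the projection-based reduction step, assuming a set of snapshots is already available and using a stand-in linear full-scale operator; the adaptive, posterior-driven snapshot selection and the coupling with MCMC described above are not reproduced here.

    # Sketch: proper orthogonal decomposition (POD) basis from snapshots,
    # followed by a cheap reduced-order solve that is lifted back to full space.
    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 200, 10                             # full and reduced dimensions
    A = np.diag(np.linspace(1.0, 5.0, n))      # stand-in full-scale operator
    snapshots = rng.normal(size=(n, 50))       # states collected during sampling

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    V = U[:, :r]                               # POD basis: r dominant modes

    A_r = V.T @ A @ V                          # projected (r x r) operator
    b = rng.normal(size=n)
    x_r = np.linalg.solve(A_r, V.T @ b)        # reduced-order solve
    x_full = V @ x_r                           # lift back to the full space
    print(np.linalg.norm(A @ x_full - b) / np.linalg.norm(b))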

    An efficient polynomial chaos-based proxy model for history matching and uncertainty quantification of complex geological structures

    A novel polynomial chaos proxy-based history matching and uncertainty quantification method is presented that can be employed for complex geological structures in inverse problems. For complex geological structures, when there are many unknown geological parameters with highly nonlinear correlations, typically more than 10^6 full reservoir simulation runs might be required to probe the posterior probability space accurately given the production history of the reservoir. This is not practical for high-resolution geological models. One solution is to use a "proxy model" that replicates the simulation model for selected input parameters. The main advantage of the polynomial chaos proxy compared to other proxy models and response surfaces is that it is generally applicable and converges systematically as the order of the expansion increases. The Cameron and Martin theorem states that the convergence rate of standard polynomial chaos expansions is exponential for Gaussian random variables. To improve the convergence rate for non-Gaussian random variables, the generalized polynomial chaos is implemented, which uses the Askey scheme to choose the optimal basis for the polynomial chaos expansions [199]. Additionally, for non-Gaussian distributions that can be effectively approximated by a mixture of Gaussian distributions, we use a mixture-modeling-based clustering approach in which, within each cluster, the polynomial chaos proxy converges exponentially fast and the overall posterior distribution can be estimated more efficiently using different polynomial chaos proxies. The main disadvantage of the polynomial chaos proxy is that, for high-dimensional problems, the number of polynomial chaos terms increases drastically as the order of the expansion increases. Although different non-intrusive methods have been developed in the literature to address this issue, a large number of simulation runs is still required to compute the high-order terms of the polynomial chaos expansions. This work resolves this issue by proposing the reduced-terms polynomial chaos expansion, which preserves only the relevant terms in the polynomial chaos representation. We demonstrate that the sparsity pattern in the polynomial chaos expansion, when used with the Karhunen-Loève decomposition method or kernel PCA, can be systematically captured. A probabilistic framework based on the polynomial chaos proxy is also suggested, in the context of Bayesian model selection, to study the plausibility of different geological interpretations of the sedimentary environments. The proposed surrogate-accelerated Bayesian inverse analysis can be coherently used in practical reservoir optimization workflows and uncertainty assessments.
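
    For orientation only, the sketch below fits a polynomial chaos proxy in a single Gaussian variable using probabilists' Hermite polynomials and least squares; the "simulator", sample size and expansion order are hypothetical, and the reduced-terms and clustering machinery described above is not implemented.

    # Sketch: one-dimensional polynomial chaos proxy fitted by least squares.
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander, hermeval

    def expensive_model(xi):
        # Stand-in for a full reservoir simulation response.
        return np.exp(0.3 * xi) + 0.1 * xi**2

    rng = np.random.default_rng(0)
    order = 5
    xi_train = rng.normal(size=50)           # samples of xi ~ N(0, 1)
    y_train = expensive_model(xi_train)

    Psi = hermevander(xi_train, order)       # design matrix of He_0 ... He_5
    coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

    xi_test = rng.normal(size=5)
    print(hermeval(xi_test, coeffs))         # cheap proxy predictions
    print(expensive_model(xi_test))          # reference values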

    Probabilistic load forecasting with Reservoir Computing

    Some applications of deep learning require not only accurate results but also a quantification of the confidence in their predictions. The management of an electric power grid is one of these cases: to avoid risky scenarios, decision-makers need both precise and reliable forecasts of, for example, power loads. For this reason, point forecasts are not enough; it is necessary to adopt methods that provide an uncertainty quantification. This work focuses on reservoir computing (RC) as the core time series forecasting method, due to its computational efficiency and effectiveness in predicting time series. While the RC literature has mostly focused on point forecasting, this work explores the compatibility of several popular uncertainty quantification methods with the reservoir setting. Both Bayesian and deterministic approaches to uncertainty assessment are evaluated and compared in terms of prediction accuracy, computational resource efficiency and reliability of the estimated uncertainty, based on a set of carefully chosen performance metrics.
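
    A minimal echo state network sketch in the spirit of reservoir computing, with an ensemble of randomly initialised reservoirs giving a crude spread as an uncertainty proxy; the reservoir size, spectral radius, ridge penalty and synthetic load signal are illustrative choices, not the paper's configuration.

    # Sketch: one-step-ahead forecasting with a small echo state network, using
    # an ensemble of random reservoirs to get a rough uncertainty estimate.
    import numpy as np

    def esn_forecast(series, n_res=100, rho=0.9, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        W_in = rng.uniform(-0.5, 0.5, size=n_res)
        W = rng.normal(size=(n_res, n_res))
        W *= rho / np.abs(np.linalg.eigvals(W)).max()   # fix the spectral radius
        states, x = [], np.zeros(n_res)
        for u in series[:-1]:                           # drive the reservoir
            x = np.tanh(W_in * u + W @ x)
            states.append(x.copy())
        X, y = np.array(states), series[1:]             # state at t -> value at t+1
        W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
        x = np.tanh(W_in * series[-1] + W @ x)          # feed the last observation
        return x @ W_out                                # one-step-ahead forecast

    t = np.linspace(0, 20, 400)
    load = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
    preds = [esn_forecast(load, seed=s) for s in range(10)]
    print(np.mean(preds), np.std(preds))                # point forecast and spread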