
    Machine Learning Assisted Framework for Advanced Subsurface Fracture Mapping and Well Interference Quantification

    The oil and gas industry has historically spent significant amounts of capital to acquire large volumes of analog and digital data, much of which is left unused due to a lack of digital awareness. It has instead relied on individual expertise and numerical modelling for reservoir development, characterization, and simulation, which is extremely time-consuming and expensive and inevitably introduces significant human bias and error. One of the major questions with significant impact on unconventional reservoir development (e.g., completion design, production, and well spacing optimization), CO2 sequestration in geological formations (e.g., well and reservoir integrity), and engineered geothermal systems (e.g., maximizing fluid flow and well capacity) is how to quantify and map subsurface natural fracture systems. This needs to be done both locally, i.e., near the wellbore, and more broadly at the scale of the wellpad or region. In this study, conventional near-wellbore natural fracture mapping techniques are first discussed and integrated with more advanced technologies, such as fiber optics, specifically Distributed Acoustic Sensing (DAS) and Distributed Strain Sensing (DSS), to upscale fracture mapping to the regional scale. Next, a physics-based automated machine learning (AutoML) workflow is developed that incorporates an advanced data acquisition system collecting high-resolution drilling acceleration data to infer near-wellbore natural fracture intensities. The new AutoML workflow aims to minimize human bias and accelerate near-wellbore natural fracture mapping in real time. It shows great promise, reducing fracture mapping time and cost tenfold and producing more accurate, robust, reproducible, and measurable results. Finally, to remove human intervention entirely and further accelerate fracture mapping while drilling, computer vision and deep learning techniques were integrated into new workflows that automate the identification of natural fractures and other lithological features from borehole image logs. Different structures and workflows were tested, and two specific workflows were designed for this purpose. In the first workflow, fracture footprints on actual acoustic image logs (i.e., full or partial sigmoidal signatures with a range of amplitudes and vertical and horizontal displacements) are detected and classified into different categories with varying success. The second workflow uses the actual amplitude values recorded by the borehole image log and a binary representation of the produced images to detect and quantify the major fractures and bedding planes. The first workflow is more detailed and capable of identifying different classes of fractures, albeit computationally more expensive; the second is faster at detecting the major fractures and bedding planes. In conclusion, a regional subsurface natural fracture mapping technique integrating conventional logging, microseismic, and fiber optic data is presented. A new AutoML workflow designed and tested in a Marcellus Shale gas reservoir was used to predict near-wellbore fracture intensities from high-frequency drilling acceleration data. Two integrated workflows were designed and validated using three wells in the Marcellus Shale to extract natural fractures from acoustic image logs and amplitude recordings obtained during logging while drilling.
The new workflows have: i) minimized human bias in different aspects of fracture mapping, from image log analysis to machine learning model selection and hyperparameter optimization; ii) generated and quantified more accurate fracture predictions using different scoring metrics; iii) decreased the time and cost of fracture interpretation tenfold; and iv) produced more robust and reproducible results.
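    As a minimal, hedged sketch of the kind of automated model selection such an AutoML workflow relies on, the snippet below tunes and compares two candidate regressors for predicting fracture intensity from drilling-derived features. The feature names, synthetic data, and candidate models are illustrative assumptions, not the dissertation's actual workflow.

```python
# Minimal sketch of automated model selection for near-wellbore fracture
# intensity prediction from drilling acceleration features. Feature names,
# data, and candidate models are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Hypothetical features derived from high-frequency drilling acceleration
# (e.g., RMS amplitude, spectral centroid, kurtosis per depth interval).
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)  # fracture intensity proxy

candidates = {
    "random_forest": (RandomForestRegressor(random_state=0),
                      {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "gradient_boosting": (GradientBoostingRegressor(random_state=0),
                          {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
}

best_name, best_score, best_model = None, -np.inf, None
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, scoring="r2")
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_name, best_score, best_model = name, search.best_score_, search.best_estimator_

print(f"selected model: {best_name}, cross-validated R^2: {best_score:.3f}")
```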

    Advancing Carbon Sequestration through Smart Proxy Modeling: Leveraging Domain Expertise and Machine Learning for Efficient Reservoir Simulation

    Geological carbon sequestration (GCS) offers a promising solution for effectively managing excess carbon and mitigating the impact of climate change. This doctoral research introduces a cutting-edge Smart Proxy Modeling-based framework, integrating artificial neural networks (ANNs) and domain expertise, to re-engineer and empower numerical reservoir simulation for efficient modeling of CO2 sequestration and to demonstrate the predictive conformance and replicative capabilities of smart proxy modeling. Creating well-performing proxy models requires extensive human intervention and trial-and-error processes. Additionally, a large training database is essential for an ANN model applied to complex tasks such as deep saline aquifer CO2 sequestration, since it supplies the neural network's input and output data. One major limitation in CCS programs is the lack of real field data, owing to the scarcity of field applications and issues with confidentiality. Considering these drawbacks, and given the high-dimensional nonlinearity, heterogeneity, and coupling of multiple physical processes associated with numerical reservoir simulation, novel research is needed to handle these complexities, as it allows for the creation of possible CO2 sequestration scenarios that may be used as a training set. This study addresses several types of realistic and practical static and dynamic field-based data augmentation techniques, covering spatial complexity, spatio-temporal complexity, and heterogeneity of reservoir characteristics. By incorporating domain-expertise-based feature generation, this framework honors a precise representation of the reservoir while overcoming the computational challenges associated with numerical reservoir tools. The developed ANN accurately replicated fluid flow behavior, resulting in significant computational savings compared to traditional numerical simulation models. The results showed that all the ML models achieved very good accuracy and high efficiency. The findings revealed that the quality of the path between the focal cell and the injection wells emerged as the most crucial factor in both the CO2 saturation and pressure estimation models. These insights significantly contribute to our understanding of CO2 plume monitoring, paving the way for breakthroughs in investigating reservoir behavior at minimal computational cost. The study's commitment to replicating numerical reservoir simulation results underscores the model's potential to contribute valuable insights into the behavior and performance of CO2 sequestration systems, as a complementary tool to numerical reservoir simulation when no measured field data are available. The transformative nature of this research has vast implications for advancing carbon storage modeling technologies. By addressing the computational limitations of traditional numerical reservoir models and harnessing the synergy between machine learning and domain expertise, this work provides a practical workflow for efficient decision-making in sequestration projects.
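    The sketch below illustrates, under stated assumptions, what an ANN-based smart proxy of this kind can look like: a small feedforward network trained to map reservoir features to a simulator-computed CO2 response. The feature names (distance to injector, porosity, permeability, time) and the synthetic response are placeholders, not the study's exact domain-expertise features or data.

```python
# Minimal sketch of an ANN smart proxy trained on simulator-style data.
# Features and target are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.uniform(50, 2000, n),    # distance from focal cell to injection well (m)
    rng.uniform(0.05, 0.3, n),   # porosity
    rng.lognormal(3, 1, n),      # permeability (mD)
    rng.uniform(0, 3650, n),     # injection time (days)
])
# Synthetic stand-in for simulator-computed CO2 saturation at the focal cell.
y = np.exp(-X[:, 0] / 800) * (X[:, 3] / 3650) + 0.02 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
proxy = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1),
)
proxy.fit(X_tr, y_tr)
print(f"hold-out R^2: {proxy.score(X_te, y_te):.3f}")
```

    Once trained on enough simulation scenarios, such a proxy can be evaluated in milliseconds per cell, which is where the computational savings over full numerical simulation come from.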

    Machine learning for the subsurface characterization at core, well, and reservoir scales

    The development of machine learning techniques and the digitization of subsurface geophysical/petrophysical measurements provide a new opportunity for industries focused on the exploration and extraction of subsurface earth resources, such as oil, gas, coal, geothermal energy, mining, and sequestration. With more data and more computational power, the traditional methods for subsurface characterization and engineering adopted by these industries can be automated and improved. New phenomena can be discovered, and new understanding may be acquired from the analysis of big data. The studies conducted in this dissertation explore the possibility of applying machine learning to improve the characterization of geological materials and geomaterials. Accurate characterization of subsurface hydrocarbon reservoirs is essential for economical oil and gas reservoir development. The characterization of reservoir formations requires the integrated interpretation of data from different sources. Large-scale seismic measurements, intermediate-scale well logging measurements, and small-scale core sample measurements help engineers understand the characteristics of hydrocarbon reservoirs. Seismic data acquisition is expensive, and core samples are sparse and of limited volume. Consequently, well log acquisition provides essential information that improves seismic analysis and core analysis. However, well logging data may be missing due to financial or operational challenges or may be contaminated by the complex downhole environment. At the near-wellbore scale, I solve the data constraint problem in reservoir characterization by applying machine learning models to generate synthetic sonic traveltime and NMR logs that are crucial for geomechanical and pore-scale characterization, respectively. At the core scale, I solve problems in fracture characterization by processing multipoint sonic wave propagation measurements with machine learning to characterize the dispersion, orientation, and distribution of cracks embedded in a material. At the reservoir scale, I utilize reinforcement learning models to achieve automatic history matching, using a fast-marching-based reservoir simulator to estimate the reservoir permeability that controls the pressure transient response of the well. The application of machine learning provides new insights into traditional subsurface characterization techniques. First, by applying shallow and deep machine learning models, sonic logs and NMR T2 logs can be generated from other easy-to-acquire well logs with high accuracy. Second, the development of the sonic wave propagation simulator enables the characterization of crack-bearing materials from simple wavefront arrival times. Third, the combination of reinforcement learning algorithms and an encapsulated reservoir simulator provides a possible solution for automatic history matching.
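    As a hedged illustration of the synthetic-log idea at the near-wellbore scale, the sketch below regresses a compressional sonic (DTC) log on other easy-to-acquire logs. The log mnemonics and the synthetic relationship between them are assumptions for illustration; the dissertation uses real well-log data and a range of shallow and deep models.

```python
# Minimal sketch of synthetic sonic log generation from other well logs.
# Log values and their relationship are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 3000
gr   = rng.uniform(20, 150, n)        # gamma ray (API)
rhob = rng.uniform(2.0, 2.8, n)       # bulk density (g/cc)
nphi = rng.uniform(0.02, 0.4, n)      # neutron porosity (v/v)
rt   = rng.lognormal(1.5, 0.8, n)     # deep resistivity (ohm-m)
X = np.column_stack([gr, rhob, nphi, np.log10(rt)])
# Synthetic stand-in for compressional sonic traveltime (us/ft).
dtc = 200 - 50 * rhob + 80 * nphi + 0.05 * gr + rng.normal(scale=2, size=n)

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, dtc, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```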

    Physics Constrained Data-Driven Technique for Reservoir Proxy Model and Model Order Reduction

    In reservoir engineering, data-driven methodologies have been applied successfully to infer interwell connections and flow patterns in the subsurface, to reduce the order of reservoir simulation models, and to assist field development plans, including the history matching and performance prediction phases, of conventional and unconventional reservoirs. In this work, we propose to utilize data-driven methods to achieve two main objectives: (1) enhance model order reduction (MOR) techniques by accounting for sparsity in the data; and (2) develop a reservoir simulation proxy model based solely on data. For the first objective, fast simulation algorithms based on reduced-order modeling have been developed to facilitate large-scale, complex, and computationally intensive reservoir simulation and optimization. Methods such as proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) have been used successfully to efficiently capture and predict the behavior of reservoir fluid flow. Non-intrusive techniques (e.g., DMD) are especially attractive because they are data-driven and do not require code modifications (equation-free). To achieve our first objective, we use the concept of sparsity from statistical learning to further enhance the performance and reduce the dimension of standard DMD by investigating sparse approximations of the snapshots. The methods for achieving the second objective fall into two categories: (1) building a proxy model via system identification; and (2) end-to-end production prediction with machine learning techniques. Although real-time data acquisition and analysis are becoming routine in many workflows (such as reservoir simulation), there is still a disconnect between raw data and traditional first-principles theory, whereby conservation laws and phenomenological behavior are used to derive the underlying spatio-temporal evolution equations. We propose to combine sparsity-promoting methods and machine learning techniques to find the governing equations from spatio-temporal data series produced by a reservoir simulator. The idea is to connect data with the physical interpretation of the dynamical system; we achieve this by identifying the nonlinear ODE system of our discretized reservoir. In addition, since production prediction has been the ultimate goal of much reservoir simulation and modeling, various types of reservoir simulation models have been developed to provide the most information about reserves and to aid the decision-making process. The other proxy model we developed benefits from the evolution of machine learning techniques and the increasing availability of extensive historical data. A powerful technique, the recurrent neural network (RNN), has proved useful for modeling sequence data. We apply RNNs to analyze control parameter data and synthetic historical production data for better reservoir characterization and prediction. All of the above-mentioned MOR and proxy model developments are tested on single- and two-phase fluid flow reservoir simulation problems.
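    For reference, the sketch below implements standard (exact) DMD on a snapshot matrix, the baseline that the sparsity-enhanced variant described above builds on. The snapshots here are synthetic; in the work above they would come from a reservoir simulator (e.g., pressure or saturation fields over time).

```python
# Minimal sketch of standard (exact) DMD with NumPy; snapshot data are synthetic.
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD: returns modes Phi, eigenvalues lam, and amplitudes b."""
    X, Xp = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
    Atilde = U.conj().T @ Xp @ V / s           # projected low-rank linear operator
    lam, W = np.linalg.eig(Atilde)
    Phi = Xp @ V / s @ W                       # exact DMD modes
    b = np.linalg.lstsq(Phi, snapshots[:, 0], rcond=None)[0]  # mode amplitudes
    return Phi, lam, b

# Synthetic snapshots: two decaying spatial modes observed over 60 time steps.
x = np.linspace(0, 1, 200)[:, None]
t = np.arange(60)[None, :]
data = np.sin(3 * np.pi * x) * 0.97**t + 0.5 * np.cos(7 * np.pi * x) * 0.90**t

Phi, lam, b = dmd(data, rank=2)
# Reconstruct the final snapshot from the identified linear dynamics.
recon = (Phi * (lam ** t[0, -1])) @ b
err = np.linalg.norm(recon - data[:, -1]) / np.linalg.norm(data[:, -1])
print(f"relative reconstruction error: {err:.2e}")
```

    The sparsity-promoting extension discussed above would modify this baseline by seeking sparse approximations of the snapshots or mode amplitudes rather than using the full SVD basis.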

    Auto-detection interpretation model for horizontal oil wells using pressure transient responses

    Directional drilling is an excellent option to extend the limited reservoir reach and contact offered by vertical wells. Pressure transient responses (PTR) of horizontal wells provide key information about the reservoirs drilled. In this study, multilayer perceptron (MLP) neural networks are used to correctly identify reservoir models from pressure derivative curves derived from horizontal wells. To this end, 2560 pressure derivative curves for six distinct reservoir models are generated and used to design a machine-learning classifier. A single-hidden-layer MLP network with 5 neurons, trained with a scaled conjugate gradient algorithm, is selected as the best classifier. This smart classifier provides a total classification accuracy of 98.3%, a mean square error of 0.00725, and a coefficient of determination of 0.97332 over the whole dataset. The performance accuracy of the proposed classifier is verified with real field data, synthetically generated noisy PTR, and some signals outside the range initially assessed by the training and testing data subsets. The developed network can correctly identify the reservoir-flow model with a probability of close to 0.9. The novelty of this work is that it employs a large dataset of horizontal (not vertical) well tests applied to six reservoir-flow models and includes noisy data to train and verify a neural network model that reliably achieves a high level of prediction accuracy. Cited as: Moosavi, S.R., Vaferi, B., Wood, D.A. Auto-detection interpretation model for horizontal oil wells using pressure transient responses. Advances in Geo-Energy Research, 2020, 4(3): 305-316, doi: 10.46690/ager.2020.03.08
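    The sketch below reproduces the classifier architecture the abstract describes (a single hidden layer of 5 neurons distinguishing six reservoir-flow models), but on synthetic placeholder curves rather than the paper's dataset, and with scikit-learn's 'lbfgs' solver standing in for the scaled conjugate gradient algorithm, which scikit-learn does not provide.

```python
# Minimal sketch of a 6-class MLP classifier for pressure-derivative curves.
# Training data are synthetic placeholders; solver substitutes for SCG.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_curves, n_points, n_classes = 2560, 50, 6
labels = rng.integers(0, n_classes, n_curves)
# Placeholder pressure-derivative curves: each class gets a distinct log-log slope.
log_t = np.linspace(-2, 3, n_points)
slopes = np.linspace(-0.5, 1.0, n_classes)
curves = slopes[labels, None] * log_t + rng.normal(scale=0.05, size=(n_curves, n_points))

X_tr, X_te, y_tr, y_te = train_test_split(curves, labels, test_size=0.3, random_state=3)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(5,), solver="lbfgs", max_iter=2000, random_state=3),
)
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```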

    Enhancing wettability prediction in the presence of organics for hydrogen geo-storage through data-driven machine learning modeling of rock/H2/brine systems

    The success of geological H2 storage relies significantly on rock–H2–brine interactions and wettability. Experimentally assessing the H2 wettability of storage rocks and caprocks as a function of thermo-physical conditions is arduous because of high H2 reactivity and embrittlement damage. Data-driven machine learning (ML) predictions of rock–H2–brine wettability are less strenuous and more precise, and they can be made at geo-storage conditions that are impossible or hazardous to attain in the laboratory. Thus, ML models were utilized in this research to accurately model the wettability behavior of a ternary system consisting of H2, rock minerals (quartz and mica), and brine at different geological operating conditions. The results revealed that the ML models accurately captured the wettability behavior at different geo-storage conditions, yielding less than 5% mean absolute percentage error and coefficient of determination values above 0.95. Partial dependence (sensitivity) plots were generated to evaluate the impact of individual features on the trained models; these plots revealed that the models accurately captured the physics behind the problem. Furthermore, a mathematical equation is derived from the trained ML model to predict the wettability behavior without using any ML software. The accuracy of the ML model's predictions can be beneficial for precisely estimating H2 geo-storage capacities and assessing the H2 containment security of storage rocks and caprocks for large-scale geo-storage projects.
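    A minimal sketch of this kind of data-driven wettability model, with a partial-dependence check, is shown below. The feature names, ranges, model choice, and synthetic contact-angle response are assumptions for illustration only, not the published dataset or model.

```python
# Minimal sketch of a data-driven contact-angle model with partial dependence.
# Features and response are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(4)
n = 1500
pressure = rng.uniform(1, 25, n)      # MPa
temp     = rng.uniform(298, 343, n)   # K
salinity = rng.uniform(0, 5, n)       # mol/kg
organic  = rng.uniform(0, 1e-2, n)    # organic acid concentration (mol/L)
X = np.column_stack([pressure, temp, salinity, organic])
# Synthetic stand-in for a measured H2/brine/mineral contact angle (degrees).
theta = (25 + 1.2 * pressure + 800 * organic + 2 * salinity
         - 0.05 * (temp - 298) + rng.normal(scale=2, size=n))

model = GradientBoostingRegressor(random_state=0).fit(X, theta)
print(f"training R^2: {model.score(X, theta):.3f}")

# Partial dependence of predicted contact angle on pressure (feature index 0).
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print(pd_result["average"][0])  # mean predicted contact angle over the pressure grid
```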

    Artificial Intelligence and Cognitive Computing

    Artificial intelligence (AI) is a subject garnering increasing attention in both academia and industry today. The understanding is that AI-enhanced methods and techniques create a variety of opportunities related to improving basic and advanced business functions, including production processes, logistics, financial management and others. As this collection demonstrates, AI-enhanced tools and methods tend to offer more precise results in the fields of engineering, financial accounting, tourism, air-pollution management and many more. The objective of this collection is to bring these topics together to offer the reader a useful primer on how AI-enhanced tools and applications can be of use in today’s world. In the context of the frequently fearful, skeptical and emotion-laden debates on AI and its value added, this volume promotes a positive perspective on AI and its impact on society. AI is part of a broader ecosystem of sophisticated tools, techniques and technologies, and therefore it is not immune to developments in that ecosystem. It is thus imperative that inter- and multidisciplinary research on AI and its ecosystem is encouraged. This collection contributes to that effort.