
    Realtime reservoir characterization and beyond: cyber-infrastructure tools and technologies

    The advent of the digital oilfield and the rapidly decreasing cost of computing create opportunities as well as challenges in simulation-based reservoir studies, in particular real-time reservoir characterization and optimization. One challenge our efforts are directed toward is the use of real-time production data to perform live reservoir characterization using high-throughput, high-performance computing environments. To that end we developed the required tools: a parallel reservoir simulator, a parallel ensemble Kalman filter, and a scalable workflow manager. Using this collection of tools, a reservoir modeler is able to perform large-scale reservoir management studies in short periods of time. This includes studies with thousands of models that are individually complex and large, involving millions of degrees of freedom. Using parallel processing, we are able to solve these models much faster than we otherwise would on a single, serial machine; this motivated the development of a fast parallel reservoir simulator. Furthermore, distributing those simulations across resources reduces the total time to completion by making use of distributed processing; this motivated the development of a scalable high-throughput workflow manager. Finally, with thousands of models, each with millions of degrees of freedom, we end up with a superfluity of model parameters, which translates directly to billions of degrees of freedom in the reservoir study. To be able to use the ensemble Kalman filter on these models, we needed to develop a parallel implementation of the ensemble Kalman filter. This thesis discusses the enabling tools and technologies developed to address a specific problem: how to accurately characterize reservoirs using large numbers of complex, detailed models. For these characterization studies to be helpful in making production decisions, the time to solution must be feasible. To that end, our work is focused on developing and extending these tools and optimizing their performance.
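
    A minimal sketch of the task-farming pattern such a workflow manager rests on (illustrative only; run_simulation and the ensemble layout are hypothetical stand-ins, not the thesis's actual tools): each ensemble member's forward simulation is independent, so the manager's core is an embarrassingly parallel map followed by a gather step that feeds the ensemble Kalman filter.

        from concurrent.futures import ProcessPoolExecutor

        def run_simulation(params):
            # Stand-in for one forward reservoir-simulation run; a real job
            # would launch the parallel simulator and return production data.
            return sum(params)  # toy "simulated response"

        def run_ensemble(ensemble, workers=8):
            # Distribute the independent simulations across resources, then
            # gather forecasts for the (parallel) EnKF analysis step.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(run_simulation, ensemble))

        if __name__ == "__main__":
            ensemble = [[float(i), float(i) ** 2] for i in range(100)]
            forecasts = run_ensemble(ensemble)
            print(len(forecasts), "forecasts gathered for assimilation")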

    Reservoir and lithofacies shale classification based on NMR logging

    Shale gas reservoirs have fine-grained textures and high organic contents, leading to complex pore structures. Accurate well-log-derived pore size distributions are therefore difficult to acquire for this unconventional reservoir type, despite their importance. However, nuclear magnetic resonance (NMR) logging can in principle provide such information via hydrogen relaxation time measurements. In this paper, NMR response curves of shale samples were rigorously analyzed mathematically with an Expectation-Maximization algorithm and categorized according to both the NMR data and the samples' geology. The number of NMR peaks, their relaxation times, and their amplitudes were used to characterize pore size distributions and lithofacies. Seven pore size distribution classes were distinguished; these were verified independently with Pulsed-Neutron Spectrometry (PNS) well-log data. This study thus improves the interpretation of well-log data in terms of the pore structure and mineralogy of shale reservoirs, and consequently aids the optimization of shale gas extraction from the subsurface.
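
    The abstract does not give the paper's exact EM formulation; a common analogue, shown here, treats the distribution of log relaxation times as a mixture of Gaussian components fitted by expectation-maximization (scikit-learn's GaussianMixture), so each component's mean maps to a relaxation-time peak and its weight to an amplitude.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic log10(T2) samples with two pore populations (T2 in ms).
        log_t2 = np.concatenate([rng.normal(0.0, 0.2, 500),   # ~1 ms: small pores
                                 rng.normal(1.5, 0.3, 300)])  # ~30 ms: larger pores

        gm = GaussianMixture(n_components=2, random_state=0).fit(log_t2.reshape(-1, 1))
        for w, mu in zip(gm.weights_, gm.means_.ravel()):
            print(f"peak at T2 ~ {10 ** mu:.1f} ms, amplitude {w:.2f}")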

    Enhancing Rock Image Segmentation in Digital Rock Physics: A Fusion of Generative AI and State-of-the-Art Neural Networks

    In digital rock physics, analysing microstructures from CT and SEM scans is crucial for estimating properties like porosity and pore connectivity. Traditional segmentation methods like thresholding and CNNs often fall short in accurately detailing rock microstructures and are prone to noise. U-Net improved segmentation accuracy but requires many expert-annotated samples, a laborious and error-prone process given the complexity of pore shapes. Our study employed an advanced generative AI model, the diffusion model, to overcome these limitations: it generated a vast dataset of CT/SEM and binary segmentation pairs from a small initial dataset. We assessed the efficacy of three neural networks, U-Net, Attention U-Net, and TransUNet, for segmenting these augmented images. The diffusion model proved to be an effective data augmentation technique, improving the generalization and robustness of the deep learning models. TransUNet, incorporating Transformer structures, demonstrated superior segmentation accuracy and IoU metrics, outperforming both U-Net and Attention U-Net. Our research advances rock image segmentation by combining the diffusion model with cutting-edge neural networks, reducing dependency on extensive expert data and boosting segmentation accuracy and robustness. TransUNet sets a new standard in digital rock physics, paving the way for future geoscience and engineering breakthroughs.
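
    For reference, the IoU score the study reports is the standard intersection-over-union on binary masks; a minimal implementation (with toy masks, not the paper's data):

        import numpy as np

        def iou(pred, truth):
            # Boolean arrays: True = pore, False = grain.
            inter = np.logical_and(pred, truth).sum()
            union = np.logical_or(pred, truth).sum()
            return inter / union if union else 1.0  # two empty masks agree trivially

        pred  = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
        truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
        print(f"IoU = {iou(pred, truth):.2f}")  # 0.50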

    The Influence of Intra-Array Wake Dynamics on Depth-Averaged Kinetic Tidal Turbine Energy Extraction Simulations

    Assessing the tidal stream energy resource, its intermittency, and the likely environmental feedbacks of energy extraction relies on the ability to accurately represent kinetic losses in ocean models. Energy conversion has often been implemented in ocean models with enhanced turbine stress terms formulated using an array-averaging approach, rather than implementing extraction at device scale. In depth-averaged models, an additional drag term in the momentum equations is usually applied (a common form is sketched below). However, such array-averaging simulations neglect intra-array device wake interactions, producing unrealistic energy extraction dynamics, and any induced simulation error will grow with array size. For this study, an idealized channel is discretized at sub-10 m resolution, resolving the individual wake profiles of tidal turbines in the domain. Sensitivity analysis is conducted on the applied turbulence closure scheme, validating results against published data from empirical scaled-turbine studies. We test fine-scale model performance across several mesh densities, which produce a centerline velocity wake deficit accuracy (R2) of 0.58–0.69 (RMSE = 7.16–8.28%) using a k-ε turbulence closure scheme. Various array configurations at device scale are simulated and compared with an equivalent array-averaging approach by analyzing the channel flux differential. Parametrization of array-averaging energy extraction techniques can misrepresent simulated energy transfer and removal: the potential peak error in channel flux exceeds 0.5% when the number of turbines n_TECs ≈ 25, and exceeds 2% when simulating commercial-scale turbine arrays (i.e., >100 devices).
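
    A common form of that depth-averaged drag term (notation illustrative, not necessarily the paper's): for n turbines of swept area A_t and thrust coefficient C_T distributed over plan area A_c in water of depth h, the momentum sink per unit mass is

        \[
          \mathbf{F}_{\mathrm{turb}}
            = -\,\frac{n\, C_T\, A_t}{2\, A_c\, h}\,
              \lVert \mathbf{u} \rVert\, \mathbf{u}
        \]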

    Computational Fluid Dynamics (CFD) Modeling and Validation of Dust Capture by a Novel Flooded Bed Dust Scrubber Incorporated into a Longwall Shearer Operating in a US Coal Seam

    Dust is a detrimental, but unavoidable, consequence of any mining process. It is particularly problematic in underground coal mining, where respirable coal dust poses the potential health risk of coal workers' pneumoconiosis (CWP). Float dust, if not adequately diluted with rock dust, also creates the potential for a dust explosion initiated by a methane ignition. Furthermore, recently promulgated regulations lowering miners' exposure limits for respirable coal dust will soon call for dramatic improvements in dust suppression and capture. Computational fluid dynamics (CFD) results are presented for a research project whose primary goal is to apply a flooded-bed dust scrubber, with high capture and cleaning efficiencies, to a Joy 7LS longwall shearer operating in a 7-ft (2.1 m) coal seam. The CFD software Cradle is used to analyze and evaluate airflow patterns and dust concentrations, under various arrangements and conditions, around the active mining zone of the shearer to maximize the capture efficiency of the scrubber.
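
    One standard way to quantify the scrubber's capture efficiency (illustrative; the thesis may define it differently) is the fraction of respirable dust removed between the scrubber inlet and outlet,

        \[
          \eta_{\mathrm{capture}}
            = \frac{C_{\mathrm{in}} - C_{\mathrm{out}}}{C_{\mathrm{in}}},
        \]

    where C_in and C_out are dust mass concentrations (e.g., mg/m^3) upstream and downstream of the flooded bed.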

    Zero-Shot Digital Rock Image Segmentation with a Fine-Tuned Segment Anything Model

    Accurate image segmentation is crucial in reservoir modelling and material characterization, enhancing oil and gas extraction efficiency through detailed reservoir models. This precision offers insights into rock properties, advancing the understanding of digital rock physics. However, creating pixel-level annotations for complex CT and SEM rock images is challenging due to their size and low contrast, which lengthens analysis time. This has spurred interest in advanced semi-supervised and unsupervised segmentation techniques in digital rock image analysis, promising more efficient, accurate, and less labour-intensive methods. Meta AI's Segment Anything Model (SAM), introduced in 2023, transformed image segmentation by offering interactive and automated segmentation with zero-shot capabilities, essential for digital rock physics with its limited training data and complex image features. Despite these advanced features, SAM struggles with rock CT/SEM images because they are absent from its training set and because of the low-contrast nature of the grayscale images. Our research fine-tunes SAM for rock CT/SEM image segmentation, optimizing parameters and handling large-scale images to improve accuracy. Experiments on rock CT and SEM images show that fine-tuning significantly enhances SAM's performance, enabling high-quality mask generation in digital rock image analysis. Our results demonstrate the feasibility and effectiveness of the fine-tuned model, RockSAM, for rock images, offering segmentation without extensive training or complex labelling.
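
    A sketch of the usual SAM fine-tuning recipe the abstract implies (freeze the heavy image and prompt encoders, train only the lightweight mask decoder); the abstract does not give RockSAM's exact training setup, and the checkpoint filename, learning rate, and loss here are illustrative assumptions.

        import torch
        from segment_anything import sam_model_registry  # Meta AI's SAM package

        # Checkpoint name is an example; use whichever SAM weights you have.
        sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

        # Keep SAM's zero-shot machinery intact; adapt only the decoder.
        for p in sam.image_encoder.parameters():
            p.requires_grad = False
        for p in sam.prompt_encoder.parameters():
            p.requires_grad = False

        optimizer = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)
        loss_fn = torch.nn.BCEWithLogitsLoss()  # binary grain/pore masks
        # A training loop would embed each CT/SEM image once, predict masks,
        # compare them against expert binary labels with loss_fn, and step AdamW.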

    Continuous reservoir model updating by ensemble Kalman filter on Grid computing architectures

    A reservoir engineering Grid computing toolkit, ResGrid, and its extensions were developed and applied to designed reservoir simulation studies and continuous reservoir model updating. The toolkit provides reservoir engineers with high-performance computing capacity to complete their projects without requiring them to delve into Grid resource heterogeneity, security certification, or network protocols. Continuous, real-time reservoir model updating is an important component of closed-loop model-based reservoir management: the method must rapidly and continuously update reservoir models by assimilating production data, so that performance predictions and the associated uncertainty are up to date for optimization. The ensemble Kalman filter (EnKF), a Bayesian approach to model updating, uses Monte Carlo statistics to fuse observation data with forecasts from simulations and estimate a range of plausible models; the ensemble of updated models can then be used for uncertainty forecasting or optimization. Grid environments aggregate geographically distributed, heterogeneous resources. Their virtual architecture can handle many large parallel simulation runs and is thus well suited to solving model-based reservoir management problems. In this study, the ResGrid workflow for Grid-based designed reservoir simulation, together with an adapted workflow, provides tools for building prior model ensembles, task farming and execution, extracting simulator output results, implementing the EnKF, and invoking those scripts through a web portal. The ResGrid workflow is demonstrated for a geostatistical study of 3-D displacements in heterogeneous reservoirs: a suite of 1920 simulations assesses the effects of geostatistical methods and model parameters, with multiple runs executed simultaneously using parallel Grid computing. Flow response analyses indicate that efficient, widely used sequential geostatistical simulation methods may overestimate flow response variability when compared to more rigorous but computationally costly direct methods. Although the EnKF has attracted great interest in reservoir engineering, some aspects of it remain poorly understood and are explored in the dissertation. First, guidelines are offered for selecting data assimilation intervals. Second, an adaptive covariance inflation method is shown to be effective in stabilizing the EnKF. Third, we show that simple truncation can correct the negative effects of nonlinearity and non-Gaussianity as effectively as more complex and expensive reparameterization methods.
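
    A toy stochastic-EnKF analysis step with multiplicative covariance inflation (the dissertation's adaptive scheme chooses the inflation factor from the data; here it is a fixed illustrative constant, and the matrices are small stand-ins for reservoir-scale states):

        import numpy as np

        def enkf_update(X, y_obs, H, R, inflation=1.05, seed=0):
            # X: (n_state, n_ens) ensemble; y_obs: (n_obs,) observations;
            # H: (n_obs, n_state) observation operator; R: obs-error covariance.
            rng = np.random.default_rng(seed)
            xm = X.mean(axis=1, keepdims=True)
            X = xm + inflation * (X - xm)          # inflate spread to fight collapse
            A = X - X.mean(axis=1, keepdims=True)  # anomalies after inflation
            n_ens = X.shape[1]
            P = A @ A.T / (n_ens - 1)              # sample covariance
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            # Perturb observations so the updated ensemble keeps correct spread.
            Y = y_obs[:, None] + rng.multivariate_normal(
                np.zeros(len(y_obs)), R, n_ens).T
            return X + K @ (Y - H @ X)

        # Usage: 3-variable state, 50 members, observe the first variable.
        X = np.random.default_rng(1).normal(size=(3, 50))
        H = np.array([[1.0, 0.0, 0.0]])
        print(enkf_update(X, np.array([0.5]), H, np.eye(1) * 0.1).shape)  # (3, 50)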