
    Integration of Time Lapse Seismic Data Using Onset Time and Analysis of Spatial Resolution

    Integration of time-lapse seismic data into the reservoir model offers great potential for understanding reservoir flow patterns as well as reservoir properties. However, it also requires the solution of an inverse problem, which poses challenges in terms of dynamic reservoir modeling and seismic history matching to infer reservoir characterization. In this dissertation, we first present a method for assessing inversion results in underdetermined problems, resulting in a multiscale data integration. Then, we introduce a novel history matching approach to integrate frequent seismic surveys (4D) using onset times. In the first part, an analysis of spatial resolution is incorporated into an efficient history matching approach in order to indicate the reliability of the estimated solution. By examining the spatial resolution in seismic data integration as a function of derivation type, we quantitatively evaluate the contributions of pressure and saturation changes to the calibrated permeability field. Next, we present a novel and efficient approach to integrate frequent time-lapse (4D) seismic data into high-resolution reservoir models based on seismic onset times. Our approach reduces multiple time-lapse seismic surveys into a single map of onset times, leading to substantial data reduction for history matching while capturing all relevant information regarding fluid flow in the reservoir. We demonstrate the practical feasibility of the proposed approach on the heavy oil reservoir at Pad 31 in the Peace River Field (Alberta, Canada), with daily time-lapse seismic surveys recorded by a permanently buried seismic monitoring system. Finally, we quantitatively investigate the effectiveness of the onset-time and amplitude inversions in solving the inverse problem associated with integrating 4D seismic data into the reservoir model. The results of the study demonstrate the effectiveness of the onset-time approach for integrating a large number of seismic surveys by compressing them into a single map. The onset times also appear to be relatively insensitive to the petroelastic model but sensitive to the steam/fluid propagation, making them a robust basis for history matching of time-lapse surveys.
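
    As a rough illustration of the onset-time idea described above, the sketch below compresses a stack of 4D difference maps into a single map holding the first survey time at which the change exceeds a noise threshold. The function name, array shapes and threshold are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def onset_time_map(surveys, survey_times, threshold):
    """Compress a stack of 4D seismic difference maps into one onset-time map.

    surveys      : array (n_surveys, ny, nx) of |amplitude change| vs. baseline
    survey_times : array (n_surveys,) of acquisition times (e.g. days)
    threshold    : detection level above which a change is considered real
    Returns an (ny, nx) map with the earliest time the change exceeds the
    threshold, or NaN where no change is ever detected.
    """
    exceeded = surveys > threshold                    # boolean detections
    first_idx = np.argmax(exceeded, axis=0)           # index of first True per pixel
    ever = exceeded.any(axis=0)                       # pixels with any detection
    onset = np.where(ever, np.asarray(survey_times)[first_idx], np.nan)
    return onset

# Synthetic example: 50 daily monitor surveys on a 100 x 100 map
rng = np.random.default_rng(0)
stack = np.abs(rng.normal(0.0, 0.05, size=(50, 100, 100)).cumsum(axis=0))
onsets = onset_time_map(stack, np.arange(1, 51), threshold=1.0)
```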

    Closing the loop by engineering consistent 4D seismic to simulator inversion

    The multi-disciplinary nature of closing the loop (CtL) between 4D seismic and reservoir engineering data requires integrated workflows to make sense of these different measurements. According to the published literature, this integration is subject to significant inconsistency and uncertainty. To resolve this, an engineering-consistent (EC) concept is proposed that favours an orderly workflow for modelling and inverting the 4D seismic response. Establishing such consistency facilitates a quantitative comparison between the reservoir model and the acquired 4D seismic observations. With respect to the sim2seis workflow developed by Amini (2014), a corresponding inverse solution is proposed. The inversion, called seis2sim, utilises the model prediction as a priori information, searching for EC seismic answers in the joint domain between reservoir engineering and geophysics. Driven by a Bayesian algorithm, the inversion delivers more stable and certain elastic parameters upon application of the EC constraints. The seis2sim approach is first tested on a synthetic example derived from a real dataset before being applied to the Heidrun and Girassol field datasets. The two real data examples are distinct from each other in terms of seismic quality, geological nature and production activities. After extracting the 3D and 4D impedance from the seismic data, CtL workflows are designed to update various aspects of the reservoir model according to the comparison between sim2seis and seis2sim. The discrepancy revealed by this cross-domain comparison is informative for robust updating of the reservoir model in terms of reservoir geometry, volumetrics and connectivity. After applying tailored CtL workflows to the Heidrun and Girassol datasets, the statistical distributions of petrophysical parameters, such as porosity and NTG, as well as the intra- and inter-connectivity of reservoir compartments, are revised accordingly. Consequently, the 3D and 4D seismic responses of the reservoir models are assimilated with the observations, while the production match to the historical data is also improved. Overall, the proposed seis2sim and CtL workflows show a progression in the quantitative updating of reservoir models using time-lapse seismic data.
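
    As background to the Bayesian element of a seis2sim-style inversion, the following is a minimal sketch of a Gaussian-linear Bayesian update in which a model prediction (for instance from a sim2seis forward run) serves as the prior mean for the elastic parameters. The linear operator G, the covariances and all variable names are assumptions for illustration; the thesis workflow is considerably richer.

```python
import numpy as np

def bayesian_update(m_prior, C_prior, G, d_obs, C_noise):
    """Gaussian-linear Bayesian inversion for d = G m + noise.

    m_prior : prior mean (e.g. impedance predicted by a forward sim2seis model)
    C_prior : prior covariance expressing confidence in that prediction
    G       : linear operator mapping impedance to seismic data
    d_obs   : observed (4D) seismic data
    C_noise : data noise covariance
    Returns the posterior mean and covariance.
    """
    # Kalman-style gain combining prior confidence and data noise
    K = C_prior @ G.T @ np.linalg.inv(G @ C_prior @ G.T + C_noise)
    m_post = m_prior + K @ (d_obs - G @ m_prior)
    C_post = (np.eye(len(m_prior)) - K @ G) @ C_prior
    return m_post, C_post
```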

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data carry valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour it. A strict spatial tie to the seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all details in the seismic signal are critical, or even relevant, to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So how complex should a model be to take advantage of the information in seismic data, and which details should be matched? This work aims to show how choices of parameterisation affect the efficiency of assimilating spatial information from the seismic data. It also demonstrates the level of detail at which the seismic signal carries useful information for the simulation model, in light of the limited detectability of events on the seismic map and modelling errors. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation that allows effective assimilation of the spatial information in the seismic map. In this study, a model parameterisation scheme based on deterministic objects derived from seismic interpretation biases the model predictions, which results in a poor fit to the historical data. The key to rectifying the bias was found to be increasing the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme that does not impose prior information incompatible with the data, such as pilot points in this case. Using history matching experiments with a combined dataset of production and seismic data, a level of match of the seismic maps is identified that results in an optimal constraint of the simulation models. Better constrained models were identified by the quality of their forecasts and the closeness of their pressure and saturation states to the truth case. The results indicate that a significant amount of the detail in the seismic maps does not contribute to a constructive constraint by the seismic data, for two reasons. First, the smaller details are a specific response of the system that generated the observed data and as such are not relevant to the flow characteristics of the model; second, the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable.
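
    For readers unfamiliar with pilot points, mentioned above as a flexible parameterisation, the sketch below shows a minimal version: a few adjustable log-permeability values are interpolated onto the full grid so that history matching tunes a handful of parameters rather than every cell. The interpolation choice (SciPy's griddata) and all names are illustrative assumptions, not the scheme used in this work.

```python
import numpy as np
from scipy.interpolate import griddata

def pilot_point_field(pp_xy, pp_logk, grid_x, grid_y):
    """Build a permeability field from pilot-point values.

    pp_xy   : (n_points, 2) pilot-point coordinates
    pp_logk : (n_points,) adjustable log10-permeability values at those points
    grid_x, grid_y : 2D arrays of cell-centre coordinates for the full grid
    Returns a permeability array on the grid (same units as 10**pp_logk).
    """
    logk = griddata(pp_xy, pp_logk, (grid_x, grid_y), method='cubic')
    # fall back to nearest-neighbour outside the convex hull of the pilot points
    nearest = griddata(pp_xy, pp_logk, (grid_x, grid_y), method='nearest')
    logk = np.where(np.isnan(logk), nearest, logk)
    return 10.0 ** logk
```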

    TRA of DigiMon components

    The DigiMon project aims to develop an affordable, flexible, societally embedded and smart monitoring system for industrial-scale subsurface CO2 storage. For this purpose, the DigiMon system is to combine various types of measurements in integrated workflows. In this report, we describe the process of conducting the Technology Readiness Assessment (TRA) of various measurement techniques. We report on the identification, description and assessment of these measurement techniques as Critical Technology Elements (CTEs) forming part of the DigiMon system.

    Permeability estimation from time-lapse seismic data for updating the flow-simulation model

    The key to increasing reservoir recovery is to provide accurate estimates of the permeable pathways (permeability, transmissibility) and of the transmissibility of the barriers that control reservoir heterogeneity. Reservoir-engineering techniques (such as well testing, well logging and production data) supply estimates of these properties only in the reservoir regions close to the well locations. Providing estimates of the permeability in the reservoir rocks located between the wells is the holy grail of reservoir engineering for history matching. Compared with the other engineering techniques, 4D seismic can play a unique role in providing reservoir properties with good spatial coverage. In this thesis, the estimation of permeability, transmissibility and transmissibility multipliers using 4D seismic is addressed. First, current methodologies for permeability estimation were applied to synthetic and field examples. Based on these investigations, the permeability-estimation method was modified and adjusted to produce an improved result. Consequently, the permeability estimates provided an introduction to the fast-track history-matching method. The proposed history-matching technique offers a simple and practical approach for quickly updating the simulation model to improve the history match. Next, the uncertainties associated with the permeability estimation, arising from the use of different attributes, different time-lapse surveys, tuning effects and method assumptions, were assessed. By addressing these issues, the permeability result was further enhanced, and the uncertainty associated with the estimates was quantified. The relationships between quantitative estimates of connectivity and the 4D seismic signal were then established. Two types of connectivity assessment using 4D seismic (hydraulic sand connectivity and barrier connectivity) were proposed, depending on whether the 4D seismic information is pressure- or saturation-dominant. Accordingly, two types of attributes were introduced: the seismic connectivity attribute (SCA) and the Laplacian attribute. In the application to the Schiehallion field data, an interpretation approach is used to interpret pressure and saturation anomalies in frequent time-lapse seismic, using all available sources of data. Following this, a pressure-anomaly map is utilized for locating faults and compartments (using the Laplacian attribute), and a saturation-anomaly map is used to calculate the SCA. New approaches were chosen for estimating transmissibility and transmissibility multipliers, based on the proposed attributes extracted from 4D seismic.
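
    A minimal sketch of a Laplacian-style attribute is given below, assuming the input is a regularly gridded pressure-anomaly map: smoothing suppresses acquisition noise and the second derivative highlights abrupt lateral changes that may mark sealing faults or compartment edges. The smoothing choice and parameter values are illustrative, not those used in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def laplacian_attribute(pressure_anomaly_map, smoothing_sigma=1.5):
    """Second-derivative (Laplacian) attribute of a pressure-dominated 4D map.

    Sharp sign changes in the output highlight candidate barriers and
    compartment boundaries; smoothing first keeps the derivative stable.
    """
    smoothed = gaussian_filter(pressure_anomaly_map, sigma=smoothing_sigma)
    return laplace(smoothed)
```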

    Faster convergence in seismic history matching by dividing and conquering the unknowns

    The aim in reservoir management is to control field operations to maximize both the short- and long-term recovery of hydrocarbons. This often comprises continuous optimization based on reservoir simulation models whose significant unknown parameters have been updated by history matching, in which they are conditioned to all available data. However, history matching of what is usually a high-dimensional problem requires expensive computer and commercial software resources. Many models are generated, particularly if there are interactions between the properties being updated and their effects on the misfit that measures the difference between model predictions and observed data. In this work, a novel 'divide and conquer' approach is developed for the seismic history matching method, which efficiently searches for the best values of uncertain parameters such as barrier transmissibilities, net:gross and permeability by matching well and 4D seismic predictions to observed data. The 'divide' is carried out by applying a second-order polynomial regression analysis to identify independent sub-volumes of the parameter hyperspace. These are then 'conquered' by searching them separately but simultaneously with an adapted version of the quasi-global stochastic neighbourhood algorithm. This 'divide and conquer' approach is applied to the seismic history matching of the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contained a large number of barriers that affect flow at different times during production, and their transmissibilities were largely unknown. There was also some uncertainty in the petrophysical parameters that controlled permeability and net:gross. Application of the method was possible because the misfit function could be successfully represented as sub-misfits, each dependent on changes in a smaller number of parameters, which could then be searched separately but simultaneously (a sketch of the regression step follows below). Ultimately, the number of models required to find a good match was reduced by an order of magnitude. Experimental design contributed to the efficiency, and the 'divide and conquer' approach was also able to separate the misfit on a spatial basis by using time-lapse seismic data in the misfit. The method has provided greater insight into the reservoir behaviour and has been able to predict flow more accurately with a very efficient 'divide and conquer' approach.
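
    The 'divide' step rests on a second-order polynomial regression of the misfit. A minimal sketch of that idea, assuming scikit-learn is available, is shown below: fit a quadratic response surface to sampled misfits and read off the cross-term coefficients, so that parameter pairs with negligible interaction can be assigned to separate search groups. The names and the grouping criterion are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def interaction_strengths(X, misfit):
    """Fit a second-order polynomial response surface to sampled misfits and
    return the absolute cross-term coefficient for each parameter pair.

    X      : (n_samples, n_params) sampled parameter values
    misfit : (n_samples,) corresponding misfit values
    Pairs with negligible interaction can be searched in separate groups.
    """
    poly = PolynomialFeatures(degree=2, include_bias=False)
    features = poly.fit_transform(X)
    model = LinearRegression().fit(features, misfit)
    names = list(poly.get_feature_names_out([f"p{i}" for i in range(X.shape[1])]))
    strengths = {}
    for i, j in combinations(range(X.shape[1]), 2):
        idx = names.index(f"p{i} p{j}")            # cross term p_i * p_j
        strengths[(i, j)] = abs(model.coef_[idx])
    return strengths
```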

    Machine Learning for Seismic Exploration: where are we and how far are we from the Holy Grail?

    Machine Learning (ML) applications in seismic exploration are growing faster than applications in other industry fields, mainly due to the large amount of data acquired for the exploration industry. ML algorithms are being implemented in almost all the steps of the seismic processing and interpretation workflow, mainly for automation, processing-time reduction and efficiency, and in some cases for improving the results. We carried out a literature-based analysis of existing ML-based seismic processing and interpretation work published in the SEG and EAGE literature repositories and derived a detailed overview of the main ML thrusts in different seismic applications. For each publication, we extracted various metadata about ML implementations and performances. The data indicate that current ML implementations in seismic exploration are focused on individual tasks rather than on a disruptive change in processing and interpretation workflows. The metadata show that the main targets of ML applications for seismic processing are denoising, velocity model building and first-break picking, whereas for seismic interpretation they are fault detection, lithofacies classification and geo-body identification. Through the metadata available in the publications, we obtained indices related to computational power efficiency, data preparation simplicity, the real-data test rate of the ML model, the diversity of ML methods, etc., and we used them to approximate the level of efficiency, effectiveness and applicability of current ML-based seismic processing and interpretation tasks. The indices of ML-based processing tasks show that current ML-based denoising and frequency extrapolation have higher efficiency, whereas ML-based QC is more effective and applicable compared to other processing tasks. Among the interpretation tasks, ML-based impedance inversion shows high efficiency, whereas high effectiveness is seen for fault detection. ML-based lithofacies classification, stratigraphic sequence identification and petro/rock property inversion exhibit high applicability among the other interpretation tasks.

    Pressure and saturation estimation from PRM time-lapse seismic data for a compacting reservoir

    Observed 4D effects are influenced by a combination of changes in both pressure and saturation in the reservoir. Decomposition of pressure and saturation changes is crucial to explain the different physical variables that have contributed to the 4D seismic responses. This thesis addresses the challenges of pressure and saturation decomposition from such time-lapse seismic data in a compacting chalk reservoir. The technique employed integrates reservoir engineering concepts and geophysical knowledge. The innovation in this methodology is the ability to capture the complicated water-weakening behaviour of the chalk as a non-linear proxy model controlled by only three constants. Changes in pressure and saturation are thus estimated via a Bayesian inversion by employing compaction curves derived from the laboratory, constraints from the simulation model predictions, time-strain information and the observed fractional changes in … and …. The approach is tested on both synthetic and field data from the Ekofisk field in the North Sea. The results are in good agreement with well production data, and help explain strong localized anomalies in both the Ekofisk and Tor formations. These results also suggest updates to the reservoir simulation model. The second part of the thesis focuses on the geomechanics of the overburden, and the opportunity to use time-lapse time-shifts to estimate pore pressure changes in the reservoir. To achieve this, a semi-analytical approach by Geertsma is used, which numerically integrates the displacements from a nucleus of strain. This model relates the overburden time-lapse time-shifts to reservoir pressure. The existing method by Hodgson (2009) is modified to estimate the reservoir pressure change and also the average dilation factor, or R-factor, for both the reservoir and overburden. The R-factors can be quantified, and their uncertainty defined, when prior constraints are available from a well history-matched simulation model. The results indicate that the magnitude of R is a function of strain-change polarity, and that this asymmetry is required to match the observed time-shifts. The recovered average R-factor is 16, using the permanent reservoir monitoring (PRM) data. The streamer data recovered average R-factors in the range of 7.2 to 18.4. Despite the limiting assumption of a homogeneous medium, the method is beneficial, as it treats arbitrary subsurface geometries and, in contrast to complex numerical approaches, is simple to parameterise and computationally fast. Finally, the aims and objectives of this research have been met predominantly by the use of PRM data. These applications could not have been achieved without such highly repeatable and short-repeat-period acquisitions. This points to the value of using these data in reservoir characterisation, inversion and history matching.
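
    The overburden time-shift analysis builds on the widely used linear strain-to-time-shift relation with a dilation (R) factor. A minimal sketch, assuming the standard form dt/t = (1 + R) * e_zz and a simple polarity-dependent R motivated by the asymmetry noted above, is given below; it is not the modified Hodgson (2009) implementation from the thesis.

```python
import numpy as np

def overburden_timeshift(vertical_strain, two_way_time, R):
    """Time-shift from vertical strain using dt/t = (1 + R) * e_zz:
    stretching slows the velocity, so path lengthening and slowdown add up."""
    return (1.0 + R) * vertical_strain * two_way_time

def asymmetric_timeshift(vertical_strain, two_way_time, R_extension, R_compaction):
    """Polarity-dependent R: one value for extension (positive strain),
    another for compaction, applied element-wise over a strain field."""
    R = np.where(vertical_strain >= 0.0, R_extension, R_compaction)
    return (1.0 + R) * vertical_strain * two_way_time
```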

    Improving the convergence rate of seismic history matching with a proxy derived method to aid stochastic sampling

    History matching is a very important activity during the continued development and management of petroleum reservoirs. Time-lapse (4D) seismic data provide information on the dynamics of fluids in reservoirs, relating variations in the seismic signal to saturation and pressure changes. This information can be integrated into history matching to improve convergence towards a simulation model that predicts the available data. The main aim of this thesis is to develop a method to speed up the convergence rate of assisted seismic history matching using a proxy-derived gradient method. Stochastic inversion algorithms often rely on simple assumptions for selecting new models by random processes. In this work, we improve the way that such approaches learn about the system they are searching and thus operate more efficiently. To this end, a new method has been developed, called NA with Proxy derived Gradients (NAPG). To improve convergence, we use a proxy model to understand how the parameters control the misfit and then use a global stochastic method with these sensitivities to optimise the search of the parameter space. This leads to an improved set of final reservoir models, which in turn can be used more effectively in reservoir management decisions. To validate the proposed approach, we applied it to a number of analytical functions and synthetic cases. In addition, we demonstrate the proposed method by applying it to the UKCS Schiehallion field. The results show that the new method generally speeds up the rate of convergence by a factor of two to three. The performance of NAPG is much improved by updating the regression equation coefficients instead of keeping them fixed. In addition, we found that the initial number of models needed to start NAPG or NA could be reduced by using experimental design instead of random initialization. Ultimately, with all of these approaches combined, the number of models required to find a good match was reduced by an order of magnitude. We have investigated the criteria for stopping the SHM loop, particularly the use of a proxy model to help; more research is needed to complete this work, but the approach is promising. Quantifying parameter uncertainty using NA and NAPG was studied using the NA-Bayes approach (NAB). We found that NAB is very sensitive to the misfit magnitude, but otherwise NA and NAPG produce similar uncertainty measures.
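
    To illustrate what a proxy-derived gradient might look like, the sketch below fits a quadratic proxy to sampled (parameter, misfit) pairs by least squares and evaluates its gradient at a chosen point; such a gradient could then bias where a neighbourhood-type stochastic sampler places its next candidate models. The proxy form and all names are illustrative assumptions, not the NAPG implementation itself.

```python
import numpy as np

def quadratic_proxy_gradient(X, misfits, x0):
    """Fit f(x) ~ c + b.x + sum_{i<=j} a_ij x_i x_j to sampled misfits by
    least squares and return the proxy's gradient at x0.

    X       : (n_samples, n_params) sampled parameter vectors
    misfits : (n_samples,) misfit values for those samples
    x0      : (n_params,) point at which to evaluate the proxy gradient
    """
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]   # constant and linear terms
    quad_index = []
    for i in range(d):
        for j in range(i, d):
            cols.append(X[:, i] * X[:, j])              # quadratic and cross terms
            quad_index.append((i, j))
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), misfits, rcond=None)
    b = coeffs[1:1 + d]
    grad = b.copy()
    for k, (i, j) in enumerate(quad_index):
        a = coeffs[1 + d + k]
        grad[i] += a * x0[j]                            # d/dx_i of a_ij x_i x_j
        grad[j] += a * x0[i]                            # (adds twice when i == j)
    return grad
```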