50 research outputs found

    Constrained non-linear AVO inversion based on the adjoint-state optimization

    Pre-stack AVO inversion of seismic data is a modeling tool for estimating subsurface elastic properties. Our focus is on the model-based inversion method, where the unknown variables are estimated by minimizing the misfit to the observed data. Standard approaches for non-linear AVO inversion are based on gradient-descent optimization algorithms that require the calculation of the gradient of the objective function. To improve the accuracy and efficiency of these methods, we developed a technique that uses an implementation of adjoint-state-based gradient computation. The inversion algorithm relies on three basic modeling components: a convolution-based forward model using a linearized approximation of the Zoeppritz equation, the definition of the objective function, and the adjoint-computed gradient. To achieve an accurate solution, we choose a second-order optimization algorithm, the limited-memory BFGS (L-BFGS) method, which implicitly approximates the inverse Hessian matrix. This approach is more efficient than traditional optimization methods. The main novelty of the proposed approach is the derivation of the adjoint-state equations for the gradient of the objective function. The application of the proposed method is demonstrated using 1D and 2D synthetic datasets based on data from the Edvard Grieg oil field. The seismic data for these applications are generated using both convolutional modeling and finite difference methods. The results of the proposed method are accurate and the computational approach is efficient. The results show that the algorithm reliably retrieves the elastic variables, P- and S-wave velocities and density, for both convolutional and finite difference models.
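    A minimal sketch of the core idea, under simplifying assumptions (a single Aki-Richards linearization, a zero-phase Ricker wavelet, and a fixed background Vs/Vp ratio, none of which are specified in the abstract): the forward model is a linear operator, its transpose supplies the adjoint-computed gradient, and SciPy's L-BFGS-B driver performs the quasi-Newton updates.

    import numpy as np
    from scipy.optimize import minimize

    def ricker(f, dt, length=0.128):
        """Zero-phase Ricker wavelet of dominant frequency f (Hz)."""
        t = np.arange(-length / 2, length / 2, dt)
        a = (np.pi * f * t) ** 2
        return (1.0 - 2.0 * a) * np.exp(-a)

    def conv_matrix(wavelet, n):
        """Dense 'same'-length convolution operator; its transpose is the adjoint."""
        half = len(wavelet) // 2
        W = np.zeros((n, n))
        for i in range(n):
            for k, w in enumerate(wavelet):
                j = i - k + half
                if 0 <= j < n:
                    W[i, j] = w
        return W

    def aki_richards_coeffs(theta, vs_vp=0.5):
        """Angle-dependent weights for the [dVp/Vp, dVs/Vs, drho/rho] contrasts."""
        s2 = np.sin(theta) ** 2
        a = 1.0 / (2.0 * np.cos(theta) ** 2)       # P-velocity term
        b = -4.0 * vs_vp ** 2 * s2                 # S-velocity term
        c = 0.5 - 2.0 * vs_vp ** 2 * s2            # density term
        return a, b, c

    def forward(m, W, thetas):
        """m stacks the three contrast series; returns one trace per angle."""
        n = W.shape[0]
        r_vp, r_vs, r_rho = m[:n], m[n:2 * n], m[2 * n:]
        return np.array([W @ (a * r_vp + b * r_vs + c * r_rho)
                         for a, b, c in map(aki_richards_coeffs, thetas)])

    def misfit_and_gradient(m, W, thetas, d_obs):
        """Least-squares misfit and its gradient via the adjoint of the forward operator."""
        n = W.shape[0]
        res = forward(m, W, thetas) - d_obs
        g = np.zeros_like(m)
        for th, r in zip(thetas, res):
            a, b, c = aki_richards_coeffs(th)
            back = W.T @ r                         # adjoint of the convolution
            g[:n] += a * back
            g[n:2 * n] += b * back
            g[2 * n:] += c * back
        return 0.5 * np.sum(res ** 2), g

    # Synthetic test: a single reflecting interface observed at three angles
    n, dt = 200, 0.002
    W = conv_matrix(ricker(30.0, dt), n)
    thetas = np.radians([5.0, 20.0, 35.0])
    m_true = np.zeros(3 * n)
    m_true[100], m_true[n + 100], m_true[2 * n + 100] = 0.08, 0.05, 0.03
    d_obs = forward(m_true, W, thetas)

    result = minimize(misfit_and_gradient, np.zeros(3 * n), args=(W, thetas, d_obs),
                      jac=True, method="L-BFGS-B")
    print("final misfit:", result.fun)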

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data bears valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour it. A strict spatial tie to the seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all the details in the seismic signal are critical or even relevant to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So, how complex should a model be to take advantage of the information from seismic data, and what details should be matched? This work aims to show how choices of parameterisation affect the efficiency of assimilating spatial information from the seismic data. Also, the level of detail at which the seismic signal carries useful information for the simulation model is demonstrated in light of the limited detectability of events on the seismic map and of modelling errors. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation which allows effective assimilation of spatial information in the seismic map. In this study, a model parameterisation scheme based on deterministic objects derived from seismic interpretation creates bias in model predictions, which results in a poor fit to the historic data. The key to rectifying the bias was found to be increasing the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme that does not impose prior information incompatible with the data, such as pilot points in this case. Using history matching experiments with a combined dataset of production and seismic data, a level of match of the seismic maps is identified which results in an optimal constraint of the simulation models. Better constrained models were identified by the quality of their forecasts and the closeness of the pressure and saturation state to the truth case. The results indicate that a significant amount of detail in the seismic maps does not contribute to a constructive constraint by the seismic data, which is caused by two factors. The first is that smaller details are a specific response of the particular system that produced the observed data, and as such are not relevant to the flow characteristics of the model; the second is that the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable.
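    To make the pilot-point idea concrete, the following illustrative sketch (not taken from the paper; grid size, interpolation kernel and point layout are assumptions) interpolates a handful of adjustable pilot-point values onto a full property grid, so an optimiser tunes only a few parameters while the simulator sees a complete, smooth field.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def pilot_points_to_grid(pp_xy, pp_values, nx=50, ny=50):
        """Interpolate log-permeability at pilot points onto an nx-by-ny grid."""
        xg, yg = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
        grid_xy = np.column_stack([xg.ravel(), yg.ravel()])
        field = RBFInterpolator(pp_xy, pp_values, kernel="thin_plate_spline")(grid_xy)
        return field.reshape(ny, nx)

    # Example: 9 pilot points control a 50-by-50 log-permeability field
    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid([0.1, 0.5, 0.9], [0.1, 0.5, 0.9])
    pp_xy = np.column_stack([gx.ravel(), gy.ravel()])
    log_k = pilot_points_to_grid(pp_xy, rng.normal(3.0, 0.5, size=9))
    print(log_k.shape, round(log_k.mean(), 3))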

    Adjoint-state method for seismic AVO inversion and time-lapse monitoring

    This dissertation presents seismic amplitude versus offset (AVO) inversion methods to estimate water saturation and effective pressure quantitatively in elastic and viscoelastic media. Quantitative knowledge of the saturation and pore pressure properties from pre- or post-production seismic measurements for reservoir static or dynamic modeling has been an area of interest for the geophysical community for decades. However, the existing inversion methodologies and explicit expressions to estimate saturation-pressure variables, or changes in these properties due to production or fluid injection, have been based on elastic AVO models. These conventional methods do not consider the effects of seismic wave attenuation on the reflection amplitudes and can therefore result in biased predictions. Numerous theoretical rock physics models and laboratory experiments have demonstrated the sensitivity of seismic attenuation to various petrophysical and seismic properties of partially fluid-filled porous media. This makes seismic wave attenuation a valuable time-lapse attribute for reliably measuring the saturation (Sw) and effective pressure (Pe) properties. Therefore, in this work, I have developed two AVO inversion processes: the conventional AVO inversion method for elastic media and the frequency-dependent amplitude-versus-offset (FAVO) inversion technique for viscoelastic media. This dissertation first presents the inversion strategies to invert the pre-stack seismic data for the seismic velocities and density by using the conventional AVO equation, and for the seismic velocities, density, and Q-factors by using the frequency-dependent AVO method. These inversion methods are then extended to estimate dynamic reservoir changes, e.g., saturation and pressure variables, and can be applied to predict these variables at any stage, e.g., before and during production or fluid injection, or to estimate the changes in saturation (ΔSw) and pressure (ΔPe). The first part of the dissertation describes the theory and formulation of the elastic AVO inversion method, while the second half describes the viscoelastic inversion workflow. The FAVO technique accounts for the dependence of reflection amplitudes on incidence angles as well as seismic frequencies, and for P- and S-wave attenuation in addition to seismic velocities and density. The fluid saturation and pressure in the elastic and inelastic media are linked to the reflection amplitude through the seismic velocities, density, and quality factors (Q). The inversion process is based on the gradient-descent approach, in which the differentiable least-squares data misfit function is minimized using a non-linear limited-memory BFGS (L-BFGS) method. The gradients of the misfit function with respect to the unknown model variables are derived using the adjoint-state method and the multivariable chain rule. The adjoint-state method provides an efficient and accurate way to calculate the misfit gradients. Several rock physics models, e.g., the Gassmann substitution equation with uniform and patchy fluid distribution patterns, modified MacBeth relations of dry-rock moduli with effective pressure, and constant-Q models for P- and S-wave attenuation, are applied to relate the saturation and effective pressure variables to the elastic and anelastic properties and then to the forward reflectivity operator.
    These inversion methods are formulated as constrained problems in which constraints such as bound constraints, constraints in the Lagrangian solution, and Tikhonov regularization are applied. The inversion methods are quite general and can be extended to other rock physics models through parameterization. The applications of the elastic AVO and the FAVO methods are tested on various 1D synthetic datasets simulated under different oil production (4D) scenarios. The inversion methods are further applied to a 2D realistic reservoir model extracted from the 3D Smeaheia Field, a potential CO2 storage site. The inversion schemes not only successfully estimate the static saturation and effective pressure variables, or changes in these properties due to oil production or CO2 injection, but also provide a very good prediction of seismic velocities, density, and seismic attenuation (quantified as the inverse quality factor). The partially CO2-saturated reservoir exhibits higher P-wave attenuation; therefore, adding time-lapse P-wave attenuation due to viscous friction between CO2-water patches helps to reduce the errors in the inverted CO2/water saturation variables compared to the elastic 4D AVO inversion. This research has a wide range of applications, from the oil industry to carbon capture and storage (CCS) monitoring tools aiming to provide control and safety during injection. The uncertainty in the inversion results is quantified as a function of the variability of the prior models obtained by using Monte Carlo simulation.
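    As a hedged illustration of the rock-physics link described above (standard Gassmann substitution with uniform and patchy fluid mixing; the moduli, porosity and fluid properties below are illustrative assumptions, not values from the dissertation), the sketch maps water saturation to saturated bulk modulus and P-wave velocity:

    import numpy as np

    def gassmann_ksat(k_dry, k_min, k_fl, phi):
        """Saturated bulk modulus from the Gassmann equation (moduli in GPa)."""
        num = (1.0 - k_dry / k_min) ** 2
        den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
        return k_dry + num / den

    def ksat_uniform(sw, k_dry, k_min, k_w, k_gas, phi):
        """Uniform mixing: harmonic (Wood) average of fluid moduli, then Gassmann."""
        k_fl = 1.0 / (sw / k_w + (1.0 - sw) / k_gas)
        return gassmann_ksat(k_dry, k_min, k_fl, phi)

    def ksat_patchy(sw, k_dry, k_min, k_w, k_gas, phi):
        """Patchy mixing: Gassmann per end-member fluid, then arithmetic (Voigt) average."""
        k_brine = gassmann_ksat(k_dry, k_min, k_w, phi)
        k_gas_sat = gassmann_ksat(k_dry, k_min, k_gas, phi)
        return sw * k_brine + (1.0 - sw) * k_gas_sat

    def vp(k_sat, mu, rho):
        """P-wave velocity in m/s from moduli in GPa and bulk density in g/cc."""
        return np.sqrt((k_sat + 4.0 * mu / 3.0) * 1e9 / (rho * 1e3))

    # Illustrative brine/CO2 sandstone: sweep the water saturation
    sw = np.linspace(0.0, 1.0, 6)
    k_dry, k_min, mu, phi = 12.0, 37.0, 10.0, 0.25   # GPa, GPa, GPa, fraction
    k_w, k_co2 = 2.6, 0.08                           # fluid bulk moduli, GPa
    rho = 2.65 * (1 - phi) + phi * (sw * 1.0 + (1 - sw) * 0.6)   # g/cc
    print("uniform Vp:", vp(ksat_uniform(sw, k_dry, k_min, k_w, k_co2, phi), mu, rho))
    print("patchy  Vp:", vp(ksat_patchy(sw, k_dry, k_min, k_w, k_co2, phi), mu, rho))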

    Improving the convergence rate of seismic history matching with a proxy derived method to aid stochastic sampling

    History matching is a very important activity during the continued development and management of petroleum reservoirs. Time-lapse (4D) seismic data provide information on the dynamics of fluids in reservoirs, relating variations of the seismic signal to saturation and pressure changes. This information can be integrated with history matching to improve convergence towards a simulation model that predicts the available data. The main aim of this thesis is to develop a method to speed up the convergence rate of assisted seismic history matching using a proxy-derived gradient method. Stochastic inversion algorithms often rely on simple assumptions for selecting new models by random processes. In this work, we improve the way that such approaches learn about the system they are searching and thus operate more efficiently. To this end, a new method has been developed, called NA with Proxy derived Gradients (NAPG). To improve convergence, we use a proxy model to understand how parameters control the misfit and then use a global stochastic method with these sensitivities to optimise the search of the parameter space. This leads to an improved set of final reservoir models, which in turn can be used more effectively in reservoir management decisions. To validate the proposed approach, we applied it to a number of analytical functions and synthetic cases. In addition, we demonstrate the proposed method by applying it to the UKCS Schiehallion field. The results show that the new method generally speeds up the rate of convergence by a factor of two to three. The performance of NAPG is much improved by updating the regression equation coefficients instead of keeping them fixed. In addition, we found that the initial number of models to start NAPG or NA could be reduced by using experimental design instead of random initialization. Ultimately, with all of these approaches combined, the number of models required to find a good match was reduced by an order of magnitude. We have investigated the criteria for stopping the seismic history matching (SHM) loop, particularly the use of a proxy model to help. More research is needed to complete this work, but the approach is promising. Quantifying parameter uncertainty using NA and NAPG was studied using the NA-Bayes approach (NAB). We found that NAB is very sensitive to misfit magnitude, but otherwise NA and NAPG produce similar uncertainty measures.
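    The following sketch is an illustrative reading of the proxy-derived-gradient idea, not the NAPG implementation itself: a quadratic regression proxy is refitted to the models evaluated so far, and its analytic gradient biases where the next stochastic sample is placed (the test function, step size and noise level are assumptions).

    import numpy as np

    def true_misfit(x):
        """Expensive simulator stand-in (Rosenbrock test function)."""
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

    def fit_quadratic_proxy(X, f):
        """Least-squares fit of f ~ c + b.x + x.A.x (A symmetric) to sampled models."""
        n = X.shape[1]
        feats = [np.ones(len(X))] + [X[:, i] for i in range(n)]
        feats += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
        coef, *_ = np.linalg.lstsq(np.column_stack(feats), f, rcond=None)
        return coef

    def proxy_gradient(coef, x):
        """Analytic gradient of the fitted quadratic proxy at x."""
        n = len(x)
        g = coef[1:1 + n].copy()
        k = 1 + n
        for i in range(n):
            for j in range(i, n):
                g[i] += coef[k] * x[j]
                g[j] += coef[k] * x[i]
                k += 1
        return g

    rng = np.random.default_rng(1)
    dim, n_init = 2, 30
    X = rng.uniform(-2, 2, size=(n_init, dim))
    f = np.array([true_misfit(x) for x in X])

    for it in range(50):
        coef = fit_quadratic_proxy(X, f)
        best = X[np.argmin(f)]
        # proxy-gradient step plus a stochastic perturbation, kept inside the bounds
        x_new = np.clip(best - 1e-3 * proxy_gradient(coef, best)
                        + rng.normal(scale=0.05, size=dim), -2, 2)
        X = np.vstack([X, x_new])
        f = np.append(f, true_misfit(x_new))

    print("best misfit:", f.min(), "at", X[np.argmin(f)])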

    Multi-objective optimisation metrics for combining seismic and production data in automated reservoir history matching

    Information from time-lapse (4D) seismic data can be integrated with that from producing wells to calibrate reservoir models. 4D seismic data provide valuable information at high spatial resolution, while the information provided by the production data is at high temporal resolution. However, combining the two data sources can be challenging as they are often conflicting. In addition, information from the production wells themselves is often correlated and can also be conflicting, especially in reservoirs of complex geology. This study examines alternative approaches to integrating data of different sources in the automatic history matching loop. The study focuses on using multiple-objective methods in history matching to identify those that are most appropriate for the data available. The problem of identifying suitable metrics for comparing data is investigated in the context of data assimilation, formulation of objective functions, optimisation methods and parameterisation schemes. Traditional data assimilation based on global misfit functions or weighted multi-objective functions creates bias, which results in predictions from some areas of the model having a good fit to the data and others having a very poor fit. The key to rectifying the bias was found in the approaches proposed in this study, which are based on the concept of dominance. A new set of algorithms called Dynamic Screening of Fronts in Multiobjective Optimisation (DSFMO) has been developed, which enables the handling of many objectives in a multi-objective fashion. With the DSFMO approach, several options for selecting models for the next iteration are studied and their performance appraised using different analytical functions of many objectives and parameters. The proposed approaches are also tested and validated by applying them to synthetic reservoir models. DSFMO is then implemented in resolving the problem of many conflicting objectives in the seismic and production history matching of the Statoil Norne Field. Compared to traditional stochastic approaches, results show that DSFMO yields better data-fitting models that reflect the uncertainty in model predictions. We also investigated the use of experimental design techniques in calibrating proxy models and suggested ways of improving the quality of proxy models in history matching. We thereafter proposed a proxy-based approach for model appraisal and uncertainty assessment in a Bayesian context. We found that Markov chain Monte Carlo resampling with the proxy model takes minutes instead of hours.
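    A minimal sketch of the dominance concept that underpins this kind of multi-objective selection (the DSFMO scheme itself is not reproduced here; the misfit values are invented): a model is kept for the next iteration only if no other model is at least as good on every objective and strictly better on at least one.

    import numpy as np

    def dominates(a, b):
        """True if objective vector a dominates b (<= everywhere, < somewhere)."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(F):
        """Indices of non-dominated rows of an (n_models, n_objectives) misfit array."""
        return [i for i, fi in enumerate(F)
                if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

    # Example: 8 candidate models scored on [production misfit, seismic-map misfit]
    F = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [3.0, 3.0],
                  [1.5, 4.0], [4.0, 4.5], [2.5, 1.8], [6.0, 0.9]])
    print("non-dominated models:", pareto_front(F))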

    Three-dimensional anisotropic full-waveform inversion

    Full-waveform inversion (FWI) is a powerful nonlinear tool for quantitative estimation of high-resolution, high-fidelity models of subsurface seismic parameters, typically P-wave velocity. A solution is obtained via a series of iterative local linearised updates to a start model, requiring this model to lie within the basin of attraction of the solution space's global minimum. The consideration of seismic anisotropy during FWI is vital, as it holds influence over both the kinematics and dynamics of seismic waveforms. If not appropriately taken into account, inadequacies in the anisotropy model are likely to manifest as significant error in the recovered velocity model. Conventionally, anisotropic FWI either employs an a priori anisotropy model, held fixed during FWI, or uses a local inversion scheme to recover anisotropy as part of FWI; both of these methods can be problematic. Constructing an anisotropy model prior to FWI often involves intensive (and hence expensive) iterative procedures. On the other hand, introducing multiple parameters to FWI itself increases the complexity of what is already an underdetermined problem. As an alternative, I propose here a novel approach referred to as combined FWI. This uses a global inversion for long-wavelength acoustic anisotropy, involving no start model, while simultaneously updating P-wave velocity using mono-parameter local FWI. Combined FWI is then followed by multi-parameter local FWI to recover the detailed final model. To validate the combined FWI scheme, I evaluate its performance with several 2D synthetic datasets, and apply it to a full 3D field dataset. The synthetic results establish combined FWI, as part of a two-stage workflow, as more accurate than an equivalent conventional workflow. The solution obtained from the field data reconciles well with in situ borehole measurements. Although combined FWI includes a global inversion, I demonstrate that it is nonetheless affordable and commercially practical for 3D field data.
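    As a hedged illustration of why the anisotropy model matters kinematically (this is Thomsen's standard weak-anisotropy expression, not the thesis workflow; the numbers are arbitrary): the P-wave phase velocity in a VTI medium varies with propagation angle through the epsilon and delta parameters, so errors in those parameters map directly into apparent velocity errors.

    import numpy as np

    def vp_vti(theta, vp0, epsilon, delta):
        """P-wave phase velocity vs angle from vertical (Thomsen, weak anisotropy)."""
        s2 = np.sin(theta) ** 2
        return vp0 * (1.0 + delta * s2 * np.cos(theta) ** 2 + epsilon * s2 ** 2)

    theta = np.radians([0.0, 30.0, 60.0, 90.0])
    print(vp_vti(theta, vp0=3000.0, epsilon=0.2, delta=0.1))   # m/s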

    Seismic Inversion and Uncertainty Analysis using Transdimensional Markov Chain Monte Carlo Method

    We use a transdimensional inversion algorithm, reversible jump MCMC (rjMCMC), in the seismic waveform inversion of post-stack and pre-stack data to characterize reservoir properties such as seismic wave velocity, density, and impedance, and then estimate uncertainty. Each seismic trace is inverted independently based on a layered earth model. The model dimensionality is defined as the number of layers multiplied by the number of model parameters per layer. The rjMCMC is able to infer the number of model parameters from the data itself by allowing it to vary in the iterative inversion process, converging to a proper parameterization and preventing under- and overparameterization. We also use rjMCMC to enhance uncertainty estimation, since it can transdimensionally sample model spaces of different dimensionalities and can prevent biased sampling in only one space, which may have a different dimensionality from that of the true model space. An ensemble of solutions from different spaces can statistically reduce the bias in parameter estimation and uncertainty quantification. Inversion uncertainty comprises property uncertainty and location uncertainty. Our study revealed that the inversion uncertainty is correlated with the discontinuity of a property in such a way that (1) a smaller discontinuity induces a lower uncertainty in the property at the discontinuity but a higher uncertainty in the location of that discontinuity, and (2) a larger discontinuity induces a higher uncertainty in the property at the discontinuity but a higher "certainty" in its location. Therefore, there is a trade-off between the property uncertainty and the location uncertainty. Surprisingly, this trade-off means there is hidden information in the uncertainty result that can be exploited. On the basis of our study using rjMCMC, we propose to use the inversion uncertainty as a novel attribute to characterize the magnitude and the location of subsurface discontinuities and reflectors.
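    The following sketch is a deliberately simplified, fixed-dimension Metropolis sampler (the study uses reversible-jump moves so that the number of layers is itself sampled; the noise level, proposal widths and single-interface model are assumptions). It produces posterior samples of two layer properties and the interface location, from which the property and location uncertainties discussed above can be read off.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 120, 0.1
    z_true, v1_true, v2_true = 60, 1.0, 1.4
    d_obs = np.where(np.arange(n) < z_true, v1_true, v2_true) + rng.normal(0, sigma, n)

    def log_likelihood(v1, v2, z):
        pred = np.where(np.arange(n) < z, v1, v2)
        return -0.5 * np.sum((d_obs - pred) ** 2) / sigma ** 2

    m = np.array([1.2, 1.2, 40.0])                   # start model [v1, v2, z]
    ll = log_likelihood(m[0], m[1], int(m[2]))
    samples = []
    for it in range(20000):
        prop = m + rng.normal(0, [0.02, 0.02, 2.0])  # symmetric random-walk proposal
        if 0 < prop[2] < n:                          # flat prior on the interface depth
            ll_prop = log_likelihood(prop[0], prop[1], int(prop[2]))
            if np.log(rng.uniform()) < ll_prop - ll: # Metropolis acceptance
                m, ll = prop, ll_prop
        samples.append(m.copy())

    post = np.array(samples[5000:])                  # discard burn-in
    print("posterior means (v1, v2, z):", post.mean(axis=0))
    print("posterior stds  (v1, v2, z):", post.std(axis=0))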

    Decimetric-resolution stochastic inversion of shallow marine seismic reflection data: dedicated strategy and application to a geohazard case study

    Characterization of the top 10–50 m of the subseabed is key for landslide hazard assessment, offshore structure engineering design and underground gas-storage monitoring. In this paper, we present a methodology for the stochastic inversion of ultra-high-frequency (UHF, 0.2–4.0 kHz) pre-stack seismic reflection waveforms, designed to obtain a decimetric-resolution remote elastic characterization of the shallow sediments with minimal pre-processing and little a priori information. We use a genetic algorithm in which the space of possible solutions is sampled by explicitly decoupling the short and long wavelengths of the P-wave velocity model. This approach, combined with an objective function robust to cycle skipping, outperforms a conventional model parametrization when the ground truth is offset from the centre of the search domain. The robust P-wave velocity model is used to precondition the width of the search range of the multiparameter elastic inversion, thereby improving the efficiency in high-dimensional parametrizations. Multiple independent runs provide a set of independent results from which the reproducibility of the solution can be estimated. On a real data set acquired in Finneidfjord, Norway, we also demonstrate the sensitivity of UHF seismic inversion to shallow subseabed anomalies that play a role in submarine slope stability. Thus, the methodology has the potential to become an important practical tool for marine ground model building in spatially heterogeneous areas, reducing the reliance on expensive and time-consuming coring campaigns for geohazard mitigation in marine areas.
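    An illustrative sketch of the decoupled parametrization idea (not the published algorithm; the profile, misfit and GA settings are invented): each candidate Vp profile is encoded as a long-wavelength trend (two genes) plus short-wavelength perturbations, and a simple genetic algorithm evolves the population against an L1 misfit.

    import numpy as np

    rng = np.random.default_rng(2)
    depth = np.linspace(0, 30, 60)                      # 0-30 m below seabed
    vp_true = 1500 + 8 * depth + 20 * np.sin(depth)     # synthetic ground truth (m/s)

    def decode(genes):
        """genes = [v0, gradient, perturbations...] -> Vp profile."""
        return genes[0] + genes[1] * depth + genes[2:]

    def misfit(genes):
        return np.mean(np.abs(decode(genes) - vp_true))

    n_pert, pop_size = len(depth), 80
    pop = np.column_stack([rng.uniform(1400, 1700, pop_size),
                           rng.uniform(0, 15, pop_size),
                           rng.normal(0, 10, (pop_size, n_pert))])

    for gen in range(200):
        scores = np.array([misfit(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(pop.shape[1]) < 0.5           # uniform crossover
            child = np.where(mask, a, b)
            child += rng.normal(0, [5.0, 0.2] + [2.0] * n_pert)   # mutation
            children.append(child)
        pop = np.vstack([parents, children])

    print("best L1 misfit (m/s):", min(misfit(ind) for ind in pop))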

    Univariate Financial Time Series Prediction using Clonal Selection Algorithm

    The ability to predict the financial market is beneficial not only to individuals but also to organizations and countries. It is beneficial not only financially but also for making short-term and long-term decisions. This paper presents an experimental study of univariate financial time series prediction using a clonal selection algorithm (CSA). CSA is an optimization algorithm based on clonal selection theory. It belongs to the artificial immune systems, a class of evolutionary algorithms inspired by the vertebrate immune system. Since CSA is an optimization algorithm, the univariate financial time series prediction problem was modeled as an optimization problem using a weighted regression model. CSA was used to search for the optimal set of weights for the regression model so as to generate predictions with the lowest error. Three data sets from the financial market were chosen for the experiments of this study, namely the S&P500 price, the gold price, and the EUR-USD exchange rate. The performance of CSA is measured using RMSE. Because the RMSE for a problem depends on the maximum and minimum values of the data set, the results were not compared across data sets; instead, each result is compared to the range of values of its own data set. The results of the experiments show that CSA can make decent predictions for financial time series, despite being inferior to ARIMA. Hence, this finding implies that CSA can be applied to a univariate financial time series prediction problem, provided that the problem is modeled as an optimization problem.
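    A hedged sketch of the approach described above (the series, lag count and CSA settings are synthetic assumptions, not the paper's data or configuration): antibodies encode the weights of a lag-weighted regression model, affinity is the inverse of the prediction RMSE, and the best antibodies are cloned and hypermutated, with worse-ranked antibodies mutating more strongly.

    import numpy as np

    rng = np.random.default_rng(3)
    series = np.cumsum(rng.normal(0.1, 1.0, 300)) + 100.0   # synthetic price-like series
    lags = 4
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]

    def rmse(weights):
        """One-step-ahead prediction error of the lag-weighted regression model."""
        return np.sqrt(np.mean((X @ weights - y) ** 2))

    pop_size, n_clones, n_gen = 20, 5, 200
    pop = rng.normal(0, 0.5, (pop_size, lags))

    for gen in range(n_gen):
        order = np.argsort([rmse(ab) for ab in pop])
        new_pop = [pop[order[0]]]                           # keep the best antibody
        for rank, idx in enumerate(order[: pop_size // 2]):
            scale = 0.02 * (rank + 1)                       # hypermutation grows with rank
            clones = pop[idx] + rng.normal(0, scale, (n_clones, lags))
            new_pop.append(clones[np.argmin([rmse(c) for c in clones])])
        while len(new_pop) < pop_size:                      # random antibodies for diversity
            new_pop.append(rng.normal(0, 0.5, lags))
        pop = np.array(new_pop)

    print("best RMSE:", min(rmse(ab) for ab in pop))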

    Quantitative application of 4D seismic data for updating thin-reservoir models

    A range of methods allowing quantitative integration of 4D seismic data and reservoir simulation is developed. These methods are designed to work with thin reservoirs, where the seismic response is normally treated in a map-based sense due to the limited vertical resolution of seismic data. The first group of methods comprises fast-track procedures for the prediction of future saturation fronts and for reservoir permeability estimation. The inputs to these methods are pressure and saturation maps, which are intended to be derived from time-lapse seismic attributes. The procedures employ a streamline representation of the fluid flow and a finite difference discretisation of the flow equations. The underlying ideas are drawn from the literature and merged with some innovative new ideas, particularly for the implementation and use. However, my conclusions on the applicability of the methods differ from their literature counterparts and are more conservative. The fast-track procedures are advantageous in terms of speed compared to history matching techniques, but lack coupling between the quantities that describe the reservoir fluid flow: permeabilities, pressures, and saturations. For this reason, these methods are very sensitive to input noise and currently cannot be applied to real datasets with a robust outcome. Seismic history matching is the second major method considered here for integrating 4D seismic data with the reservoir simulation model. Although more computationally demanding, history matching is capable of tolerating high levels of input noise and is more readily applicable to real datasets. The proposed implementation of seismic modelling within the history matching loop is based on a linear regression between the time-lapse seismic attribute maps and the reservoir dynamic parameter maps, thus avoiding petro-elastic and seismic trace modelling. The idea for such a regression is developed from a pressure/saturation inversion approach found in the literature. Testing of the seismic history matching workflow, with the associated uncertainty estimation, is performed for a synthetic model. A reduction of the forecast uncertainties is observed after addition of the 4D seismic information to the history matching process. It is found that a proper formulation of the covariance matrices for the seismic errors is essential to obtain favourable forecasts with small levels of bias. Finally, the procedure is applied to a North Sea field dataset, where a marginal reduction in the prediction uncertainties is observed for the wells located close to the major seismic anomalies. Overall, it is demonstrated that the proposed seismic history matching technique is capable of integrating 4D seismic data with the simulation model and increasing confidence in the latter.
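    A hedged sketch of the regression-based seismic modelling step (data shapes, coefficients and noise are invented; the thesis applies the regression to simulator outputs rather than random maps): a linear regression predicts a time-lapse attribute map from pressure- and saturation-change maps, replacing petro-elastic and trace modelling inside the history-matching misfit.

    import numpy as np

    rng = np.random.default_rng(4)
    ny, nx = 40, 60
    d_press = rng.normal(0.0, 1.0, (ny, nx))             # pressure-change map (stand-in)
    d_sat = rng.normal(0.0, 0.1, (ny, nx))               # saturation-change map (stand-in)
    attr_obs = 0.8 * d_sat - 0.05 * d_press + rng.normal(0, 0.02, (ny, nx))  # observed 4D attribute

    def fit_regression(d_press, d_sat, attr):
        """Least-squares coefficients of attr ~ a*dP + b*dSw + c."""
        A = np.column_stack([d_press.ravel(), d_sat.ravel(), np.ones(d_press.size)])
        coef, *_ = np.linalg.lstsq(A, attr.ravel(), rcond=None)
        return coef

    def predict_attribute(coef, d_press, d_sat):
        return coef[0] * d_press + coef[1] * d_sat + coef[2]

    coef = fit_regression(d_press, d_sat, attr_obs)
    resid = predict_attribute(coef, d_press, d_sat) - attr_obs
    seis_misfit = float(np.sum(resid ** 2))              # contribution to the HM objective
    print("regression coefficients:", coef, " map misfit:", seis_misfit)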