36 research outputs found

    Full waveform inversion procedures with irregular topography

    Full waveform inversion (FWI) is a form of seismic inversion that uses the data residual, measured as the misfit between the whole waveforms of field-acquired and synthesized seismic data, to iteratively update a model estimate until the misfit is sufficiently reduced, indicating that the synthetic data are generated from a relatively accurate model. The aim of this thesis is to review FWI and provide a simplified explanation of the techniques involved for those who are unfamiliar with it. In FWI, the local-minima problem causes the misfit to decrease towards its nearest minimum rather than the global minimum, so the model cannot be accurately updated. Numerous objective functions have been proposed to tackle different sources of local minima. The ‘joint deconvoluted envelope and phase residual’ misfit function proposed in this thesis aims to combine features of these objective functions for a comprehensive inversion. The adjoint state method is used to generate an updated gradient for the search direction, followed by a step-length estimation that produces a scalar applied to the search direction so that the misfit is reduced more efficiently. Synthetic data are derived from forward modelling, which simulates and records propagating waves influenced by the medium's properties. The ‘generalised viscoelastic wave equation in porous media’ is proposed by the author in sub-chapter 3.2.5 to account for these properties. Boundary layers and boundary conditions are employed to mitigate artificial reflections arising from the computational simulations. Linear algebra solvers are an efficient tool for producing the wavefield vectors of frequency-domain synthetic data. Regions with topography require a grid-generation scheme that adjusts a mesh of nodes to fit a non-quadrilateral body.
Computational-coordinate terms are implemented within the wave equations throughout topographic models, where a single point in the physical domain is represented by Cartesian nodes in the computational domain. This helps to generate accurate and appropriate synthetic data without complex modelling computations. Advanced FWI takes a different approach from conventional FWI in that it relaxes the reliance on the misfit function; however, none of its proponents claims that the former can supplant the latter, suggesting instead that the two can be implemented together to recover the true model.
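The misfit-reduction loop described in this abstract (a gradient for the search direction, then a step-length estimate) can be sketched in miniature. The following Python toy is an illustrative assumption of this summary, not the thesis's actual operators: wave-equation modelling is replaced by a small linear operator `G`, the adjoint is simply its transpose, and the step length comes from an exact line search for the L2 misfit.

```python
# Toy full-waveform-inversion loop: iteratively update a model m so that
# synthetic data G @ m matches observed data, by steepest descent with an
# exact line search for the step length. The 3x2 linear operator G is a
# stand-in for the (nonlinear) wave-equation forward modelling.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fwi_descent(G, d_obs, m0, n_iter=50):
    m = list(m0)
    Gt = transpose(G)
    for _ in range(n_iter):
        residual = [s - o for s, o in zip(matvec(G, m), d_obs)]
        grad = matvec(Gt, residual)         # adjoint applied to the residual
        Gg = matvec(G, grad)
        denom = dot(Gg, Gg)
        if denom == 0.0:                    # gradient vanished: converged
            break
        alpha = dot(grad, grad) / denom     # exact step length for the L2 misfit
        m = [mi - alpha * gi for mi, gi in zip(m, grad)]
    return m

G = [[1.0, 0.0], [1.0, 1.0], [0.0, 2.0]]
m_true = [1.5, -0.5]
d_obs = matvec(G, m_true)                   # noise-free "field" data
m_est = fwi_descent(G, d_obs, [0.0, 0.0])
```

For a quadratic misfit the exact step length has the closed form used above; in real FWI the step length must be estimated, for example by fitting a parabola through misfits at a few trial steps.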

    Improving the convergence rate of seismic history matching with a proxy derived method to aid stochastic sampling

    History matching is a very important activity during the continued development and management of petroleum reservoirs. Time-lapse (4D) seismic data provide information on the dynamics of fluids in reservoirs, relating variations in the seismic signal to saturation and pressure changes. This information can be integrated with history matching to improve convergence towards a simulation model that predicts the available data. The main aim of this thesis is to develop a method to speed up the convergence rate of assisted seismic history matching using a proxy-derived gradient method. Stochastic inversion algorithms often rely on simple assumptions for selecting new models by random processes. In this work, we improve the way that such approaches learn about the system they are searching, so that they operate more efficiently. To this end, a new method has been developed called NA with Proxy-derived Gradients (NAPG). To improve convergence, we use a proxy model to understand how the parameters control the misfit, and then use a global stochastic method with these sensitivities to optimise the search of the parameter space. This leads to an improved set of final reservoir models, which in turn can be used more effectively in reservoir management decisions. To validate the proposed approach, we applied it to a number of analytical functions and synthetic cases. In addition, we demonstrate the proposed method by applying it to the UKCS Schiehallion field. The results show that the new method generally speeds up the rate of convergence by a factor of two to three. The performance of NAPG is much improved by updating the regression-equation coefficients instead of keeping them fixed. In addition, we found that the initial number of models needed to start NAPG or NA could be reduced by using experimental design instead of random initialization. Ultimately, with all of these approaches combined, the number of models required to find a good match was reduced by an order of magnitude.
We have investigated the criteria for stopping the SHM loop, particularly the use of a proxy model to assist. More research is needed to complete this work, but the approach is promising. Quantifying parameter uncertainty with NA and NAPG was studied using the NA-Bayes approach (NAB). We found that NAB is very sensitive to the misfit magnitude, but that otherwise NA and NAPG produce similar uncertainty measures.
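The core NAPG idea, learning how parameters control the misfit from a cheap regression proxy and steering the next proposal with the proxy's gradient rather than purely at random, can be illustrated with a deliberately simple one-parameter sketch. The quadratic misfit, sample points, and step factor below are illustrative assumptions, not the thesis's actual proxy or field misfit.

```python
# NAPG-style proxy sketch: fit a quadratic regression proxy to a handful of
# evaluated models, then use the proxy's gradient to steer the next proposal
# downhill instead of proposing blindly.

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    # Least squares for y ~ a*x^2 + b*x + c via the normal equations.
    S = lambda k: sum(x ** k for x in xs)
    A = [[S(4), S(3), S(2)], [S(3), S(2), S(1)], [S(2), S(1), float(len(xs))]]
    rhs = [sum(y * x ** 2 for x, y in zip(xs, ys)),
           sum(y * x for x, y in zip(xs, ys)),
           sum(ys)]
    return solve3(A, rhs)

def misfit(x):                       # stand-in for an expensive simulation run
    return (x - 3.0) ** 2 + 1.0

xs = [0.0, 1.0, 5.0, 6.0]            # models evaluated so far
ys = [misfit(x) for x in xs]
a, b, c = fit_quadratic(xs, ys)
x_cur = 0.0
grad = 2.0 * a * x_cur + b           # proxy-derived gradient at the current model
x_new = x_cur - 0.5 * grad           # propose the next model downhill on the proxy
```

In the real method the proxy informs a global stochastic sampler (NA) rather than replacing it, and its coefficients are re-fitted as new simulations arrive.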

    Faster convergence in seismic history matching by dividing and conquering the unknowns

    The aim in reservoir management is to control field operations to maximize both the short- and long-term recovery of hydrocarbons. This often comprises continuous optimization based on reservoir simulation models in which the significant unknown parameters have been updated by history matching, where they are conditioned to all available data. However, history matching of what is usually a high-dimensional problem requires expensive computing and commercial software resources. Many models are generated, particularly if there are interactions between the properties being updated and their effects on the misfit, which measures the difference between model predictions and observed data. In this work, a novel 'divide and conquer' approach is developed for the seismic history matching method, which efficiently searches for the best values of uncertain parameters such as barrier transmissibilities, net:gross, and permeability by matching well and 4D seismic predictions to observed data. The ‘divide’ is carried out by applying a second-order polynomial regression analysis to identify independent sub-volumes of the parameter hyperspace. These are then ‘conquered’ by searching them separately but simultaneously with an adapted version of the quasi-global stochastic neighbourhood algorithm. This 'divide and conquer' approach is applied to the seismic history matching of the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contained a large number of barriers that affect flow at different times during production, and their transmissibilities were largely unknown. There was also some uncertainty in the petrophysical parameters that controlled permeability and net:gross. Application of the method was possible because the misfit function could be successfully represented as sub-misfits, each dependent on changes in a smaller number of parameters, which could then be searched separately but simultaneously.
Ultimately, the number of models required to find a good match was reduced by an order of magnitude. Experimental design was used to contribute to the efficiency, and the ‘divide and conquer’ approach was also able to separate the misfit on a spatial basis by using time-lapse seismic data in the misfit. The method has provided greater insight into the reservoir behaviour and has been able to predict flow more accurately with a very efficient 'divide and conquer' approach.
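The 'divide and conquer' logic can be sketched minimally: test whether two uncertain parameters interact in the misfit and, if they do not, search the resulting sub-misfits separately. The separable two-parameter misfit and the finite-difference interaction test below are illustrative stand-ins for the thesis's second-order polynomial regression analysis.

```python
# 'Divide and conquer' sketch: if the mixed second derivative of the misfit
# with respect to two parameters is ~0, the misfit splits into independent
# sub-misfits, each of which can be searched on its own.

def misfit(p1, p2):                  # stand-in for a well + 4D seismic misfit
    return (p1 - 1.0) ** 2 + (p2 + 2.0) ** 2

def interaction(f, a, b, h=0.5):
    # Finite-difference estimate of the mixed partial d2f / (dp1 dp2).
    return (f(a + h, b + h) - f(a + h, b) - f(a, b + h) + f(a, b)) / h ** 2

def scan(f1, lo, hi, n=2001):
    # Cheap 1D search of one sub-misfit over [lo, hi].
    best = min(range(n), key=lambda i: f1(lo + (hi - lo) * i / (n - 1)))
    return lo + (hi - lo) * best / (n - 1)

if abs(interaction(misfit, 0.0, 0.0)) < 1e-9:
    # 'Conquer': search each parameter against its own sub-misfit,
    # holding the other fixed (they do not interact).
    p1_best = scan(lambda p: misfit(p, 0.0), -5.0, 5.0)
    p2_best = scan(lambda p: misfit(0.0, p), -5.0, 5.0)
```

In the thesis the grouping is done over many parameters at once via regression coefficients, and each independent group is searched with a stochastic neighbourhood algorithm rather than a grid scan.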

    Advances in Methane Production from Coal, Shale and Other Tight Rocks

    This collection reports on the state of the art in the application of fundamental disciplines to hydrocarbon production and the associated challenges in geoengineering activities. Zheng et al. (2022) report an NMR-based method for multiphase methane characterization in coals. Wang et al. (2022) studied the genesis of bedding fractures in Ordovician to Silurian marine shale in the Sichuan basin. Kang et al. (2022) proposed research focusing on the prediction of shale gas production from horizontal wells. Liang et al. (2022) studied the pore structure of marine shale by the adsorption method in terms of molecular interaction. Zhang et al. (2022) focus on the coal-measure sandstones in the Xishanyao Formation, southern Junggar Basin, and fully reveal the sandstones' diagenetic characteristics. Yao et al. (2022) report on the source-to-sink system of the Ledong submarine channel and the Dongfang submarine fan in the Yinggehai Basin, South China Sea. Four papers focus on technologies associated with hydrocarbon production. Wang et al. (2022) reported the analysis of pre-stack inversion in a carbonate karst reservoir. Chen et al. (2022) conducted an inversion study on the parameters of cascade coexisting gas-bearing reservoirs in the coal measures at Huainan. To ensure the safety of CCS, Zhang et al. (2022) report their analysis of the available conditions for InSAR surface-deformation monitoring. Additionally, to ensure production safety in coal mines, Zhang et al. (2022) report the properties and application of gel materials for coal gangue control.

    Quantitative application of 4D seismic data for updating thin-reservoir models

    A range of methods which allow quantitative integration of 4D seismic data and reservoir simulation are developed. These methods are designed to work with thin reservoirs, where the seismic response is normally treated in a map-based sense owing to the limited vertical resolution of seismic data. The first group of methods comprises fast-track procedures for the prediction of future saturation fronts and for reservoir permeability estimation. The input to these methods is pressure and saturation maps, which are intended to be derived from time-lapse seismic attributes. The procedures employ a streamline representation of the fluid flow and a finite-difference discretisation of the flow equations. The underlying ideas are drawn from the literature and merged with some innovative new ideas, particularly concerning their implementation and use. However, my conclusions on the applicability of these methods differ from their literature counterparts and are more conservative. The fast-track procedures are advantageous in terms of speed compared with history matching techniques, but they lack coupling between the quantities which describe the reservoir fluid flow: permeabilities, pressures, and saturations. For this reason, these methods are very sensitive to input noise and currently cannot be applied to real datasets with a robust outcome. Seismic history matching is the second major method considered here for integrating 4D seismic data with the reservoir simulation model. Although more computationally demanding, history matching is capable of tolerating high levels of input noise and is more readily applicable to real datasets. The proposed implementation of seismic modelling within the history matching loop is based on a linear regression between the time-lapse seismic attribute maps and the reservoir dynamic-parameter maps, thus avoiding petro-elastic and seismic trace modelling.
The idea for such a regression is developed from a pressure/saturation inversion approach found in the literature. Testing of the seismic history matching workflow, with the associated uncertainty estimation, is performed on a synthetic model. A reduction of the forecast uncertainties is observed after the addition of 4D seismic information to the history matching process. It is found that a proper formulation of the covariance matrices for the seismic errors is essential to obtain favourable forecasts with small levels of bias. Finally, the procedure is applied to a North Sea field dataset, where a marginal reduction in the prediction uncertainties is observed for the wells located close to the major seismic anomalies. Overall, it is demonstrated that the proposed seismic history matching technique is capable of integrating 4D seismic data with the simulation model and increasing confidence in the latter.
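The regression-based seismic modelling step can be sketched as follows. The saturation-change and attribute values are invented toy numbers, and the real workflow regresses maps of several dynamic parameters (pressure as well as saturation) rather than the single predictor used here.

```python
# Sketch of the regression-based seismic modelling step: calibrate a linear
# map from a simulated dynamic-parameter map (here, water-saturation change
# per cell) to a time-lapse seismic attribute map, bypassing petro-elastic
# and seismic trace modelling inside the history-matching loop.

def fit_linear(x, y):
    # Closed-form simple linear regression y ~ a*x + b.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Flattened maps: saturation change per cell and the observed 4D attribute.
dsw  = [0.00, 0.05, 0.10, 0.20, 0.30, 0.45]
attr = [0.01, 0.16, 0.30, 0.62, 0.91, 1.36]    # roughly 3*dsw + 0.01

a, b = fit_linear(dsw, attr)
predicted = [a * s + b for s in dsw]           # modelled attribute map
```

Once calibrated, the regression lets each simulation run be converted to a synthetic attribute map at negligible cost, which is what makes it usable inside a history-matching loop.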

    Identification of an appropriate data assimilation approach in seismic history matching and its effect on prediction uncertainty

    Reservoir management may be improved if the present state of the field is known and if changes can be predicted. The former requires information about current fluid sweep and pressure change, while the latter requires an accurate reservoir description and a predictive tool such as a simulation model. With this information, important decisions can then be made, including facility maintenance and well optimisation. We apply an automated history matching method which updates parameters such as permeability, barrier transmissibilities and NTG (net:gross) by matching 4D seismic predictions from the simulations to observed data. Firstly, we look at the choice of starting model in the history matching process by testing our parameterisation and updating scheme to see whether it can convert a realisation into a better representation resembling reality. We set up some synthetic test cases to validate the history matching and parameterisation scheme. We find that, if we use a pilot-point separation equivalent to the range of the variogram used in generating the permeability distributions, we can obtain a good representation of the model. Secondly, we investigate the impact of successively updating barriers by adding new data to our observed dataset, comparing this to a single history match in which all the data are used. We demonstrate the method by applying it to the UKCS Schiehallion reservoir. We update an upscaled version of the operator's model for increased speed. We consider a number of parameters to be uncertain, including barrier transmissibilities. Our results show a good match to the observed seismic and dynamic well data, with significant improvement over the base case. The best result occurs when early data are used in short simulations first, as we learn about optimum parameter values; later data may be added for fine tuning or to explore new parameters. We investigate the value of seismic data in reducing forecasting uncertainty.
The aim here is to examine the reduction in uncertainty that we obtain for Schiehallion when we add 4D seismic data to the history matching procedure. We look at the changes to the parameters, then take some of the best models and predict the behaviour of an in-fill well. We quantify the accuracy of the history-match predictions and the impact of the time-lapse seismic data.

    Long-Short-Term Memory in Active Wavefield Geophysical Methods

    The PhD thesis discusses the application of Long Short-Term Memory (LSTM) networks in active wavefield geophysical methods. This work emphasizes the advantages of Deep Learning (DL) techniques in geophysics, such as improved accuracy, the handling of complex datasets, and reduced subjectivity. The work explores the suitability of LSTM networks compared with Convolutional Neural Networks (CNNs) in selected geophysical applications. The research aims to comprehensively investigate the strengths, limitations, and potential of recurrent neurons, particularly LSTM, in active wavefield geophysics. LSTM networks can capture temporal dependencies and are well suited to analyzing geophysical data with non-stationary behavior. They can process both time- and frequency-domain information, making them valuable for analyzing seismic and Ground Penetrating Radar (GPR) data. The thesis consists of five main chapters covering methodological development, regression, classification, data fusion, and frequency-domain signal processing.
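As a concrete illustration of how an LSTM carries state across a waveform, here is a minimal single-unit LSTM cell in pure Python. The constant weights are illustrative assumptions; in the thesis the networks are trained on seismic and GPR data and have many units per layer.

```python
import math

# Minimal LSTM cell forward pass (one hidden unit), showing how the input,
# forget, and output gates propagate a cell state across a time series,
# which is what lets the network capture temporal dependencies.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    # w holds (input weight, recurrent weight, bias) for each gate.
    i = sigmoid(w['i'][0] * x + w['i'][1] * h + w['i'][2])    # input gate
    f = sigmoid(w['f'][0] * x + w['f'][1] * h + w['f'][2])    # forget gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h + w['g'][2])  # candidate value
    c = f * c + i * g               # cell state: long-term memory
    h = o * math.tanh(c)            # hidden state: short-term output
    return h, c

weights = {k: (0.5, 0.25, 0.0) for k in ('i', 'f', 'o', 'g')}
trace = [0.0, 0.8, -0.3, 0.5, 0.1]  # toy single-channel waveform samples
h = c = 0.0
outputs = []
for x in trace:
    h, c = lstm_step(x, h, c, weights)
    outputs.append(h)
```

Because the hidden state `h` is bounded by the output gate and `tanh`, every output lies in (-1, 1), while the unbounded cell state `c` is what accumulates information over long windows.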

    Methods for Bayesian inversion of seismic data

    The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Such inversions are therefore computationally expensive, because the size of the parameter space over which the posterior is to be determined is proportional to the number of grid cells. In practice, then, approximations to Bayesian seismic inversion must be considered. A particular existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates, respectively. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow. To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. Thus a method is introduced which transforms such results, obtained using prior information specified with only two-point geostatistics, into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or ‘examples’) of these two estimates, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir.
It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently), but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods that are free from the restrictive assumptions. Such methods rely on Markov-chain Monte-Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. Thus an alternative method, which obtains a set of non-correlated samples from the posterior, is developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias. The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images are not always available, however, from which this information (in the form of geostatistics) may be extracted. In this case appropriate training images may be generated by geological experts, but this process can be costly and difficult. Thus an elicitation method (based on a genetic algorithm) is developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. Twelve experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image.
The majority of the experts were able to obtain a set of geostatistics consistent with the true (measured) statistics of the target image.
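The chain-correlation problem that motivates the non-correlated sampler can be demonstrated with a generic random-walk Metropolis example. This is a stand-in illustration on a one-dimensional Gaussian target, not the thesis's Gibbs sampler or its geological posterior: it simply shows that consecutive MCMC samples are strongly correlated, whereas independent draws are not.

```python
import math
import random

# A random-walk Metropolis chain targeting a standard normal density has
# strong sample-to-sample correlation (small steps, high acceptance), unlike
# independent draws from the same distribution.

def metropolis(n, step=0.3, seed=1):
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        # Metropolis acceptance ratio for a N(0, 1) target density.
        if rng.random() < math.exp(0.5 * (x * x - prop * prop)):
            x = prop
        chain.append(x)
    return chain

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / n
    return cov / var

chain = metropolis(20000)                       # correlated MCMC samples
rng = random.Random(2)
indep = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # independent samples
```

With short runs, the high autocorrelation of the chain means its sample average can sit far from the true posterior mean, which is the bias mechanism the thesis's set-of-independent-samples method avoids.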

    Three-dimensional anisotropic full-waveform inversion

    No full text
    Full-waveform inversion (FWI) is a powerful nonlinear tool for the quantitative estimation of high-resolution, high-fidelity models of subsurface seismic parameters, typically P-wave velocity. A solution is obtained via a series of iterative local linearised updates to a start model, requiring this model to lie within the basin of attraction of the solution space's global minimum. The consideration of seismic anisotropy during FWI is vital, as it influences both the kinematics and the dynamics of seismic waveforms. If it is not appropriately taken into account, inadequacies in the anisotropy model are likely to manifest as significant errors in the recovered velocity model. Conventionally, anisotropic FWI either employs an a priori anisotropy model, held fixed during FWI, or uses a local inversion scheme to recover anisotropy as part of FWI; both of these approaches can be problematic. Constructing an anisotropy model prior to FWI often involves intensive (and hence expensive) iterative procedures, while introducing multiple parameters to FWI itself increases the complexity of what is already an underdetermined problem. As an alternative, I propose here a novel approach referred to as combined FWI. This uses a global inversion for long-wavelength acoustic anisotropy, involving no start model, while simultaneously updating the P-wave velocity using mono-parameter local FWI. Combined FWI is then followed by multi-parameter local FWI to recover the detailed final model. To validate the combined FWI scheme, I evaluate its performance on several 2D synthetic datasets and apply it to a full 3D field dataset. The synthetic results establish combined FWI, as part of a two-stage workflow, as more accurate than an equivalent conventional workflow. The solution obtained from the field data reconciles well with in situ borehole measurements.
Although combined FWI includes a global inversion, I demonstrate that it is nonetheless affordable and commercially practical for 3D field data.

    Construction of an initial depth model using robust time-migration velocity-analysis methods

    Advisors: Joerg Dietrich Wilhelm Schleicher, Maria Amélia Novais Schleicher. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências. Abstract: The need to investigate regions with complex geology has encouraged the development of imaging methods that act in the depth domain. Notable examples are prestack depth migration (PSDM) and full-waveform tomography (FWT). However, the application of these techniques faces at least two challenges: they require (1) an accurate (initial) velocity model and (2) massive computational power. In contrast, time migration has proven to be a fast and robust process, making it routinely used for seismic imaging. Moreover, time-domain velocity-model building is a well-understood process. Therefore, it is highly desirable to use time-to-depth conversion to construct starting models for depth-imaging techniques from these time-domain velocity models. In this work, we investigate the applicability of a workflow consisting of some recently developed (semi-)automatic time migration-velocity-analysis (MVA) methods, which can generate a velocity model and a time-migrated image without a priori information, followed by a robust time-to-depth conversion technique.
We discuss the advantages and limitations of this workflow and its prospects for becoming a fully automatic tool capable of generating initial depth velocity models for subsequent FWT. In our tests on different versions of the Marmousi data, the proposed procedure produced sufficiently accurate initial models for an FWT under nearly ideal conditions. Starting from the depth-converted time-domain model, FWT converged to a final model of quality comparable to that obtained when starting from a smoothed version of the true velocity model. This indicates that correct background-velocity information can be successfully extracted by automatic time-domain MVA even in media where time migration cannot provide satisfactory seismic images. In effect, this thesis not only contributes to the development of a workflow for the construction of initial velocity models for FWT but also presents several innovative applications thereof.
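The time-to-depth step at the heart of this workflow can be sketched with a textbook one-dimensional Dix conversion. The picks below are invented values, and the thesis's conversion is a robust technique for laterally varying media rather than this simple vertical-stretch approximation.

```python
import math

# Minimal time-to-depth sketch: Dix inversion of RMS velocities picked in
# two-way time, followed by integration of the interval velocities to depth.

def dix_interval(t, v_rms):
    # Dix formula: interval velocity between consecutive picks.
    v_int = [v_rms[0]]
    for i in range(1, len(t)):
        num = v_rms[i] ** 2 * t[i] - v_rms[i - 1] ** 2 * t[i - 1]
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int

def time_to_depth(t, v_int):
    # Each layer thickness is interval velocity * (two-way-time interval / 2).
    z, depths, t_prev = 0.0, [], 0.0
    for ti, vi in zip(t, v_int):
        z += vi * (ti - t_prev) / 2.0
        depths.append(z)
        t_prev = ti
    return depths

t = [0.5, 1.0, 1.6]                   # two-way traveltime picks (s)
v_rms = [1500.0, 1800.0, 2100.0]      # picked RMS velocities (m/s)
v_int = dix_interval(t, v_rms)
depths = time_to_depth(t, v_int)      # depth of each interface (m)
```

This vertical stretch is only exact for laterally invariant media; in structurally complex settings, image-ray or similar robust conversions are needed, which is precisely why the thesis pairs automatic time-domain MVA with a robust time-to-depth technique.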