    Machine Learning for Seismic Exploration: where are we and how far are we from the Holy Grail?

    Machine learning (ML) applications in seismic exploration are growing faster than in other industry fields, mainly because of the large volumes of data acquired by the exploration industry. ML algorithms are being applied to almost every step of the seismic processing and interpretation workflow, mainly for automation, reduced processing time, and efficiency, and in some cases to improve results. We carried out a literature-based analysis of existing ML-based seismic processing and interpretation published in the SEG and EAGE literature repositories and derived a detailed overview of the main ML thrusts in different seismic applications. For each publication, we extracted metadata about the ML implementation and its performance. The data indicate that current ML implementations in seismic exploration focus on individual tasks rather than on a disruptive change to processing and interpretation workflows. The metadata show that the main targets of ML for seismic processing are denoising, velocity model building, and first-break picking, whereas for seismic interpretation they are fault detection, lithofacies classification, and geobody identification. From the metadata available in the publications, we derived indices related to computational efficiency, data-preparation simplicity, the rate at which ML models are tested on real data, the diversity of ML methods, etc., and used them to approximate the efficiency, effectiveness, and applicability of current ML-based seismic processing and interpretation tasks. The indices for processing tasks show that current ML-based denoising and frequency extrapolation have higher efficiency, whereas ML-based quality control (QC) is more effective and more applicable than other processing tasks. Among the interpretation tasks, ML-based impedance inversion shows high efficiency, whereas fault detection shows high effectiveness. ML-based lithofacies classification, stratigraphic sequence identification, and petrophysical/rock-property inversion exhibit high applicability compared with other interpretation tasks.
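    The index construction described above can be pictured as a simple aggregation over per-publication metadata. The sketch below is purely illustrative: the table columns and scoring rules are assumptions made for this example, not the survey's actual schema.

```python
# Hypothetical sketch of the kind of metadata aggregation described above.
# Column names (task, gpu_hours, real_data_test, prep_steps, ml_method) are
# illustrative assumptions, not the authors' actual schema.
import pandas as pd

# One row per surveyed publication (toy values).
papers = pd.DataFrame({
    "task":           ["denoising", "denoising", "fault_detection", "qc"],
    "gpu_hours":      [2.0, 8.0, 24.0, 1.0],      # training-cost proxy
    "real_data_test": [True, False, True, True],  # tested on field data?
    "prep_steps":     [1, 3, 2, 1],               # data-preparation effort
    "ml_method":      ["CNN", "GAN", "CNN", "SVM"],
})

per_task = papers.groupby("task").agg(
    efficiency=("gpu_hours", lambda s: 1.0 / s.mean()),  # cheaper = more efficient
    applicability=("real_data_test", "mean"),            # share tested on real data
    prep_simplicity=("prep_steps", lambda s: 1.0 / s.mean()),
    method_diversity=("ml_method", "nunique"),
)
print(per_task)
```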

    Seismic data conditioning, attribute analysis, and machine-learning facies classification: applications to Texas panhandle, Australia, New Zealand, and Gulf of Mexico

    Whether analyzed by a human interpreter or by a machine learning algorithm, 3D seismic interpretation is only as good as the data that go into it. The goal of seismic processing is to minimize noise and enhance signal to provide the most accurate image of the subsurface. Once imaged, the resulting migrated data volume can be further enhanced to suppress random and cross-cutting coherent noise and to better balance the spectrum, improving vertical resolution. Seismic attributes then enhance subtle geologic features that might otherwise be overlooked. At this point, skilled human interpreters are very adept not only at seeing patterns in the data, but also at constructing correlations in their minds between multiple attributes and geologic features of interest. Machine learning algorithms are not yet at this point. Several machine learning algorithms require, and many others perform better on, data that exhibit Gaussian statistics, so the attribute volumes to be analyzed need to be carefully scaled. Filters that block and smooth the attribute volume, mimicking what a human interpreter "sees," provide further improvement. In this dissertation, I address most of these data-conditioning challenges, as well as adapting and recoding the machine learning algorithms themselves.

    Conventional imaging of shallow targets often results in severe migration aliasing. To improve the interpretation of a shallow fractured-basement reservoir in the Texas Panhandle, I developed a data-conditioning technique, constrained conjugate-gradient least-squares migration, and applied it to the prestack unmigrated data of the study area. I found that constrained conjugate-gradient least-squares migration can increase the signal-to-noise ratio, suppress migration artifacts, and improve seismic inversion results.

    Although 3D seismic surveys are routinely acquired, in frontier areas much of our data consist of a grid of 2D seismic lines. Few publications discuss the application and limitations of modern seismic attributes on 2D lines, and fewer still the application of machine learning. I used a grid of 2D lines acquired over a turbidite channel system and carbonate sequences in the Exmouth Plateau, North Carnarvon Basin, Australia, to address this question. First, I modified 3D data-conditioning workflows, including nonlinear spectral balancing and structure-oriented filtering, and found that spectral balancing followed by structure-oriented filtering provides superior results. All of the more common attributes perform well, although 2D analysis provides only apparent dip and apparent curvature in the inline direction, rather than true dip magnitude and azimuth or most-positive and most-negative curvature and their strikes. I analyzed coherence, curvature, reflector convergence, and envelope attributes using self-organizing maps and successfully mapped turbidite canyons, carbonate mounds, and mass-transport complexes (MTCs) in the study area.

    Although some attributes exhibit Gaussian statistics, most do not, yet even though many machine learning algorithms assume Gaussian statistics, most applications apply a simple Z-score normalization. I therefore compared the results of seismic facies classification of a Canterbury Basin turbidite system using the traditional Z-score normalization versus a normalization I developed that addresses skewness, kurtosis, and other scaling features of the attribute histogram. I found that logarithmic normalizations of skewed distributions are better input to the unsupervised PCA, ICA, SOM, and GTM classification algorithms, but worse for the supervised PNN classification algorithm. In contrast, supervised classification benefits greatly from a class-dependent normalization scheme, in which the training data are normalized differently for each class. A minimal illustration of the Z-score versus logarithmic normalization contrast follows this abstract.
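    The sketch below demonstrates the contrast on a synthetic skewed attribute: a plain Z-score leaves the skewness untouched, while a log transform before standardization largely removes it. The signed-log transform used here is one common choice and may differ from the exact formulation developed in the dissertation.

```python
# Illustrative comparison of Z-score versus logarithmic normalization of a
# skewed seismic attribute, in the spirit of the scheme described above.
import numpy as np

rng = np.random.default_rng(0)
attribute = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # strongly skewed

def zscore(x):
    # Standardize to zero mean, unit variance (does not change skewness).
    return (x - x.mean()) / x.std()

def signed_log_zscore(x):
    # Compress the heavy tail first, then standardize.
    y = np.sign(x) * np.log1p(np.abs(x))
    return zscore(y)

for name, f in [("z-score", zscore), ("log + z-score", signed_log_zscore)]:
    y = f(attribute)
    skew = ((y - y.mean()) ** 3).mean() / y.std() ** 3
    print(f"{name:>14}: skewness = {skew:+.2f}")
```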

    Application of machine learning for the extrapolation of seismic data

    Low frequencies in seismic data are often challenging to acquire. Without low frequencies, however, a method like full-waveform inversion may fail due to cycle skipping. This thesis investigates the potential of neural networks for low-frequency extrapolation to overcome this problem. Several steps are needed to achieve this goal: first, suitable data for training and testing the network must be found; second, the data must be pre-processed to condition them for machine learning and efficient application; third, a specific workflow for low-frequency extrapolation must be designed; finally, the trained network can be applied to data it has not seen before and compared to reference data. In this work, synthetic data are used for training and evaluation because, in such a controlled experiment, the target for the network is known. For this purpose, 30 random but geologically plausible subsurface models were generated, based on a simplified geology around the Asse II salt mine, and used for finite-difference simulations of seismograms. The corresponding shot gathers were pre-processed by, among other steps, normalizing them and splitting them into patches, and were fed into a convolutional neural network (U-Net) to assess the network's performance and its ability to reconstruct the data. Two approaches to low-frequency extrapolation were investigated: the first uses only the low frequencies as the network's target, while the second targets the full bandwidth. The latter yielded superior results and was therefore chosen for subsequent applications. Further tests of the network design led to the introduction of ResNet blocks in place of simple convolutions in the U-Net layers, and to the use of a mean-absolute-error instead of a mean-squared-error loss function. The final network was then applied to the synthetic data originally reserved for testing. It turned out that the chosen method can successfully extrapolate low frequencies by more than half an octave (from about 8 Hz down to 5 Hz) in the experimental setup at hand. Although the results deteriorate in the low-frequency band at larger offsets, full-waveform inversion will overall benefit from the presented machine learning approach.
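    A minimal sketch of the network family described above follows: a U-Net whose stages use residual (ResNet-style) blocks, trained with a mean-absolute-error (L1) loss to map band-limited shot-gather patches to full-bandwidth targets. The layer widths, depth, and additive skip connections are illustrative assumptions, not the thesis configuration.

```python
# Minimal PyTorch sketch: U-Net with ResNet-style blocks and an L1 (MAE) loss,
# assumed hyperparameters throughout.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection

class ResUNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.inc = nn.Conv2d(1, ch, 3, padding=1)
        self.enc1, self.enc2 = ResBlock(ch), ResBlock(2 * ch)
        self.down1 = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.down2 = nn.Conv2d(2 * ch, 4 * ch, 3, stride=2, padding=1)
        self.mid = ResBlock(4 * ch)
        self.up2 = nn.ConvTranspose2d(4 * ch, 2 * ch, 2, stride=2)
        self.dec2 = ResBlock(2 * ch)
        self.up1 = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = ResBlock(ch)
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        e1 = self.enc1(self.inc(x))       # full resolution
        e2 = self.enc2(self.down1(e1))    # 1/2 resolution
        m = self.mid(self.down2(e2))      # 1/4 resolution
        d2 = self.dec2(self.up2(m) + e2)  # additive skip (concat is also common)
        d1 = self.dec1(self.up1(d2) + e1)
        return self.out(d1)

# One training step on a toy batch: input = band-limited patches,
# target = full-bandwidth patches (random stand-ins here).
model = ResUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                     # MAE, as preferred over MSE above

band_limited = torch.randn(4, 1, 64, 64)  # (batch, channel, time, offset)
full_band = torch.randn(4, 1, 64, 64)
loss = loss_fn(model(band_limited), full_band)
opt.zero_grad(); loss.backward(); opt.step()
print(f"L1 loss: {loss.item():.3f}")
```

    Targeting the full bandwidth rather than only the missing low band, as the thesis reports, lets the network exploit the correlation between the bands it sees and the bands it must predict; the L1 loss is less sensitive to amplitude outliers than MSE.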