Including Physics in Deep Learning -- An example from 4D seismic pressure saturation inversion
Geoscience analyses often have to rely on strong priors in the face of uncertainty. Additionally, we often try to detect or model anomalous sparse data that appear as outliers to machine learning models. These are classic examples of imbalanced learning. Approaching these problems can benefit from including prior information from physics models or from transforming the data to a beneficial domain. We show an example of including physical information in the architecture of a neural network as prior information. We go on to present noise injection at training time to successfully transfer the network from synthetic data to field data. (Extended abstract, 5 pages, 5 figures, EAGE 2019 Workshop Programme, European Association of Geoscientists and Engineers.)
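A minimal sketch of the train-time noise-injection idea, not the authors' exact recipe: all names (`model`, `train_step`, `noise_std`) and the noise level are illustrative assumptions.

```python
# Sketch: inject Gaussian noise into synthetic inputs during training so the
# network learns features that survive the noise present in field data.
import torch

noise_std = 0.1  # assumed noise level; the abstract does not specify a value

def train_step(model, batch, optimizer, loss_fn):
    x, y = batch
    # Perturb only the inputs; targets stay clean.
    x_noisy = x + noise_std * torch.randn_like(x)
    optimizer.zero_grad()
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```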
Information Theory Considerations in Patch-based Training of Deep Neural Networks on Seismic Time-Series
Recent advances in machine learning rely on convolutional deep neural networks, which are often trained on cropped image patches. For non-stationary seismic signals, this cropping may introduce low-frequency noise and harm generalization.
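A back-of-the-envelope illustration of the point, with assumed numbers: the duration of a cropped patch bounds the lowest frequency it can represent.

```python
# Assumed values, not from the paper: a 64-sample patch at a 4 ms sample
# interval spans 0.256 s, so frequencies below ~3.9 Hz cannot complete one
# cycle inside the patch and are lost or aliased by the crop.
dt = 0.004        # seismic sample interval in seconds (assumed)
patch_len = 64    # patch height in time samples (assumed)

f_min = 1.0 / (patch_len * dt)
print(f"lowest representable frequency: {f_min:.1f} Hz")  # -> 3.9 Hz
```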
Gaussian Mixture Models for Robust Unsupervised Scanning-Electron Microscopy Image Segmentation of North Sea Chalk
Scanning-electron microscopy images of North Sea chalk are studied manually for important rock properties. To relieve this manual labor, we investigated several standard image-processing methods, which underperformed on complicated chalk. Due to the lack of manually labeled data, deep neural networks could not be adequately applied. Gaussian Mixture Models learnt a two-fold representation that separates the background well from the rock; subsequent morphological filtering cleans up the prediction and enables automatic analysis.
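A minimal sketch of such a pipeline, assuming scikit-learn and SciPy: the two-component mixture follows the abstract, but all parameter settings (iteration counts, random seed) are illustrative.

```python
# Sketch: a 2-component GMM on pixel intensities separates background from
# rock; morphological opening/closing then cleans up the binary mask.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy import ndimage

def segment(image: np.ndarray) -> np.ndarray:
    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(image.reshape(-1, 1)).reshape(image.shape)
    # Make label 1 the brighter (rock) phase for a consistent output.
    if gmm.means_[0, 0] > gmm.means_[1, 0]:
        labels = 1 - labels
    # Opening removes isolated speckle; closing fills small holes.
    mask = ndimage.binary_opening(labels.astype(bool), iterations=2)
    mask = ndimage.binary_closing(mask, iterations=2)
    return mask
```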
Improving medium-range ensemble weather forecasts with hierarchical ensemble transformers
Statistical post-processing of global ensemble weather forecasts is revisited
by leveraging recent developments in machine learning. Verification of past
forecasts is exploited to learn systematic deficiencies of numerical weather
predictions in order to boost post-processed forecast performance. Here, we
introduce PoET, a post-processing approach based on hierarchical transformers.
PoET has two major characteristics: 1) the post-processing is applied directly to
the ensemble members rather than to a predictive distribution or a functional
of it, and 2) the method is ensemble-size agnostic in the sense that the number
of ensemble members in training and inference mode can differ. The PoET output
is a set of calibrated members that has the same size as the original ensemble
but with improved reliability. Performance assessments show that PoET can bring
up to 20% improvement in skill globally for 2m temperature and 2% for
precipitation forecasts and outperforms the simpler statistical
member-by-member method, used here as a competitive benchmark. PoET is also applied to the ENS10 benchmark dataset for ensemble post-processing and provides better results than other deep learning solutions for most evaluated parameters. Furthermore, because each ensemble member is calibrated separately, downstream applications should directly benefit from the improvement that post-processing makes to the ensemble forecast.
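PoET's hierarchical architecture is not reproduced here; the sketch below only illustrates the ensemble-size-agnostic property the abstract describes. Self-attention applied across the member axis is length-agnostic, so the number of members can differ between training and inference, and each member receives its own calibrated output. All layer sizes and names are assumptions.

```python
# Sketch of size-agnostic, member-wise post-processing via self-attention
# over the ensemble dimension (not PoET's actual architecture).
import torch
import torch.nn as nn

class MemberAttention(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, members, features); the member count is unconstrained.
        h = self.embed(x)
        h, _ = self.attn(h, h, h)
        # Residual correction: output is a set of members, same size as input.
        return x + self.head(h)

# Train on e.g. 10-member ensembles, run inference on 50 members unchanged.
model = MemberAttention(n_features=8)
out = model(torch.randn(2, 50, 8))  # shape (2, 50, 8)
```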
Rapid seismic domain transfer: Seismic velocity inversion and modeling using deep generative neural networks
Traditional physics-based approaches to infer sub-surface properties such as
full-waveform inversion or reflectivity inversion are time-consuming and
computationally expensive. We present a deep-learning technique that eliminates
the need for these computationally complex methods by posing the problem as one
of domain transfer. Our solution is based on a deep convolutional generative
adversarial network and dramatically reduces computation time. Training based
on two different types of synthetic data produced a neural network that
generates realistic velocity models when applied to a real dataset. The
system's ability to generalize means it is robust against the inherent
occurrence of velocity errors and artifacts in both training and test datasets. (Extended abstract submitted to EAGE 2018, 5 pages, 3 figures.)
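As a minimal sketch of the domain-transfer idea, a convolutional encoder-decoder generator maps a seismic section to a velocity model of the same spatial shape in one forward pass. Layer sizes are assumptions, and the adversarial discriminator used during training is omitted.

```python
# Sketch of a generator for seismic-to-velocity domain transfer (not the
# paper's exact network); in a GAN it would be trained against a
# discriminator that judges the realism of generated velocity models.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, seismic: torch.Tensor) -> torch.Tensor:
        # (B, 1, H, W) seismic section -> (B, 1, H, W) velocity model.
        return self.decode(self.encode(seismic))

velocity = Generator()(torch.randn(1, 1, 64, 64))  # shape (1, 1, 64, 64)
```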