
    Learning to Label Seismic Structures with Deconvolution Networks and Weak Labels

    Recently, there has been increasing interest in using deep learning techniques for various seismic interpretation tasks. However, unlike shallow machine learning models, deep learning models are often far more complex and can have hundreds of millions of free parameters. This not only means that large amounts of computational resources are needed to train these models, but more critically, they require vast amounts of labeled training data as well. In this work, we show how automatically generated weak labels can be effectively used to overcome this problem and train powerful deep learning models for labeling seismic structures in large seismic volumes. To achieve this, we automatically generate thousands of weak labels and use them to train a deconvolutional network for labeling fault, salt dome, and chaotic regions within the Netherlands F3 block. Furthermore, we show how modifying the loss function to take the weak training labels into account helps reduce false positives in the labeling results. The benefit of this work is that it enables the effective training and deployment of deep learning models for various seismic interpretation tasks without requiring any manual labeling effort. We show excellent results on the Netherlands F3 block and show how our model outperforms other baseline models.
    Comment: Published in the proceedings of the Society of Exploration Geophysicists' 2018 Annual Meeting
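
A minimal sketch of the weak-label idea, assuming PyTorch (the class name, the per-pixel confidence weights, and the four-class setup are illustrative assumptions, not the paper's exact formulation): a pixel-wise cross-entropy is scaled by the confidence of each automatically generated weak label so that dubious labels contribute less, which is one way such a loss modification can reduce false positives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeakLabelLoss(nn.Module):
    """Pixel-wise cross-entropy that down-weights low-confidence weak labels.

    `confidence` is a hypothetical per-pixel weight in [0, 1] produced
    alongside the automatically generated weak labels; the paper's exact
    weighting scheme may differ.
    """
    def forward(self, logits, labels, confidence):
        # logits: (N, C, H, W); labels, confidence: (N, H, W)
        per_pixel = F.cross_entropy(logits, labels, reduction="none")
        return (per_pixel * confidence).mean()

# Stand-in tensors for a 4-class labeling task
# (fault, salt dome, chaotic, other) on 128x128 sections.
logits = torch.randn(2, 4, 128, 128)         # deconvolution-network output
labels = torch.randint(0, 4, (2, 128, 128))  # weak labels
conf = torch.rand(2, 128, 128)               # weak-label confidence
loss = WeakLabelLoss()(logits, labels, conf)
```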

    A comparative study of texture attributes for characterizing subsurface structures in seismic volumes

    In this paper, we explore how to computationally characterize subsurface geological structures present in seismic volumes using texture attributes. For this purpose, we conduct a comparative study of typical texture attributes from the image processing literature. We focus on spatial attributes in this study and examine them in a new application for seismic interpretation, namely seismic volume labeling. In this application, a data volume is automatically segmented into various structures, each assigned its corresponding label. If the labels are assigned with reasonable accuracy, such volume labeling can help initiate the interpretation process more effectively. Our investigation demonstrates the feasibility of accomplishing this task using texture attributes. Through the study, we also identify the advantages and disadvantages associated with each attribute.
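
As a concrete illustration, the sketch below computes gray-level co-occurrence (GLCM) texture attributes over a window of a seismic section with scikit-image; the attribute set, quantization, and window size are assumptions for the example, not the paper's exact study design.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_attributes(window: np.ndarray, levels: int = 32) -> np.ndarray:
    """GLCM-based texture attributes for one 2D window of seismic amplitudes."""
    # Quantize amplitudes to integer gray levels for the co-occurrence matrix.
    lo, hi = window.min(), window.max()
    q = np.floor((window - lo) / (hi - lo + 1e-9) * (levels - 1)).astype(np.uint8)
    # Co-occurrence at distance 1 over four orientations.
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = np.random.randn(64, 64)    # stand-in for a seismic window
features = glcm_attributes(patch)  # 16 values: 4 properties x 4 angles
```

Attribute vectors like this, computed per voxel neighborhood, can then be fed to a classifier that assigns each region its structure label.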

    Learning from seismic data to characterize subsurface volumes

    The exponential growth of data collected from seismic surveys makes it impossible for interpreters to manually inspect, analyze, and annotate all of it. Deep learning has proved to be a promising mechanism for overcoming big-data problems in various computer vision tasks such as image classification and semantic segmentation. However, its applications in subsurface volume characterization remain limited due to the scarcity of consistently annotated seismic datasets. Obtaining annotations of seismic data is a labor-intensive process that requires field knowledge. Moreover, seismic interpreters rely on the few direct high-resolution measurements of the subsurface from well logs and core data to confirm their interpretations. Different interpreters might arrive at different valid interpretations of the subsurface, all of which agree with the well logs and core data. Therefore, to successfully utilize deep learning for subsurface characterization, one must address and circumvent the shortage of consistently annotated data. In this dissertation, we introduce a learning-based, physics-guided subsurface volume characterization framework that can learn from limited, inconsistently annotated data. The introduced framework integrates seismic data with the limited well-log data to characterize the subsurface at higher-than-seismic resolution. It takes into account the physics that governs seismic data to overcome the noise and artifacts that are often present in the data. Integrating a physical model into deep learning frameworks improves their ability to generalize beyond the training data. Furthermore, the physical model enables deep networks to learn from unlabeled data, in addition to a few annotated examples, in a semi-supervised learning scheme. Applications of the introduced framework are not limited to subsurface volume characterization; it can be extended to other domains in which data represent a physical phenomenon and annotated data are scarce.
    Ph.D. dissertation
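
A minimal sketch of the physics-guided, semi-supervised idea (the convolutional forward model, wavelet, and loss weighting below are illustrative assumptions, not the dissertation's exact formulation): the network's high-resolution prediction must reproduce the recorded seismic through a simple physics model on every trace, while only the few traces with well-log measurements receive a direct supervised loss.

```python
import torch
import torch.nn.functional as F

def physics_forward(reflectivity, wavelet):
    """Toy convolutional forward model: seismic ~ wavelet * reflectivity."""
    return F.conv1d(reflectivity, wavelet, padding=wavelet.shape[-1] // 2)

def semi_supervised_loss(pred, seismic, wavelet, well_logs=None, alpha=1.0):
    # Unsupervised, physics-guided term: every predicted trace must
    # explain the recorded seismic through the forward model.
    recon = physics_forward(pred, wavelet)
    loss = F.mse_loss(recon[..., : seismic.shape[-1]], seismic)
    # Supervised term: only where well-log measurements exist.
    if well_logs is not None:
        loss = loss + alpha * F.mse_loss(pred, well_logs)
    return loss

# Stand-in tensors: 8 traces of 256 samples and a 31-tap wavelet.
pred = torch.randn(8, 1, 256, requires_grad=True)  # network prediction
seis = torch.randn(8, 1, 256)                      # recorded seismic
wav = torch.randn(1, 1, 31)                        # source wavelet
semi_supervised_loss(pred, seis, wav).backward()
```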

    Visual Representation Learning with Limited Supervision

    The quality of a computer vision system is proportional to the rigor of the data representation it is built upon. Learning expressive representations of images is therefore the centerpiece of almost every computer vision application, including image search, object detection and classification, human re-identification, object tracking, pose understanding, image-to-image translation, and embodied agent navigation, to name a few. Deep neural networks are the most common modern method of representation learning. Their limitation, however, is that they require extremely large amounts of manually labeled training data. Annotating vast numbers of images for various environments is clearly infeasible due to cost and time constraints, and this need for labeled data is a prime restriction on the pace of development of visual recognition systems. To cope with the exponentially growing amounts of visual data generated daily, machine learning algorithms have to at least strive to scale at a similar rate. The second challenge is that learned representations must generalize to novel objects, classes, environments, and tasks to accommodate the diversity of the visual world. Despite the ever-growing number of recent publications tangentially addressing the topic of learning generalizable representations, efficient generalization is yet to be achieved.

    This dissertation attempts to tackle the problem of learning visual representations that can generalize to novel settings while requiring few labeled examples. In this research, we study the limitations of existing supervised representation learning approaches and propose a framework that improves the generalization of learned features by exploiting visual similarities between images that are not captured by the provided manual annotations. Furthermore, to mitigate the common requirement of large-scale manually annotated datasets, we propose several approaches that can learn expressive representations without human-attributed labels, in a self-supervised fashion, by grouping highly similar samples into surrogate classes based on progressively learned representations.

    The development of computer vision as a science depends on a machine's ability to record and disentangle attributes of pictures once thought perceivable only by humans. Particular interest is therefore dedicated to analyzing the means of artistic expression and style, a more complex task than merely breaking an image down into colors and pixels. The ultimate test of this ability is style transfer, which involves altering the style of an image while keeping its content. An effective solution to style transfer requires learning an image representation that disentangles image style from content. Moreover, particular artistic styles come with idiosyncrasies that affect which content details should be preserved and which discarded. Another pitfall is that it is impossible to obtain pixel-wise annotations of style or of how the style should be altered. We address this problem by proposing an unsupervised approach that encodes image content in the way a particular style requires. The proposed approach exchanges the style of an input image by first extracting the content representation in a style-aware way and then rendering it in a new style using a style-specific decoder network, achieving compelling results in image and video stylization. Finally, we combine supervised and self-supervised representation learning techniques for the task of human and animal pose understanding. The proposed method enables transferring the representation learned for human pose recognition to proximal mammal species without using labeled animal images. This approach is not limited to dense pose estimation and could potentially enable autonomous agents, from robots to self-driving cars, to retrain themselves and adapt to novel environments by learning from previous experiences.
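
A minimal sketch of the surrogate-class idea (the clustering step and names are illustrative assumptions, not the dissertation's exact procedure): unlabeled samples are grouped by the similarity of their current embeddings, and the group indices serve as pseudo-labels for the next round of training, so the surrogate classes improve as the representation improves.

```python
import numpy as np
from sklearn.cluster import KMeans

def assign_surrogate_classes(embeddings: np.ndarray, num_groups: int) -> np.ndarray:
    """Group highly similar samples into surrogate classes.

    embeddings: (num_samples, dim) features from the current network.
    Returns one pseudo-label per sample, used in place of manual
    annotations when training the next iteration of the model.
    """
    # L2-normalize so Euclidean k-means approximates cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-9
    return KMeans(n_clusters=num_groups, n_init=10).fit_predict(embeddings / norms)

# One round of embed -> cluster -> retrain on the pseudo-labels.
feats = np.random.randn(1000, 128).astype(np.float32)  # stand-in embeddings
pseudo_labels = assign_surrogate_classes(feats, num_groups=50)
```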