75 research outputs found

    Seismic Data Interpolation based on Denoising Diffusion Implicit Models with Resampling

    The incompleteness of seismic data caused by missing traces along the spatial extension is a common issue in seismic acquisition, owing to obstacles and economic constraints, and it severely impairs the imaging quality of subsurface geological structures. Deep learning-based seismic interpolation methods have recently made promising progress, but stable training of generative adversarial networks is difficult to achieve, and performance typically degrades when the missing patterns at test time do not match those seen in training. In this paper, we propose a novel seismic denoising diffusion implicit model with resampling. Training follows the denoising diffusion probabilistic model, with a U-Net equipped with multi-head self-attention to predict the noise at each step. A cosine noise schedule, serving as the global noise configuration, promotes effective use of the known-trace information by passing quickly through the stages of excessive noise. Inference uses the denoising diffusion implicit model, conditioned on the known traces, to enable high-quality interpolation with fewer diffusion steps. To enhance the coherence between the known and missing traces within each reverse step, the inference process integrates a resampling strategy that revisits and refines previously interpolated traces. Extensive experiments on synthetic and field seismic data validate the superiority of our model and its robustness to various missing patterns. Uncertainty quantification and ablation studies are also presented. Comment: 14 pages, 13 figures
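    The conditioning-plus-resampling idea can be made concrete with a short sketch. The following is a minimal, hedged illustration of mask-conditioned diffusion inpainting with resampling (in the spirit of RePaint-style schemes); the function names, the eta = 0 DDIM update, and the schedule handling are assumptions for illustration, not the authors' exact algorithm.

        import torch

        def conditioned_reverse_step(x_t, t, known, mask, model, acp):
            """One reverse step of mask-conditioned diffusion inpainting.

            x_t   : current noisy sample, shape (B, 1, T, X)
            known : observed gather (zeros at missing traces)
            mask  : 1.0 on known traces, 0.0 on missing ones
            acp   : cumulative alpha-bar schedule, acp[t] in (0, 1]
            """
            a_t = acp[t]
            a_prev = acp[t - 1] if t > 0 else torch.tensor(1.0)

            # DDIM (eta = 0) estimate of x_{t-1} for the missing region.
            eps = model(x_t, t)
            x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            x_prev_miss = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps

            # Diffuse the known traces forward to the same noise level t-1.
            x_prev_known = a_prev.sqrt() * known \
                         + (1 - a_prev).sqrt() * torch.randn_like(known)

            # Stitch: keep the known traces, fill gaps with the estimate.
            return mask * x_prev_known + (1 - mask) * x_prev_miss

        def resampled_step(x_t, t, known, mask, model, acp, n_resample=5):
            """Re-noise x_{t-1} back to level t and redo the step, so the
            known and interpolated traces become mutually coherent."""
            for u in range(n_resample):
                x_prev = conditioned_reverse_step(x_t, t, known, mask, model, acp)
                if u < n_resample - 1 and t > 0:
                    beta = 1 - acp[t] / acp[t - 1]  # one-step noise rate
                    x_t = (1 - beta).sqrt() * x_prev \
                        + beta.sqrt() * torch.randn_like(x_prev)
            return x_prev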

    Generative adversarial networks review in earthquake-related engineering fields

    Within seismology, geology, and civil and structural engineering, deep learning (DL), especially via generative adversarial networks (GANs), offers an innovative and advantageous way to generate reliable synthetic data that reproduce the characteristics of actual samples, providing a handy data augmentation tool. Indeed, in many practical applications, obtaining a significant amount of high-quality data is demanding. Data augmentation is generally based on artificial intelligence (AI) and machine-learning data-driven models. The DL GAN-based approach to generating synthetic seismic signals has revolutionized the current data augmentation paradigm. This study delivers a critical state-of-the-art review of recent research on AI-based GAN generation of synthetic ground-motion signals and seismic events, together with a comprehensive view of related geophysical studies. It is relevant to earth and planetary science, geology and seismology, and oil and gas exploration, as well as to assessing the seismic response of buildings and infrastructure, seismic detection tasks, and general structural and civil engineering applications. Furthermore, highlighting the strengths and limitations of current studies on adversarial learning applied to seismology may help guide future research toward the most promising directions.
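    As context for readers unfamiliar with the mechanics being reviewed, the following is a minimal, generic GAN training loop for waveform augmentation; the architectures, window length, and hyperparameters are placeholders, not any specific model from the reviewed literature.

        import torch
        import torch.nn as nn

        # Toy 1-D generator/discriminator for ground-motion windows of
        # 1024 samples; purely illustrative placeholder architectures.
        G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                          nn.Linear(256, 1024), nn.Tanh())
        D = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(0.2),
                          nn.Linear(256, 1))

        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        def train_step(real):  # real: (B, 1024) recorded waveform windows
            z = torch.randn(real.size(0), 100)
            fake = G(z)

            # Discriminator: push real -> 1, fake -> 0.
            opt_d.zero_grad()
            loss_d = bce(D(real), torch.ones(real.size(0), 1)) \
                   + bce(D(fake.detach()), torch.zeros(real.size(0), 1))
            loss_d.backward()
            opt_d.step()

            # Generator: make the discriminator label fakes as real.
            opt_g.zero_grad()
            loss_g = bce(D(fake), torch.ones(real.size(0), 1))
            loss_g.backward()
            opt_g.step()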

    Anti-Aliasing Add-On for Deep Prior Seismic Data Interpolation

    Data interpolation is a fundamental step in any seismic processing workflow. Among the machine learning techniques recently proposed to solve data interpolation as an inverse problem, the Deep Prior paradigm employs a convolutional neural network to capture priors on the data and thereby regularize the inversion. However, this technique lacks reconstruction precision when interpolating highly decimated data, due to aliasing. In this work, we improve Deep Prior inversion by adding a directional Laplacian as a regularization term. This regularizer drives the optimization toward solutions that honor the slopes estimated from the low frequencies of the interpolated data. We provide numerical examples to showcase the methodology, showing that our results are less prone to aliasing, even in the presence of noisy and corrupted data.
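    A short sketch may clarify the structure of such a regularized deep-prior inversion. The network, slope estimation, weight lam, and iteration count below are assumptions for illustration; the directional Laplacian is written as the standard second derivative along the local dip direction (1, sigma), not necessarily the authors' exact operator.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def directional_laplacian(u, sigma):
            """Second derivative of u along the local dip (1, sigma):
            u_xx + 2*sigma*u_xt + sigma**2 * u_tt, central differences.
            u, sigma: (B, 1, T, X); sigma holds slopes in samples/trace."""
            u_xx = u[..., 1:-1, 2:] - 2 * u[..., 1:-1, 1:-1] + u[..., 1:-1, :-2]
            u_tt = u[..., 2:, 1:-1] - 2 * u[..., 1:-1, 1:-1] + u[..., :-2, 1:-1]
            u_xt = (u[..., 2:, 2:] - u[..., 2:, :-2]
                    - u[..., :-2, 2:] + u[..., :-2, :-2]) / 4.0
            s = sigma[..., 1:-1, 1:-1]
            return u_xx + 2 * s * u_xt + s ** 2 * u_tt

        # Placeholder CNN mapping a fixed noise tensor z to the full gather.
        net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        def invert(obs, mask, slopes, z, lam=0.1, iters=3000):
            """obs, mask, slopes, z: (B, 1, T, X); slopes pre-estimated
            from the low frequencies of the interpolated data."""
            for _ in range(iters):
                opt.zero_grad()
                rec = net(z)
                data_term = F.mse_loss(rec * mask, obs * mask)
                reg_term = directional_laplacian(rec, slopes).pow(2).mean()
                (data_term + lam * reg_term).backward()
                opt.step()
            return net(z).detach()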

    MDA GAN: Adversarial-Learning-based 3-D Seismic Data Interpolation and Reconstruction for Complex Missing

    The interpolation and reconstruction of missing traces is a crucial step in seismic data processing; moreover, it is a highly ill-posed problem, especially in complex cases such as high-ratio random discrete missing, continuous missing, and missing traces in fault-rich or salt-body surveys. These complex cases are rarely addressed by current sparse- or low-rank-prior-based and deep learning-based approaches. To cope with them, we propose the Multi-Dimensional Adversarial GAN (MDA GAN), a novel 3-D GAN framework. It employs three discriminators to ensure that the reconstructed data are consistent with the original data distribution in each dimension. A feature splicing module (FSM) is designed and embedded into the generator, automatically splicing the features of the non-missing part with those of the reconstructed (missing) part and thus fully preserving the information of the non-missing part. To prevent pixel distortion in the seismic data caused by adversarial learning, we propose a new reconstruction loss, the Tanh Cross Entropy (TCE) loss, which provides smoother gradients. We experimentally verify the effectiveness of the individual components and then test the method on multiple publicly available datasets. It achieves reasonable reconstructions for up to 95% random discrete missing traces, 100 consecutive missing traces, and more complex hybrid missing patterns. In fault-rich and salt-body surveys, the method achieves promising reconstructions with up to 75% missing in each of the three directions (98.2% in total). Comment: This work has been submitted to a journal for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
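    The abstract names the TCE loss but not its formula. The snippet below is one plausible reading of a "tanh cross entropy" reconstruction loss, labeled as an assumption rather than the paper's exact definition: amplitudes in (-1, 1) are mapped to (0, 1) and penalized with a binary-cross-entropy-style term, whose gradient with respect to the prediction reduces to 2*(p - q) and therefore stays smooth where an L1 loss would kink.

        import torch

        def tce_loss(pred, target, eps=1e-6):
            # Map amplitudes from (-1, 1) to (0, 1), then apply a BCE-style
            # penalty with soft targets; hedged sketch of the TCE idea.
            p = (torch.tanh(pred) + 1.0) / 2.0
            q = (torch.tanh(target) + 1.0) / 2.0
            return -(q * torch.log(p + eps)
                     + (1 - q) * torch.log(1 - p + eps)).mean()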

    Domain-Aware Few-Shot Learning for Optical Coherence Tomography Noise Reduction

    Speckle noise is a long-studied problem in medical imaging. In recent years, there have been significant advances in leveraging deep learning methods for noise reduction. Nevertheless, adapting supervised learning models to unseen domains remains challenging. Specifically, deep neural networks (DNNs) trained for computational imaging tasks are vulnerable to changes in the acquisition system's physical parameters, such as sampling space, resolution, and contrast. Even within the same acquisition system, performance degrades across datasets of different biological tissues. In this work, we propose a few-shot supervised learning framework for optical coherence tomography (OCT) noise reduction that offers a dramatic increase in training speed and requires only a single image, or part of an image, with a corresponding speckle-suppressed ground truth. Furthermore, we formulate the domain-shift problem for diverse OCT imaging systems and prove that the output resolution of a trained despeckling model is determined by the source-domain resolution; we also provide possible remedies. We propose several practical implementations of our approach and verify and compare their applicability, robustness, and computational efficiency. Our results demonstrate significant potential for improving the sample complexity, generalization, and time efficiency of supervised models for coherent and non-coherent noise reduction, which can also be leveraged for other real-time computer vision applications.
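    To make the single-pair training regime concrete, here is a minimal, hedged sketch of few-shot adaptation: a small despeckling CNN is fitted on patches cropped from one (speckled, ground-truth) image pair. The architecture, patch scheme, and step counts are placeholders, not the paper's implementation.

        import torch
        import torch.nn as nn

        # Placeholder despeckling CNN; any denoiser could stand in here.
        net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        opt = torch.optim.Adam(net.parameters(), lr=1e-4)

        def random_patches(noisy, clean, n=16, k=64):
            """Sample n aligned k-by-k patches from one (1, H, W) pair."""
            H, W = noisy.shape[-2:]
            ys = torch.randint(0, H - k, (n,)).tolist()
            xs = torch.randint(0, W - k, (n,)).tolist()
            xb = torch.stack([noisy[..., y:y + k, x:x + k]
                              for y, x in zip(ys, xs)])
            yb = torch.stack([clean[..., y:y + k, x:x + k]
                              for y, x in zip(ys, xs)])
            return xb, yb

        def adapt(noisy, clean, steps=200):
            """Few-shot fit on a single image pair (tensors of shape
            (1, H, W)), supervised by the speckle-suppressed target."""
            for _ in range(steps):
                xb, yb = random_patches(noisy, clean)
                opt.zero_grad()
                loss = nn.functional.mse_loss(net(xb), yb)
                loss.backward()
                opt.step()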

    Accelerating inference in cosmology and seismology with generative models

    Statistical analyses in many physical sciences require running simulations of the system under examination. Such simulations provide information complementary to theoretical analytic models and are an invaluable tool for investigating the dynamics of complex systems. However, running simulations is often computationally expensive, and the large number of mock realizations required to obtain sufficient statistical precision can make the problem intractable. In recent years, machine learning has emerged as a possible way to speed up the generation of scientific simulations. Machine learning generative models are usually trained by iteratively feeding true simulations to the algorithm until it learns the important common features and can produce accurate simulations in a fraction of the time. In this thesis, advanced machine learning algorithms are explored and applied to the challenge of accelerating physical simulations. Various techniques are applied to problems in cosmology and seismology, and the benefits and limitations of this approach are examined through critical analysis. The algorithms are applied to compelling problems in these fields, including surrogate models for the seismic wave equation, the emulation of cosmological summary statistics, and the fast generation of large simulations of the Universe. These problems are formulated within a relevant statistical framework and tied to real data-analysis pipelines. The conclusions provide a critical overview of the results, together with an outlook on possible future extensions of the work presented in this thesis.
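    The emulation idea mentioned above is simple to sketch: train a cheap regression model on a modest set of expensive simulations, then replace the simulator with a forward pass. The input dimensionality, output binning, and architecture below are placeholders for illustration, not the thesis's specific emulators.

        import torch
        import torch.nn as nn

        # Hypothetical emulator: map 5 simulation parameters (e.g. a small
        # cosmological parameter vector) to a 50-bin summary statistic
        # (e.g. a binned power spectrum); both sizes are placeholders.
        emulator = nn.Sequential(
            nn.Linear(5, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, 50),
        )
        opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)

        def train(params, spectra, epochs=500):
            """params: (N, 5) simulation inputs; spectra: (N, 50) outputs
            produced by the full (expensive) simulation pipeline."""
            for _ in range(epochs):
                opt.zero_grad()
                loss = nn.functional.mse_loss(emulator(params), spectra)
                loss.backward()
                opt.step()

        # Once trained, emulator(new_params) costs a forward pass instead
        # of a full simulation run.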