
    Source and receiver deghosting by demigration-based supervised learning

    Deghosting of marine seismic data is an important and challenging step in the seismic processing flow. We describe a novel approach to train a supervised convolutional neural network to perform joint source and receiver deghosting of single-component (hydrophone) data. The training dataset is generated by demigration of stacked depth-migrated images into shot gathers with and without ghosts, using the actual source and receiver locations from a real survey. To create demigrated data with ghosts, we need an estimate of the depth of the sources and receivers and of the reflectivity of the sea surface. In the training process, we systematically perturb these parameters to create variability in the ghost timing and amplitude, and show that this makes the convolutional neural network more robust to variability in source/receiver depth, swells, and sea-surface reflectivity. We test the new method on the Marmousi synthetic data and real North Sea field data and show that, in some respects, it performs better than a standard deterministic deghosting method based on least-squares inversion in the τ-p domain. On the synthetic data, we also demonstrate the robustness of the new method to variations in swells and sea-surface reflectivity.
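    The training-data augmentation described above, perturbing ghost delay and amplitude, can be sketched with a simple vertical-incidence ghost model. The Python sketch below is a minimal illustration, not the paper's demigration workflow: the depth and reflectivity ranges, the constant water velocity, and the function names are all assumptions.

```python
import numpy as np

def add_ghost(trace, dt, depth, reflectivity, v_water=1500.0):
    """Apply a vertical-incidence ghost to a ghost-free trace.

    The ghost arrives delayed by the two-way travel through the water layer
    (2 * depth / v_water), polarity-flipped and scaled by the sea-surface
    reflectivity.
    """
    delay = int(round(2.0 * depth / v_water / dt))  # ghost delay in samples
    ghosted = trace.copy()
    ghosted[delay:] -= reflectivity * trace[:len(trace) - delay]
    return ghosted

def make_training_pair(trace, dt, rng):
    """Draw perturbed ghost parameters, mimicking the augmentation idea."""
    src_depth = rng.uniform(5.0, 9.0)    # hypothetical source-depth range (m)
    rcv_depth = rng.uniform(15.0, 25.0)  # hypothetical receiver-depth range (m)
    refl = rng.uniform(0.85, 1.0)        # hypothetical sea-surface reflectivity
    ghosted = add_ghost(add_ghost(trace, dt, src_depth, refl),
                        dt, rcv_depth, refl)
    return ghosted, trace  # (input with ghosts, ghost-free target)
```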

    Mutual-Guided Dynamic Network for Image Fusion

    Image fusion aims to generate a high-quality image from multiple images captured under varying conditions. The key problem of this task is to preserve complementary information while filtering out irrelevant information for the fused result. However, existing methods address this problem by leveraging static convolutional neural networks (CNNs), suffering from two inherent limitations during feature extraction: an inability to handle spatially variant content and a lack of guidance from multiple inputs. In this paper, we propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs. Specifically, we design a mutual-guided dynamic filter (MGDF) for adaptive feature extraction, composed of a mutual-guided cross-attention (MGCA) module and a dynamic filter predictor, where the former incorporates additional guidance from different inputs and the latter generates spatially variant kernels for different locations. In addition, we introduce a parallel feature fusion (PFF) module to effectively fuse local and global information of the extracted features. To further reduce the redundancy among the extracted features while preserving their shared structural information, we devise a novel loss function that combines the minimization of normalized mutual information (NMI) with an estimated gradient mask. Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks. The code and model are publicly available at https://github.com/Guanys-dar/MGDN.
    Comment: ACMMM 2023 accepted
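    As a rough illustration of the spatially variant filtering idea, the following PyTorch sketch predicts a small kernel per pixel from concatenated target and guidance features and applies it via unfold. The kernel size, softmax normalization, and single-layer predictor are simplifying assumptions, not the published MGDF design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilter(nn.Module):
    """Minimal dynamic-filter sketch: a small CNN predicts a k*k kernel per
    location from guidance features, and the kernel is applied to the target
    features via unfold, giving spatially variant filtering."""

    def __init__(self, channels, guide_channels, k=3):
        super().__init__()
        self.k = k
        self.predictor = nn.Conv2d(channels + guide_channels, k * k, 3, padding=1)

    def forward(self, x, guide):
        b, c, h, w = x.shape
        kernels = self.predictor(torch.cat([x, guide], dim=1))  # (B, k*k, H, W)
        kernels = F.softmax(kernels, dim=1)                     # normalize per pixel
        patches = F.unfold(x, self.k, padding=self.k // 2)      # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        out = (patches * kernels.view(b, 1, self.k * self.k, h * w)).sum(dim=2)
        return out.view(b, c, h, w)
```

    In contrast to a static convolution, whose weights are shared everywhere, the predicted kernel here changes from pixel to pixel and depends on the second input, which is the property the abstract attributes to the MGDF.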

    Locally Non-rigid Registration for Mobile HDR Photography

    Image registration for stack-based HDR photography is challenging. If not properly accounted for, camera motion and scene changes result in artifacts in the composite image. Unfortunately, existing methods to address this problem are either accurate but too slow for mobile devices, or fast but prone to failure. We propose a method that fills this void: our approach is extremely fast (under 700 ms on a commercial tablet for a pair of 5MP images) and prevents the artifacts that arise from insufficient registration quality.

    Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    Velocity model building is a critical step in seismic reflection data processing. An optimum velocity field can lead to well-focused images in the time or depth domains. Given the noisy and band-limited nature of seismic data, the computed velocity field can be considered our best estimate from a set of possible velocity fields. Hence, all the calculated depths and the images produced are only our best approximation of the true subsurface. This study examines the quantification of uncertainty in the depths to drilling targets from two-dimensional (2D) seismic reflection data using Bayesian statistics. The approach was tested in the Mentelle Basin (southwest of Australia), aiming to make depth predictions for stratigraphic targets of interest related to the International Ocean Discovery Program (IODP), leg 369. For the purposes of the project, Geoscience Australia 2D seismic profiles were reprocessed. To achieve robust predictions, the seismic reflection processing sequence focused on improving the temporal resolution of the data by using deterministic deghosting filters in the pre-stack and post-stack domains. The filters, combined with isotropic/anisotropic pre-stack time and depth migration algorithms, produced very good results in terms of seismic resolution and the focusing of subsurface features. The application of the deghosting filters was the critical step for the subsequent probabilistic depth estimation of drilling targets. The best estimate of the velocity field, along with the migrated seismic data, was used as input to the Bayesian algorithm. The analysis, performed on one seismic profile intersecting the site location MBAS-4A, produced robust depth predictions for the lithological boundaries of interest compared with the observed depths reported by the IODP expedition. The significance of the result is all the more pronounced given the complete lack of independent velocity information. Petrophysical information collected during the expedition was used to perform a well-seismic tie, mapping the lithological boundaries to the reflectivity in the seismic profile. A very good match between observed and modelled traces was achieved, and a new interpretation of the Mentelle Basin lithological boundaries in the seismic image was provided. Velocity information from sonic logs was also used to perform anisotropic pre-stack depth migration. The migrated image successfully mapped the subsurface targets to their correct depth locations while preserving the focus of the image. The pre-drilling depth estimation of subsurface targets using Bayesian statistics demonstrates how uncertainty in depth can be quantified and how seismic reflection data processing can be merged effectively with statistical analysis. The derived well-seismic tie at MBAS-4A will be a valuable tool towards a more complete regional interpretation of the Mentelle Basin.
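    As a toy illustration of the underlying idea, propagating velocity uncertainty into a depth prediction takes only a few lines of Monte Carlo sampling. The paper's Bayesian algorithm is considerably more elaborate; the traveltime, the velocity prior, and all numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a picked two-way traveltime (s) and a velocity prior
# expressing our uncertainty about the migration velocity field.
twt = 1.20                                      # two-way time to the horizon
v_samples = rng.normal(2000.0, 100.0, 50_000)   # assumed velocity prior (m/s)

# Forward model: vertical-incidence depth conversion z = v * t / 2.
depth_samples = v_samples * twt / 2.0

lo, hi = np.percentile(depth_samples, [5, 95])
print(f"depth = {depth_samples.mean():.0f} m, "
      f"90% interval [{lo:.0f}, {hi:.0f}] m")
```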

    Reduction of Vibration-Induced Artifacts in Synthetic Aperture Radar Imagery

    Target vibrations introduce nonstationary phase modulation, termed the micro-Doppler effect, into returned synthetic aperture radar (SAR) signals. This causes artifacts, or ghost targets, which appear near the vibrating targets in reconstructed SAR images. Recently, a vibration estimation method based on the discrete fractional Fourier transform (DFrFT) has been developed, which is capable of estimating instantaneous vibration accelerations and vibration frequencies. In this paper, a deghosting method for vibrating targets in SAR images is proposed. For single-component vibrations, the method first exploits the estimates provided by the DFrFT-based vibration estimation method to reconstruct the instantaneous vibration displacements. A reference signal, whose phase is modulated by the estimated vibration displacements, is then synthesized to compensate for the vibration-induced phase modulation in the returned SAR signals before forming the SAR image. The performance of the proposed method with respect to the signal-to-noise and signal-to-clutter ratios is analyzed using simulations. Experimental results using the Lynx SAR system show a substantial reduction in the ghosting caused by a 1.5-cm, 0.8-Hz target vibration in a real SAR image.
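    The compensation step lends itself to a direct sketch: a line-of-sight displacement d(t) modulates the returned phase by 4πd(t)/λ, so multiplying the slow-time signal by the conjugate of a synthesized reference removes it before image formation. A minimal NumPy sketch follows, assuming a hypothetical Ku-band wavelength; the DFrFT-based estimation of the displacement itself is not shown.

```python
import numpy as np

def compensate_vibration(slow_time_signal, d_vib, wavelength):
    """Remove vibration-induced micro-Doppler phase from a SAR slow-time signal.

    A line-of-sight displacement d(t) modulates the returned phase by
    4*pi*d(t)/wavelength; multiplying by the conjugate reference undoes it.
    """
    reference = np.exp(1j * 4.0 * np.pi * d_vib / wavelength)
    return slow_time_signal * np.conj(reference)

# Toy usage with a 1.5-cm, 0.8-Hz sinusoidal vibration (as in the paper's
# experiment) and an assumed Ku-band wavelength of ~1.8 cm.
t = np.linspace(0.0, 1.0, 1000)
d_vib = 0.015 * np.sin(2.0 * np.pi * 0.8 * t)       # estimated displacement (m)
signal = np.exp(1j * 4.0 * np.pi * d_vib / 0.018)   # vibration-only echo phase
clean = compensate_vibration(signal, d_vib, 0.018)  # phase is now ~constant
```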

    High Dynamic Range Imaging with Context-aware Transformer

    Avoiding the introduction of ghosts when synthesising high dynamic range (HDR) images from LDR images is a challenging task. Convolutional neural networks (CNNs) are effective for HDR ghost removal in general, but struggle with LDR images that contain large movements or oversaturation/undersaturation. Existing dual-branch methods combining CNNs and Transformers omit part of the information from non-reference images, while the features extracted by the CNN-based branch are bound to a kernel size with a small receptive field, which is detrimental to deblurring and to the recovery of oversaturated/undersaturated regions. In this paper, we propose a novel hierarchical dual-Transformer method for ghost-free HDR image generation (HDT-HDR), which extracts global and local features simultaneously. First, we use a CNN-based head with spatial attention mechanisms to extract features from all the LDR images. Second, the LDR features are delivered to the Hierarchical Dual Transformer (HDT). In each Dual Transformer (DT), global features are extracted by a window-based Transformer, while local details are extracted using a channel attention mechanism with deformable CNNs. Finally, the ghost-free HDR image is obtained by dimensional mapping on the HDT output. Extensive experiments demonstrate that HDT-HDR achieves state-of-the-art performance among existing HDR ghost removal methods.
    Comment: 8 pages, 5 figures
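    The local branch pairing channel attention with a convolutional path can be sketched as follows. A plain 3x3 convolution stands in for the deformable convolution to keep the sketch dependency-free, and the squeeze-and-excitation layout is an assumption rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class LocalBranch(nn.Module):
    """Sketch of a local-detail branch: channel attention gating a conv path."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: per-channel stats
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel gate in [0, 1]
        )

    def forward(self, x):
        y = self.conv(x)
        return y * self.attn(y)                            # reweight local features
```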

    Alignment-free HDR Deghosting with Semantics Consistent Transformer

    High dynamic range (HDR) imaging aims to retrieve information from multiple low-dynamic-range inputs to generate realistic output. The essence is to leverage contextual information, including both dynamic and static semantics, for better image generation. Existing methods often focus on the spatial misalignment across input frames caused by foreground and/or camera motion, but no prior work leverages the dynamic and static context jointly. To delve into this problem, we propose a novel alignment-free network with a Semantics-Consistent Transformer (SCTNet), featuring both spatial and channel attention modules. The spatial attention handles the intra-image correlation to model dynamic motion, while the channel attention enables inter-image intertwining to enhance the semantic consistency across frames. In addition, we introduce a novel realistic HDR dataset with more variation in foreground objects and environmental factors, and with larger motions. Extensive comparisons on both conventional datasets and ours validate the effectiveness of our method, which achieves the best trade-off between performance and computational cost.
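    The inter-image channel attention can be sketched as a "transposed" attention in which channels from all exposures act as tokens and attend over their flattened spatial responses, so the cost does not grow with image resolution. The single-head layout and 1x1-convolution projections below are assumptions, not the published SCTNet module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossFrameChannelAttention(nn.Module):
    """Channel-attention sketch: channels of all input frames attend to one
    another, exchanging semantics between images without spatial alignment."""

    def __init__(self, channels, num_frames):
        super().__init__()
        dim = channels * num_frames
        self.qkv = nn.Conv2d(dim, dim * 3, 1, bias=False)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, feats):
        # feats: (B, F, C, H, W) -> stack exposures along the channel axis
        b, f, c, h, w = feats.shape
        x = feats.reshape(b, f * c, h, w)
        q, k, v = self.qkv(x).chunk(3, dim=1)            # each (B, F*C, H, W)
        q = F.normalize(q.flatten(2), dim=-1)            # (B, F*C, H*W)
        k = F.normalize(k.flatten(2), dim=-1)
        attn = (q @ k.transpose(1, 2)).softmax(dim=-1)   # (B, F*C, F*C)
        out = attn @ v.flatten(2)                        # mix channels across frames
        return self.proj(out.view(b, f * c, h, w)).view(b, f, c, h, w)
```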