Explainable Artificial Intelligence driven mask design for self-supervised seismic denoising
The presence of coherent noise in seismic data leads to errors and
uncertainties, and as such it is paramount to suppress noise as early and
efficiently as possible. Self-supervised denoising circumvents the common
requirement of deep learning procedures of having noisy-clean training pairs.
However, self-supervised coherent noise suppression methods require extensive
knowledge of the noise statistics. We propose the use of explainable artificial
intelligence approaches to see inside the black box that is the denoising
network and use the gained knowledge to replace the need for any prior
knowledge of the noise itself. This is achieved in practice by leveraging
bias-free networks and the direct linear link between input and output provided
by the associated Jacobian matrix; we show that simply averaging the
Jacobian contributions over a number of randomly selected input pixels
provides an indication of the most effective mask for suppressing the noise
present in the data. The proposed method therefore becomes a fully automated denoising
procedure requiring no clean training labels or prior knowledge. Realistic
synthetic examples with noise signals of varying complexities, ranging from
simple time-correlated noise to complex pseudo rig noise propagating at the
velocity of the ocean, are used to validate the proposed approach. Its
automated nature is highlighted further by an application to two field
datasets. Without any substantial pre-processing or any knowledge of the
acquisition environment, the automatically identified blind-masks are shown to
perform well in suppressing both trace-wise noise in common shot gathers from
the Volve marine dataset and colored noise in post-stack seismic images from a
land seismic survey.
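The Jacobian-averaging idea from the abstract can be sketched numerically. In the toy example below, the bias-free "network" is replaced by a simple 1-D convolution so that its Jacobian is explicit; the function names, kernel, and threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Toy sketch: for a bias-free denoiser the output is y = J(x) @ x, so row i
# of the Jacobian shows which input pixels influence output pixel i.
# Averaging re-centred |Jacobian| rows over random pixels reveals the noise
# "footprint", which is then thresholded into a blind-mask.

def toy_denoiser_jacobian(n, kernel):
    """Jacobian of a 1-D convolution standing in for a bias-free network."""
    J = np.zeros((n, n))
    k = len(kernel) // 2
    for i in range(n):
        for j, w in enumerate(kernel):
            col = i + j - k
            if 0 <= col < n:
                J[i, col] = w
    return J

def average_jacobian_footprint(J, n_samples, half_width, rng):
    """Average |Jacobian| rows, re-centred on randomly chosen pixels."""
    n = J.shape[0]
    acc = np.zeros(2 * half_width + 1)
    for _ in range(n_samples):
        i = rng.integers(half_width, n - half_width)
        acc += np.abs(J[i, i - half_width:i + half_width + 1])
    return acc / n_samples

rng = np.random.default_rng(0)
J = toy_denoiser_jacobian(64, np.array([0.05, 0.2, 0.5, 0.2, 0.05]))
footprint = average_jacobian_footprint(J, n_samples=50, half_width=4, rng=rng)
# Pixels whose averaged contribution exceeds a threshold are hidden from the
# network during self-supervised training (the "blind" part of the mask).
mask = footprint > 0.1 * footprint.max()
print(mask.astype(int))  # → [0 0 0 1 1 1 0 0 0]
```

With a real denoising network the Jacobian rows would be obtained by automatic differentiation rather than constructed analytically, but the averaging and thresholding steps are the same.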
A self-supervised scheme for ground roll suppression
In recent years, self-supervised procedures have advanced the field of
seismic noise attenuation, as they do not require the massive amount of clean
labeled data in the training stage that is unobtainable for seismic
data. However, current self-supervised methods usually target simple noise
types, such as random and trace-wise noise, rather than complicated, aliased
ground roll. Here, we propose an adaptation of a self-supervised procedure,
namely, blind-fan networks, to remove aliased ground roll within seismic shot
gathers without any requirement for clean data. The self-supervised denoising
procedure is implemented by designing a noise mask with a predefined direction
to prevent the network from exploiting the coherency of the ground roll while
predicting one pixel's value. Numerical experiments on synthetic and field
seismic data demonstrate that our method can effectively attenuate aliased
ground roll.
Comment: 19 pages, 12 figures
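The directional ("fan") mask described above can be sketched as follows. The slowness bounds and shapes are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Minimal sketch of a directional noise mask, assuming the ground roll
# propagates with an apparent slowness between s_min and s_max samples per
# trace. Pixels falling inside that fan around the target pixel are hidden
# from the network, so it cannot exploit the ground roll's coherency when
# predicting the target value.

def fan_mask(n_t, n_x, centre, s_min, s_max):
    """Return a boolean mask over (time, trace): True = pixel hidden."""
    t0, x0 = centre
    mask = np.zeros((n_t, n_x), dtype=bool)
    for t in range(n_t):
        for x in range(n_x):
            dx = x - x0
            dt = t - t0
            if dx == 0:
                # On the target's own trace, hide only the target pixel.
                mask[t, x] = (dt == 0)
            else:
                slope = dt / dx  # apparent slowness in samples per trace
                if s_min <= abs(slope) <= s_max:
                    mask[t, x] = True
    return mask

m = fan_mask(n_t=7, n_x=7, centre=(3, 3), s_min=0.5, s_max=2.0)
print(m.astype(int))  # fan of hidden pixels opening along the noise dip
```

Training then proceeds as in other blind-mask schemes: the network predicts the centre pixel from the unmasked context only.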
Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising
Significant progress has been made in self-supervised image denoising (SSID)
in recent years. However, most methods focus on dealing with spatially
independent noise, and they have little practicality for real-world sRGB images
with spatially correlated noise. Although pixel-shuffle downsampling has been
suggested for breaking the noise correlation, it breaks the original
information of images, which limits the denoising performance. In this paper,
we propose a novel perspective to solve this problem, i.e., seeking for
spatially adaptive supervision for real-world sRGB image denoising.
Specifically, we take into account the respective characteristics of flat and
textured regions in noisy images, and construct supervisions for them
separately. For flat areas, the supervision can be safely derived from
non-adjacent pixels, which are far enough from the current pixel to exclude
the influence of noise-correlated ones; we extend the blind-spot
network to a blind-neighborhood network (BNN) to provide supervision on flat
areas. For textured regions, the supervision has to be closely related to the
content of adjacent pixels; we present a locally aware network (LAN) to
meet this requirement, while LAN itself is selectively supervised with the
output of BNN. Combining these two supervisions, a denoising network (e.g.,
U-Net) can be well-trained. Extensive experiments show that our method performs
favorably against state-of-the-art SSID methods on real-world sRGB photographs.
The code is available at https://github.com/nagejacob/SpatiallyAdaptiveSSID.
Comment: CVPR 2023 Camera Ready
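The flat/textured split behind the BNN/LAN scheme can be illustrated with a local-variance classifier. The stand-in estimates, threshold, and names below are ours, for illustration only:

```python
import numpy as np

# Sketch of spatially adaptive supervision: classify each pixel as flat or
# textured from its local standard deviation, then pick a different
# supervision source per region. The BNN and LAN of the paper are replaced
# here by trivial stand-in estimates.

def texture_map(img, r, thresh):
    """True where the local patch std exceeds `thresh` (textured region)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = patch.std() > thresh
    return out

def adaptive_supervision(textured, lan_est, bnn_est):
    """Per-pixel target: LAN-style estimate on texture, BNN-style on flat."""
    return np.where(textured, lan_est, bnn_est)

# Toy image: flat left half, checkerboard (textured) right half.
img = np.zeros((8, 8))
yy, xx = np.indices(img.shape)
img[:, 4:] = ((yy + xx) % 2)[:, 4:]

textured = texture_map(img, r=1, thresh=0.1)
target = adaptive_supervision(textured, lan_est=img,
                              bnn_est=np.full_like(img, img.mean()))
```

In the actual method the two supervision signals come from trained networks (BNN and LAN) rather than fixed estimates, and the combined target trains a standard denoiser such as a U-Net.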
Semi-blind-trace algorithm for self-supervised attenuation of trace-wise coherent noise
Trace-wise noise is a type of noise often seen in seismic data, characterized by vertical coherency and horizontal incoherency. When self-supervised deep learning is used to attenuate this type of noise, conventional blind-trace deep learning trains a network to blindly reconstruct each trace in the data from its surrounding traces; it attenuates isolated trace-wise noise but causes signal leakage in clean and noisy traces and reconstruction errors next to each noisy trace. To reduce signal leakage and improve denoising, we propose a new loss function and masking procedure in a semi-blind-trace deep learning framework. Our hybrid loss function has weighted active zones that cover masked and non-masked traces, so the network is not blinded to clean traces during their reconstruction. During training, we dynamically change the masks' characteristics, with the goal of training the network to learn the characteristics of the signal instead of the noise. The proposed algorithm enables the designed U-net to detect and attenuate trace-wise noise without prior information about the noise. A new hyperparameter of our method is the relative weight between the masked and non-masked traces' contributions to the loss function. Numerical experiments show that selecting a small value for this parameter is enough to significantly decrease signal leakage. The proposed algorithm is tested on synthetic and real offshore and land datasets with different noises. The results show the superb ability of the method to attenuate trace-wise noise while preserving other events. An implementation of the proposed algorithm as Python code is also made available.
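The hybrid loss described in the abstract can be sketched as a weighted per-trace MSE. The weight name `alpha` is ours; the abstract only says the relative weight is a new hyperparameter:

```python
import numpy as np

# Sketch of the semi-blind-trace hybrid loss: masked (blinded) traces get
# full weight, non-masked traces get weight `alpha`. Keeping alpha small but
# non-zero means the network is not completely blinded to clean traces,
# which is what reduces signal leakage.

def semi_blind_trace_loss(pred, target, trace_mask, alpha=0.1):
    """Weighted MSE over the traces of a (n_time, n_traces) gather.

    trace_mask: boolean per trace, True where the input trace was masked.
    """
    per_trace = ((pred - target) ** 2).mean(axis=0)   # MSE of each trace
    weights = np.where(trace_mask, 1.0, alpha)        # weighted active zones
    return float((weights * per_trace).sum() / weights.sum())

# Usage: a 4-trace gather where trace 1 was masked during training.
target = np.zeros((100, 4))
pred = np.zeros((100, 4))
pred[:, 1] = 0.5  # reconstruction error on the masked trace
loss = semi_blind_trace_loss(pred, target,
                             np.array([False, True, False, False]))
```

With `alpha = 0` this reduces to the conventional fully blind-trace loss; the abstract's finding is that a small positive value already suppresses most of the leakage.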
Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations
Understanding the processes of cellular development and the interplay of cell shape changes, division and migration requires investigation of developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as available photon budget, sample sensitivity, etc. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or due to imperfections in the camera sensor and internal electronics. The noisy nature of images as well as the artefacts prohibit accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal and segmentation, supervised Deep Learning (DL) based content-aware algorithms are currently the best performing for all these tasks.
Supervised DL based methods are plagued by many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high quality images for training. Obtaining such image pairs can be very hard and virtually impossible in most biomedical imaging applications owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation.
The first part of this thesis deals with unsupervised image denoising and artefact removal. For unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications.
In the second part of this thesis, the problem of cell/nucleus segmentation is addressed. The focus is especially on practical scenarios where ground truth annotations for training DL based segmentation methods are scarcely available. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations. Several training strategies are presented in this work to leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof-of-concept is introduced which views cell/nucleus segmentation from the perspective of solving a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus using only a pool of diverse (and possibly faulty) segmentation hypotheses as input.
In summary, this thesis seeks to introduce new unsupervised denoising and artefact removal methods as well as semi-supervised segmentation methods which can be easily deployed to directly and immediately benefit biomedical practitioners with their research
Direct Unsupervised Denoising
Traditional supervised denoisers are trained using pairs of noisy input and
clean target images. They learn to predict a central tendency of the posterior
distribution over possible clean images. When, e.g., trained with the popular
quadratic loss function, the network's output will correspond to the minimum
mean square error (MMSE) estimate. Unsupervised denoisers based on Variational
AutoEncoders (VAEs) have succeeded in achieving state-of-the-art results while
requiring only unpaired noisy data as training input. In contrast to the
traditional supervised approach, unsupervised denoisers do not directly produce
a single prediction, such as the MMSE estimate, but allow us to draw samples
from the posterior distribution of clean solutions corresponding to the noisy
input. To approximate the MMSE estimate during inference, unsupervised methods
have to create and draw a large number of samples - a computationally expensive
process - rendering the approach inapplicable in many situations. Here, we
present an alternative approach that trains a deterministic network alongside
the VAE to directly predict a central tendency. Our method achieves results
that surpass those of the purely unsupervised method at a fraction of the
computational cost.
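The cost argument above rests on the fact that the MMSE estimate is the posterior mean, which a sampling-based denoiser must approximate by averaging many draws. A toy illustration with a known Gaussian "posterior" (an assumption for this sketch, not the paper's model):

```python
import numpy as np

# The MMSE estimate equals the posterior mean, so a VAE-based unsupervised
# denoiser must draw and average many samples, whereas a deterministic
# network could emit that mean in a single forward pass. Here the posterior
# over clean values is a simple Gaussian.

rng = np.random.default_rng(42)
posterior_mean, posterior_std = 3.0, 1.0

def mmse_by_sampling(n_samples):
    """Monte Carlo estimate of the posterior mean from n_samples draws."""
    samples = rng.normal(posterior_mean, posterior_std, size=n_samples)
    return samples.mean()

for n in (10, 100, 10_000):
    print(n, mmse_by_sampling(n))  # estimate approaches 3.0 as n grows
# A deterministic "direct" predictor would output posterior_mean at once,
# avoiding the O(n) sampling cost entirely.
```

The Monte Carlo error shrinks only as 1/sqrt(n), which is why thousands of samples (and network passes) are needed per image, and why predicting the mean directly is so much cheaper.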
Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction
In this thesis I will use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data.
Then, I will present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations.
Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT).
In the following paragraphs I briefly summarize the individual contributions.
Electron microscopy is the go-to method for high-resolution images in biological research.
Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses.
However, slow scanning speeds are required to obtain SEM images of sufficient quality.
In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and use them to train content-aware image restoration (CARE) networks.
Once such a network is trained, it can be applied to noisy data to restore high quality images.
With SEM-CARE I present how this approach can be directly applied to SEM data, allowing us to scan the samples faster and resulting in multi-fold imaging speedups for SEM imaging.
In structural biology cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions.
However, missing contrast agents as well as beam induced sample damage (Knapek and Dubochet 1980) prevent acquisition of high quality projection images.
Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult; it often has to be done manually.
To facilitate downstream analysis and manual data browsing of cryo tomograms, I present cryoCARE, a Noise2Noise (Lehtinen et al. 2018) based denoising method that is able to restore high-contrast, low-noise tomograms from sparse-view low-dose tilt-series.
An implementation of cryoCARE is publicly available as Scipion (de la Rosa-TrevĂn et al. 2016) plugin.
Next, I will discuss the problem of self-supervised image denoising.
With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, hence the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied.
However, acquiring multiple noisy observations is not always possible, e.g., in live imaging, with old cryo TEM cameras, or simply due to lack of access to the imaging system used.
In such cases we have to fall back on self-supervised denoising methods, and with Noise2Void I present the first self-supervised, neural-network-based image denoising approach.
Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012).
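The core of Noise2Void-style training is the blind-spot masking step, which can be sketched as follows. The mask fraction and neighbour radius are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of blind-spot masking: a few random pixels are replaced by
# values from random neighbours, and the training loss is computed only at
# those masked positions, so the network cannot learn the identity mapping
# from a single noisy image.

def n2v_mask(img, n_masked, radius, rng):
    """Return (masked_img, coords): the inputs for one training step."""
    out = img.copy()
    h, w = img.shape
    ys = rng.integers(radius, h - radius, n_masked)
    xs = rng.integers(radius, w - radius, n_masked)
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:  # never copy the pixel onto itself
            dy = int(rng.integers(-radius, radius + 1))
            dx = int(rng.integers(-radius, radius + 1))
        out[y, x] = img[y + dy, x + dx]
    return out, list(zip(ys.tolist(), xs.tolist()))

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
masked, coords = n2v_mask(img, n_masked=8, radius=2, rng=rng)
# The loss would be the MSE between network(masked) and img, evaluated
# only at the positions in `coords`.
```

Because the noise is assumed pixel-wise independent while the signal is not, predicting a hidden pixel from its neighbours recovers the signal but not the noise.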
In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks.
I develop a novel 1D image encoding based on the Fourier transform where each prefix encodes the whole image at reduced resolution, which I call Fourier Domain Encoding (FDE).
I use FIT with FDEs and present proof of concept for super-resolution and tomographic reconstruction with missing wedge correction.
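The prefix property of the FDE can be demonstrated with a plain FFT: keeping only the low-frequency prefix of the coefficients and inverting yields the same signal at reduced resolution. A 1-D sketch (FIT itself works on 2-D images; the signal here is an illustrative assumption):

```python
import numpy as np

# Each prefix of Fourier coefficients, ordered from low to high frequency,
# encodes the whole signal at reduced resolution. A sequence of such
# coefficients is therefore a coarse-to-fine description that an
# autoregressive Transformer can consume.

def fde_prefix_reconstruction(signal, n_coeffs):
    """Reconstruct `signal` from its first n_coeffs rFFT coefficients."""
    spectrum = np.fft.rfft(signal)
    truncated = np.zeros_like(spectrum)
    truncated[:n_coeffs] = spectrum[:n_coeffs]
    return np.fft.irfft(truncated, n=len(signal))

t = np.linspace(0, 1, 64, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)
coarse = fde_prefix_reconstruction(signal, n_coeffs=8)   # keeps the 3 Hz part
full = fde_prefix_reconstruction(signal, n_coeffs=33)    # all 33 coefficients
```

Predicting the missing high-frequency coefficients from the known low-frequency prefix is exactly the formulation FIT uses for super-resolution and missing wedge correction.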
The missing wedge artefacts in tomographic imaging originate in sparse-view imaging.
Sparse-view imaging is used to keep the total exposure of the imaged sample to a minimum, by only acquiring a limited number of projection images.
However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in the Fourier space and visible as streaking artefacts in real image space.
I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients.
Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.
Contents
Summary
Acknowledgements
1 Introduction
1.1 Scanning Electron Microscopy
1.2 Cryo Transmission Electron Microscopy
1.2.1 Single Particle Analysis
1.2.2 Cryo Tomography
1.3 Tomographic Reconstruction
1.4 Overview and Contributions
2 Denoising in Electron Microscopy
2.1 Image Denoising
2.2 Supervised Image Restoration
2.2.1 Training and Validation Loss
2.2.2 Neural Network Architectures
2.3 SEM-CARE
2.3.1 SEM-CARE Experiments
2.3.2 SEM-CARE Results
2.4 Noise2Noise
2.5 cryoCARE
2.5.1 Restoration of cryo TEM Projections
2.5.2 Restoration of cryo TEM Tomograms
2.5.3 Automated Downstream Analysis
2.6 Implementations and Availability
2.7 Discussion
2.7.1 Tasks Facilitated through cryoCARE
3 Noise2Void: Self-Supervised Denoising
3.1 Probabilistic Image Formation
3.2 Receptive Field
3.3 Noise2Void Training
3.3.1 Implementation Details
3.4 Experiments
3.4.1 Natural Images
3.4.2 Light Microscopy Data
3.4.3 Electron Microscopy Data
3.4.4 Errors and Limitations
3.5 Conclusion and Followup Work
4 Fourier Image Transformer
4.1 Transformers
4.1.1 Attention Is All You Need
4.1.2 Fast-Transformers
4.1.3 Transformers in Computer Vision
4.2 Methods
4.2.1 Fourier Domain Encodings (FDEs)
4.2.2 Fourier Coefficient Loss
4.3 FIT for Super-Resolution
4.3.1 Super-Resolution Data
4.3.2 Super-Resolution Experiments
4.4 FIT for Tomography
4.4.1 Computed Tomography Data
4.4.2 Computed Tomography Experiments
4.5 Discussion
5 Conclusions and Outlook
Generalizable Denoising of Microscopy Images using Generative Adversarial Networks and Contrastive Learning
Microscopy images often suffer from high levels of noise, which can hinder
further analysis and interpretation. Content-aware image restoration (CARE)
methods have been proposed to address this issue, but they often require large
amounts of training data and suffer from over-fitting. To overcome these
challenges, we propose a novel framework for few-shot microscopy image
denoising. Our approach combines a generative adversarial network (GAN) trained
via contrastive learning (CL) with two structure preserving loss terms
(Structural Similarity Index and Total Variation loss) to further improve the
quality of the denoised images using little data. We demonstrate the
effectiveness of our method on three well-known microscopy imaging datasets,
and show that we can drastically reduce the amount of training data while
retaining the quality of the denoising, thus alleviating the burden of
acquiring paired data and enabling few-shot learning. The proposed framework
can be easily extended to other image restoration tasks and has the potential
to significantly advance the field of microscopy image analysis.
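The two structure-preserving loss terms named in the abstract can be sketched directly. The single-window (global) SSIM and the weights below are simplifying assumptions; real implementations use windowed SSIM and fold these terms into the GAN objective:

```python
import numpy as np

# Sketch of the structure-preserving terms: a global SSIM similarity and an
# isotropic total-variation (TV) penalty, combined with illustrative
# weights into one loss on the denoised image.

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM over the whole image (a simplification)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def tv_loss(x):
    """Mean absolute difference of neighbouring pixels (smoothness prior)."""
    return np.abs(np.diff(x, axis=0)).mean() + np.abs(np.diff(x, axis=1)).mean()

def structure_loss(denoised, reference, w_ssim=1.0, w_tv=0.1):
    """Combined loss: penalize structural dissimilarity and roughness."""
    return w_ssim * (1.0 - ssim_global(denoised, reference)) + w_tv * tv_loss(denoised)
```

In the few-shot setting, these terms act as regularizers that keep the GAN's output structurally faithful even when very little paired training data is available.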