120 research outputs found

    Blind Image Denoising using Supervised and Unsupervised Learning

    Image denoising is an important problem in image processing and computer vision. In real-world applications, denoising is often a pre-processing step (a so-called low-level vision task) before higher-level tasks such as image segmentation, object detection, and recognition. Traditional image denoising algorithms often make idealistic assumptions about the noise (e.g., additive white Gaussian or Poisson). However, the noise in real-world images such as high-ISO photos and microscopic fluorescence images is more complex, and the performance of those traditional approaches degrades rapidly on real-world data. Such blind image denoising has remained an open problem in the literature. In this project, we report two competing approaches to blind image denoising: supervised and unsupervised learning. We report the principles, performance, differences, merits, and technical potential of a few blind denoising algorithms. The supervised approach trains a regression model, such as a CNN, on a large number of pairs of corrupted and clean images; this feed-forward convolutional neural network separates the noise from the image. CNNs are attractive because of their deep architecture for exploiting image characteristics, the parallel computation possible on modern GPUs, and advances in regularization and training methods. The integration of residual learning and batch normalization is effective in speeding up training and improving denoising performance. Here we apply basic statistical reasoning to signal reconstruction in order to map corrupted observations to clean targets. Recently, a few deep learning algorithms have been investigated that do not require ground-truth training images. Noise2Noise (N2N) is an unsupervised training method created for various applications, including denoising of Gaussian and Poisson noise. In the N2N model, we observe that we can often learn to turn bad images into good images just by looking at bad images. An experimental study is conducted on the practical properties of noisy-target training, which reaches performance levels close to training with clean targets. Noise2Void (N2V) is a self-supervised method that goes one step further: it requires neither clean images nor pairs of noisy images for training and is trained directly on the single noisy image that is to be denoised, which other methods cannot do. This is useful for datasets where neither noisy image pairs nor clean training images are available, e.g., biomedical image data.
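    The masking trick behind Noise2Void can be summarized in a few lines. Below is a minimal sketch, assuming PyTorch and some fully convolutional network `net`; the number of masked pixels, the neighbour-sampling radius, and the function name are illustrative choices, not the exact scheme used in the project described above. A few pixels of the noisy input are replaced by random neighbours, and the loss is evaluated only at those pixels, so the network cannot learn the identity mapping and instead learns to predict the underlying signal.

        import torch
        import torch.nn.functional as F

        def n2v_training_step(net, noisy, n_masked=64, radius=2):
            """One Noise2Void-style step: hide a few pixels, then predict them.

            noisy: (B, 1, H, W) tensor holding single noisy images; no clean
            targets are used anywhere in this loss.
            """
            B, _, H, W = noisy.shape
            inputs = noisy.clone()
            mask = torch.zeros_like(noisy)
            for b in range(B):
                ys = torch.randint(radius, H - radius, (n_masked,))
                xs = torch.randint(radius, W - radius, (n_masked,))
                for y, x in zip(ys, xs):
                    # Replace the centre pixel with a random neighbour so its own
                    # value cannot leak into the prediction (the "blind spot").
                    # A real implementation would exclude the zero offset.
                    dy = torch.randint(-radius, radius + 1, (1,)).item()
                    dx = torch.randint(-radius, radius + 1, (1,)).item()
                    inputs[b, 0, y, x] = noisy[b, 0, y + dy, x + dx]
                    mask[b, 0, y, x] = 1.0
            pred = net(inputs)
            # MSE only on the masked pixels; the target is the original noisy value.
            return F.mse_loss(pred * mask, noisy * mask, reduction="sum") / mask.sum()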

    Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction

    In this thesis I will use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. Then, I will present Noise2Void, a deep-learning-based self-supervised image denoising approach which is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). In the next paragraphs I will summarize the individual contributions briefly. Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and used to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high-quality images. With SEM-CARE I present how this approach can be directly applied to SEM data, allowing us to scan the samples faster and resulting in 40- to 50-fold imaging speedups for SEM imaging. In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, missing contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult; it often has to be done manually. To facilitate downstream analysis and manual data browsing of cryo tomograms, I present cryoCARE, a Noise2Noise-based (Lehtinen et al. 2018) denoising method which is able to restore high-contrast, low-noise tomograms from sparse-view, low-dose tilt-series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin. Next, I will discuss the problem of self-supervised image denoising. With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, hence the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with old cryo TEM cameras, or simply due to lack of access to the imaging system used. In such cases we have to fall back to self-supervised denoising methods, and with Noise2Void I present the first self-supervised, neural-network-based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012). In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform, where each prefix encodes the whole image at reduced resolution, which I call the Fourier Domain Encoding (FDE). I use FIT with FDEs and present proofs of concept for super-resolution and tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging.
Sparse-view imaging is used to keep the total exposure of the imaged sample to a minimum by only acquiring a limited number of projection images. However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in Fourier space and visible as streaking artefacts in real image space. I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients. Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.

Contents:
Summary
Acknowledgements
1 Introduction
  1.1 Scanning Electron Microscopy
  1.2 Cryo Transmission Electron Microscopy
    1.2.1 Single Particle Analysis
    1.2.2 Cryo Tomography
  1.3 Tomographic Reconstruction
  1.4 Overview and Contributions
2 Denoising in Electron Microscopy
  2.1 Image Denoising
  2.2 Supervised Image Restoration
    2.2.1 Training and Validation Loss
    2.2.2 Neural Network Architectures
  2.3 SEM-CARE
    2.3.1 SEM-CARE Experiments
    2.3.2 SEM-CARE Results
  2.4 Noise2Noise
  2.5 cryoCARE
    2.5.1 Restoration of cryo TEM Projections
    2.5.2 Restoration of cryo TEM Tomograms
    2.5.3 Automated Downstream Analysis
  2.6 Implementations and Availability
  2.7 Discussion
    2.7.1 Tasks Facilitated through cryoCARE
3 Noise2Void: Self-Supervised Denoising
  3.1 Probabilistic Image Formation
  3.2 Receptive Field
  3.3 Noise2Void Training
    3.3.1 Implementation Details
  3.4 Experiments
    3.4.1 Natural Images
    3.4.2 Light Microscopy Data
    3.4.3 Electron Microscopy Data
    3.4.4 Errors and Limitations
  3.5 Conclusion and Followup Work
4 Fourier Image Transformer
  4.1 Transformers
    4.1.1 Attention Is All You Need
    4.1.2 Fast-Transformers
    4.1.3 Transformers in Computer Vision
  4.2 Methods
    4.2.1 Fourier Domain Encodings (FDEs)
    4.2.2 Fourier Coefficient Loss
  4.3 FIT for Super-Resolution
    4.3.1 Super-Resolution Data
    4.3.2 Super-Resolution Experiments
  4.4 FIT for Tomography
    4.4.1 Computed Tomography Data
    4.4.2 Computed Tomography Experiments
  4.5 Discussion
5 Conclusions and Outlook
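    To make the prefix property of the Fourier Domain Encoding concrete, here is a small numpy sketch under my own assumptions (the ordering by radial frequency and the function names are illustrative, and details such as coefficient normalization are omitted). Keeping only a prefix of the encoding and inverting it yields the same image at reduced resolution:

        import numpy as np

        def fourier_domain_encoding(img):
            """Flatten a 2D spectrum into a 1D sequence, low frequencies first."""
            spec = np.fft.fftshift(np.fft.fft2(img))
            h, w = spec.shape
            yy, xx = np.mgrid[:h, :w]
            radius = np.hypot(yy - h // 2, xx - w // 2)   # distance from the DC term
            order = np.argsort(radius.ravel())            # radial-frequency ordering
            return spec.ravel()[order], order

        def decode_prefix(coeffs, order, shape, n):
            """Invert a length-n prefix; all remaining coefficients are zeroed."""
            spec = np.zeros(np.prod(shape), dtype=complex)
            spec[order[:n]] = coeffs[:n]
            return np.real(np.fft.ifft2(np.fft.ifftshift(spec.reshape(shape))))

        img = np.random.rand(64, 64)                       # stand-in for a projection image
        coeffs, order = fourier_domain_encoding(img)
        low_res = decode_prefix(coeffs, order, img.shape, n=len(coeffs) // 8)

    In FIT the Transformer is then trained to extend such a prefix, i.e. to predict the missing higher-frequency (or, in tomography, missing-wedge) coefficients.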

    Noise2Contrast: Multi-Contrast Fusion Enables Self-Supervised Tomographic Image Denoising

    Self-supervised image denoising techniques emerged as convenient methods that allow training denoising models without requiring ground-truth noise-free data. Existing methods usually optimize loss metrics that are calculated from multiple noisy realizations of similar images, e.g., from neighboring tomographic slices. However, those approaches fail to utilize the multiple contrasts that are routinely acquired in medical imaging modalities like MRI or dual-energy CT. In this work, we propose the new self-supervised training scheme Noise2Contrast, which combines information from multiple measured image contrasts to train a denoising model. We stack denoising with domain-transfer operators to utilize the independent noise realizations of different image contrasts to derive a self-supervised loss. The trained denoising operator achieves convincing quantitative and qualitative results, outperforming state-of-the-art self-supervised methods by 4.7-11.0%/4.8-7.3% (PSNR/SSIM) on brain MRI data and by 43.6-50.5%/57.1-77.1% (PSNR/SSIM) on dual-energy CT X-ray microscopy data with respect to the noisy baseline. Our experiments on different real measured data sets indicate that Noise2Contrast training generalizes to other multi-contrast imaging modalities.
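    Read from the abstract, the core of Noise2Contrast can be sketched as follows; the module names, the choice of an L2 loss, and the exact form of the domain-transfer operator are my assumptions for illustration, not the authors' implementation. The denoiser processes one contrast, a transfer operator maps the result towards a second contrast, and the measured noisy second contrast serves as the regression target, its noise being independent of the noise in the input:

        import torch.nn.functional as F

        def noise2contrast_loss(denoiser, transfer, noisy_a, noisy_b):
            """Self-supervised loss from two noisy contrasts of the same anatomy.

            noisy_a, noisy_b: (B, 1, H, W) tensors, e.g. two MRI contrasts or two
            energy bins of a dual-energy CT scan, registered to each other.
            """
            denoised_a = denoiser(noisy_a)        # the network we actually want to keep
            mapped_to_b = transfer(denoised_a)    # learned domain-transfer operator
            # The noise in contrast B is independent of the noise in contrast A,
            # so regressing onto the noisy B measurement acts like Noise2Noise.
            return F.mse_loss(mapped_to_b, noisy_b)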

    DenoiSeg: Joint Denoising and Segmentation

    Microscopy image analysis often requires the segmentation of objects, but training data for this task is typically scarce and hard to obtain. Here we propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations. We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations. The reason for the success of our method is that segmentation can profit from denoising, especially when performed jointly within the same network. The network becomes a denoising expert by seeing all available raw data, while co-learning to segment, even if only a few segmentation labels are available. This hypothesis is additionally fueled by our observation that the best segmentation results on high-quality (very low noise) raw data are obtained when moderate amounts of synthetic noise are added. This renders the denoising task non-trivial and unleashes the desired co-learning effect. We believe that DenoiSeg offers a viable way to circumvent the tremendous hunger for high-quality training data and effectively enables few-shot learning of dense segmentations. Comment: 10 pages, 4 figures, 2 pages supplement (4 figures).
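    The joint objective described above can be sketched roughly as follows; the equal weighting, the 4-channel output layout, and the label convention are illustrative assumptions rather than the exact DenoiSeg formulation. One network predicts a denoised image plus three segmentation classes, the Noise2Void-style term is applied to the blind-spot-masked pixels, and the segmentation term only where annotations exist:

        import torch
        import torch.nn.functional as F

        def denoiseg_loss(pred, noisy, n2v_mask, labels, alpha=0.5):
            """pred: (B, 4, H, W) = 1 denoising channel + 3 segmentation logits.

            labels: integer (B, H, W) map with {0: background, 1: foreground,
            2: border}, or -1 where no annotation is available.
            """
            denoised, seg_logits = pred[:, :1], pred[:, 1:]

            # Noise2Void-style term: only the blind-spot-masked pixels contribute.
            denoise_loss = F.mse_loss(denoised * n2v_mask, noisy * n2v_mask,
                                      reduction="sum") / n2v_mask.sum()

            # 3-class segmentation term, restricted to the (few) annotated pixels.
            per_pixel = F.cross_entropy(seg_logits, labels.clamp(min=0),
                                        reduction="none")
            annotated = (labels >= 0).float()
            seg_loss = (per_pixel * annotated).sum() / annotated.sum().clamp(min=1)

            return alpha * denoise_loss + (1 - alpha) * seg_loss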

    Fully Unsupervised Probabilistic Noise2Void

    Image denoising is the first step in many biomedical image analysis pipelines, and Deep Learning (DL) based methods are currently the best performing. A new category of DL methods such as Noise2Void or Noise2Self can be used fully unsupervised, requiring nothing but the noisy data. However, this comes at the price of reduced reconstruction quality. The recently proposed Probabilistic Noise2Void (PN2V) improves results, but requires an additional noise model for which calibration data needs to be acquired. Here, we present improvements to PN2V that (i) replace histogram-based noise models by parametric noise models, and (ii) show how suitable noise models can be created even in the absence of calibration data. This is a major step since it actually renders PN2V fully unsupervised. We demonstrate that all proposed improvements are not only academic but indeed relevant. Comment: Accepted at ISBI 2020.
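    As a hedged illustration of what replacing a histogram-based noise model with a parametric one can look like (the Gaussian-mixture form and the parameter layout below are my simplification, not necessarily the exact PN2V model), the noise model provides p(observation | signal), which the probabilistic training then combines with the network's per-pixel predictions:

        import torch

        def gmm_noise_likelihood(observation, signal, weights, mean_offsets, sigmas):
            """p(observation | signal) as a K-component Gaussian mixture.

            weights, mean_offsets, sigmas: (K,) tensors describing the components;
            the means are modelled relative to the (unknown) clean signal. In a
            PN2V-style setup these parameters could depend on the signal level and
            be fitted from calibration data, or bootstrapped without it.
            """
            obs = observation.unsqueeze(-1)            # (..., 1)
            mu = signal.unsqueeze(-1) + mean_offsets   # (..., K)
            var = sigmas ** 2
            comp = torch.exp(-0.5 * (obs - mu) ** 2 / var) / torch.sqrt(2 * torch.pi * var)
            return (weights * comp).sum(dim=-1)        # same shape as observation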

    Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising

    Significant progress has been made in self-supervised image denoising (SSID) in the recent few years. However, most methods focus on dealing with spatially independent noise, and they have little practicality on real-world sRGB images with spatially correlated noise. Although pixel-shuffle downsampling has been suggested for breaking the noise correlation, it breaks the original information of images, which limits the denoising performance. In this paper, we propose a novel perspective to solve this problem, i.e., seeking spatially adaptive supervision for real-world sRGB image denoising. Specifically, we take into account the respective characteristics of flat and textured regions in noisy images and construct supervisions for them separately. For flat areas, the supervision can be safely derived from non-adjacent pixels, which are far enough from the current pixel to exclude the influence of noise-correlated ones. We extend the blind-spot network to a blind-neighborhood network (BNN) to provide supervision on flat areas. For textured regions, the supervision has to be closely related to the content of adjacent pixels, so we present a locally aware network (LAN) to meet this requirement, while LAN itself is selectively supervised with the output of BNN. Combining these two supervisions, a denoising network (e.g., U-Net) can be well trained. Extensive experiments show that our method performs favorably against state-of-the-art SSID methods on real-world sRGB photographs. The code is available at https://github.com/nagejacob/SpatiallyAdaptiveSSID. Comment: CVPR 2023 Camera Ready.
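    A much-simplified sketch of how the two supervisions described above could be combined (the per-pixel flatness map and the plain MSE blending are my assumptions; the paper derives the region split and the supervision of LAN from the data itself): flat regions are supervised by the blind-neighborhood prediction, textured regions by the locally aware one, and a denoising U-Net is trained against the blended target:

        import torch.nn.functional as F

        def spatially_adaptive_loss(denoiser, noisy, bnn_out, lan_out, flatness):
            """Train a denoiser with region-dependent supervision.

            bnn_out:  blind-neighborhood network output, reliable on flat areas.
            lan_out:  locally aware network output, reliable on textured areas.
            flatness: per-pixel map in [0, 1]; 1 = flat, 0 = textured.
            """
            pred = denoiser(noisy)
            target = flatness * bnn_out + (1.0 - flatness) * lan_out
            # The targets come from the two auxiliary networks, so they are detached
            # and only the final denoiser receives gradients here.
            return F.mse_loss(pred, target.detach())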