Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction
In this thesis I will use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data.
Then, I will present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations.
Finally, I address the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT).
In the next paragraphs, I briefly summarize the individual contributions.
Electron microscopy is the go-to method for high-resolution imaging in biological research.
Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses.
However, slow scanning speeds are required to obtain SEM images of sufficient quality.
In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and use them to train content-aware image restoration (CARE) networks.
Once such a network is trained, it can be applied to noisy data to restore high quality images.
With SEM-CARE I show how this approach can be applied directly to SEM data, allowing us to scan samples faster and resulting in - to -fold imaging speedups for SEM imaging.
In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions.
However, the absence of contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images.
Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult and often forces it to be done manually.
To facilitate downstream analysis and manual browsing of cryo tomograms, I present cryoCARE, a Noise2Noise (Lehtinen et al. 2018) based denoising method that is able to restore high-contrast, low-noise tomograms from sparse-view, low-dose tilt-series.
An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin.
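The Noise2Noise idea that cryoCARE builds on, training one noisy observation against a second, independently noisy observation of the same signal, can be sketched as follows. This is a minimal NumPy illustration; the even/odd pairing scheme and the function names are illustrative, not the cryoCARE implementation:

```python
import numpy as np

def noise2noise_pairs(frames):
    """Split a stack of per-frame low-dose acquisitions into two
    independent noisy observations of the same underlying signal.
    Averaging even and odd frames keeps the noise in the two halves
    statistically independent, which is all Noise2Noise training needs."""
    even = frames[0::2].mean(axis=0)
    odd = frames[1::2].mean(axis=0)
    return even, odd

def noise2noise_loss(prediction, target):
    """MSE between the network output for one noisy half and the other
    noisy half; in expectation its minimiser is the clean signal."""
    return float(np.mean((prediction - target) ** 2))

# Toy demonstration: 8 noisy frames of a constant signal.
rng = np.random.default_rng(0)
signal = np.full((4, 4), 5.0)
frames = signal + rng.normal(0, 1, size=(8, 4, 4))
inp, tgt = noise2noise_pairs(frames)
loss = noise2noise_loss(inp, tgt)
```

In a real setting a network prediction on `inp` would replace `inp` itself inside the loss; the point is only that no clean target is ever required.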
Next, I will discuss the problem of self-supervised image denoising.
With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, hence the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied.
However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply for lack of access to the imaging system used.
In such cases we have to fall back to self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach.
Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012).
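The blind-spot training trick at the heart of Noise2Void, replacing a few pixels by random neighbours and computing the loss only at those positions, can be sketched as follows. Function names and the neighbour window size are illustrative, not the Noise2Void package API:

```python
import numpy as np

def n2v_mask(img, n_pixels, rng):
    """Create a Noise2Void training input: replace a few randomly chosen
    pixels by a random neighbour's value and remember their positions.
    The loss is later evaluated only at these blind-spot pixels, so the
    network cannot simply learn the identity function."""
    masked = img.copy()
    h, w = img.shape
    ys = rng.integers(0, h, size=n_pixels)
    xs = rng.integers(0, w, size=n_pixels)
    for y, x in zip(ys, xs):
        # Pick a random offset in a 5x5 window, avoiding (0, 0).
        while True:
            dy, dx = rng.integers(-2, 3, size=2)
            if dy != 0 or dx != 0:
                break
        masked[y, x] = img[(y + dy) % h, (x + dx) % w]
    return masked, (ys, xs)

def n2v_loss(pred, original, coords):
    """MSE restricted to the blind-spot pixels."""
    ys, xs = coords
    return float(np.mean((pred[ys, xs] - original[ys, xs]) ** 2))

rng = np.random.default_rng(1)
noisy = rng.normal(0, 1, size=(32, 32))
masked, coords = n2v_mask(noisy, n_pixels=16, rng=rng)
```

During training, `masked` is fed to the network and `n2v_loss` compares the prediction at the blind spots against the original noisy values.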
In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks.
I develop a novel 1D image encoding based on the Fourier transform, in which each prefix encodes the whole image at reduced resolution; I call it the Fourier Domain Encoding (FDE).
I use FIT with FDEs and present proof of concept for super-resolution and tomographic reconstruction with missing wedge correction.
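The defining property of such an encoding, namely that every prefix of the coefficient sequence is a reduced-resolution version of the full image, can be sketched as follows. The radial ordering and the lack of any normalisation here are illustrative, not the thesis implementation:

```python
import numpy as np

def fourier_domain_encoding(img):
    """Unroll the 2D Fourier coefficients of an image into a 1D sequence
    ordered by radial frequency, so that every prefix of the sequence
    corresponds to a low-pass (reduced-resolution) version of the image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    order = np.argsort(radius.ravel(), kind="stable")
    return f.ravel()[order], order, f.shape

def decode_prefix(seq, order, shape, n_coeffs):
    """Reconstruct the image from only the first n_coeffs sequence
    entries; all higher-frequency coefficients are set to zero."""
    flat = np.zeros(shape[0] * shape[1], dtype=complex)
    flat[order[:n_coeffs]] = seq[:n_coeffs]
    f = flat.reshape(shape)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 16))
seq, order, shape = fourier_domain_encoding(img)
full = decode_prefix(seq, order, shape, 16 * 16)  # all coefficients: exact
low = decode_prefix(seq, order, shape, 32)        # short prefix: blurred
```

An autoregressive Transformer trained on such sequences can then be asked to continue a short (low-resolution) prefix, which amounts to predicting the missing high-frequency content.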
The missing wedge artefacts in tomographic imaging originate from sparse-view imaging.
Sparse-view imaging is used to keep the total exposure of the imaged sample to a minimum, by acquiring only a limited number of projection images.
However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by missing wedges in Fourier space and visible as streaking artefacts in real image space.
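The connection between a limited tilt range and real-space artefacts can be made concrete with a small NumPy experiment that zeroes a wedge of Fourier coefficients; the wedge geometry and the phantom are illustrative:

```python
import numpy as np

def apply_missing_wedge(img, half_angle_deg):
    """Zero out all Fourier coefficients whose frequency direction lies
    inside a symmetric wedge around the vertical frequency axis,
    mimicking the effect of a limited tilt range in sparse-view
    tomography."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Angle of each frequency vector, measured from the horizontal axis.
    ang = np.degrees(np.arctan2(yy - h // 2, xx - w // 2))
    wedge = np.abs(np.abs(ang) - 90) < half_angle_deg
    f[wedge] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f))), wedge

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0  # simple square phantom
corrupted, wedge = apply_missing_wedge(phantom, half_angle_deg=30)
error = np.abs(corrupted - phantom).mean()
```

Visualising `corrupted` shows the characteristic elongation and streaking along the preserved directions; filling the zeroed coefficients back in, as FIT attempts, would remove the artefact at its Fourier-space origin.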
I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients.
Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.

Contents
Summary
Acknowledgements
1 Introduction
1.1 Scanning Electron Microscopy
1.2 Cryo Transmission Electron Microscopy
1.2.1 Single Particle Analysis
1.2.2 Cryo Tomography
1.3 Tomographic Reconstruction
1.4 Overview and Contributions
2 Denoising in Electron Microscopy
2.1 Image Denoising
2.2 Supervised Image Restoration
2.2.1 Training and Validation Loss
2.2.2 Neural Network Architectures
2.3 SEM-CARE
2.3.1 SEM-CARE Experiments
2.3.2 SEM-CARE Results
2.4 Noise2Noise
2.5 cryoCARE
2.5.1 Restoration of cryo TEM Projections
2.5.2 Restoration of cryo TEM Tomograms
2.5.3 Automated Downstream Analysis
2.6 Implementations and Availability
2.7 Discussion
2.7.1 Tasks Facilitated through cryoCARE
3 Noise2Void: Self-Supervised Denoising
3.1 Probabilistic Image Formation
3.2 Receptive Field
3.3 Noise2Void Training
3.3.1 Implementation Details
3.4 Experiments
3.4.1 Natural Images
3.4.2 Light Microscopy Data
3.4.3 Electron Microscopy Data
3.4.4 Errors and Limitations
3.5 Conclusion and Followup Work
4 Fourier Image Transformer
4.1 Transformers
4.1.1 Attention Is All You Need
4.1.2 Fast-Transformers
4.1.3 Transformers in Computer Vision
4.2 Methods
4.2.1 Fourier Domain Encodings (FDEs)
4.2.2 Fourier Coefficient Loss
4.3 FIT for Super-Resolution
4.3.1 Super-Resolution Data
4.3.2 Super-Resolution Experiments
4.4 FIT for Tomography
4.4.1 Computed Tomography Data
4.4.2 Computed Tomography Experiments
4.5 Discussion
5 Conclusions and Outlook
The Deep Neural Network Approach to the Reference Class Problem
Methods of machine learning (ML) are gradually complementing and sometimes even replacing methods of classical statistics in science. This raises the question of whether ML faces the same methodological problems as classical statistics. This paper sheds light on this question by investigating a long-standing challenge to classical statistics: the reference class problem (RCP). It arises whenever statistical evidence is applied to an individual object, since the individual belongs to several reference classes and evidence might vary across them. Thus, the problem consists in choosing a suitable reference class for the individual. I argue that deep neural networks (DNNs) are able to overcome specific instantiations of the RCP. Whereas the criteria of narrowness, reliability, and homogeneity that have been proposed to determine a suitable reference class pose an inextricable tradeoff to classical statistics, DNNs are able to satisfy them in some situations. On the one hand, they can exploit the high dimensionality in big-data settings. I argue that this corresponds to the criteria of narrowness and reliability. On the other hand, ML research indicates that DNNs are generally not susceptible to overfitting. I argue that this property is related to a particular form of homogeneity. Taking both aspects together reveals that there are specific settings in which DNNs can overcome the RCP.
Machine Learning in Public Health and the Prediction-Intervention Gap
This chapter examines the epistemic value of (purely) predictive ML models for public health. By discussing a novel strand of research at the intersection of ML and economics that recasts policy problems as prediction problems, we argue – against skeptics – that predictive models can indeed be a useful guide for policy interventions, provided that certain conditions hold. Using behavioral approaches to policymaking such as Nudge theory as a contrast class, we carve out a distinct feature of the ML approach to public policy problems: the ML model itself may turn into a cognitive intervention. In underscoring the epistemic value of predictive models, we also highlight the importance of taking a broader perspective on what constitutes good evidence for policymaking. Moreover, by focusing on public health, we also contribute to the understanding of the specific methodological challenges of ML-driven science outside of traditional success areas.
DenoiSeg: Joint Denoising and Segmentation
Microscopy image analysis often requires the segmentation of objects, but
training data for this task is typically scarce and hard to obtain. Here we
propose DenoiSeg, a new method that can be trained end-to-end on only a few
annotated ground truth segmentations. We achieve this by extending Noise2Void,
a self-supervised denoising scheme that can be trained on noisy images alone,
to also predict dense 3-class segmentations. The reason for the success of our
method is that segmentation can profit from denoising, especially when
performed jointly within the same network. The network becomes a denoising
expert by seeing all available raw data, while co-learning to segment, even if
only a few segmentation labels are available. This hypothesis is additionally
fueled by our observation that the best segmentation results on high quality
(very low noise) raw data are obtained when moderate amounts of synthetic noise
are added. This renders the denoising-task non-trivial and unleashes the
desired co-learning effect. We believe that DenoiSeg offers a viable way to
circumvent the tremendous hunger for high quality training data and effectively
enables few-shot learning of dense segmentations.
Comment: 10 pages, 4 figures, 2 pages supplement (4 figures)
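The joint training objective described above, a denoising loss over all pixels combined with a segmentation loss over only the few labelled ones, can be sketched roughly as follows. The function name, the 3-class probability layout, and the weighting are illustrative, not the paper's exact formulation:

```python
import numpy as np

def denoiseg_loss(pred_denoised, pred_seg, noisy, labels, alpha=0.5):
    """DenoiSeg-style joint loss (sketch): a denoising MSE term over all
    pixels, plus a 3-class cross-entropy term over the pixels that carry
    a segmentation label (-1 marks unlabelled pixels). `alpha` weights
    the two tasks."""
    denoise_term = float(np.mean((pred_denoised - noisy) ** 2))
    ys, xs = np.nonzero(labels >= 0)
    if len(ys) == 0:
        return denoise_term  # no labels: pure self-supervised denoising
    cls = labels[ys, xs]
    # Cross-entropy of the predicted class probabilities (pred_seg has
    # shape (3, H, W), softmax over the class axis) at labelled pixels.
    p = np.clip(pred_seg[cls, ys, xs], 1e-12, 1.0)
    seg_term = float(np.mean(-np.log(p)))
    return alpha * denoise_term + (1 - alpha) * seg_term

rng = np.random.default_rng(4)
noisy = rng.normal(size=(8, 8))
pred_dn = noisy + 0.1 * rng.normal(size=(8, 8))
logits = rng.normal(size=(3, 8, 8))
pred_seg = np.exp(logits) / np.exp(logits).sum(axis=0)  # softmax
labels = np.full((8, 8), -1)
labels[2, 3] = 1  # only two labelled pixels
labels[5, 6] = 0
loss = denoiseg_loss(pred_dn, pred_seg, noisy, labels)
```

Because the denoising term sees every raw pixel while the segmentation term sees only the sparse labels, the shared network keeps learning even when annotations are nearly absent, which is the co-learning effect the abstract describes.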
Modular Current Stimulation System for Pre-clinical Studies.
Electric stimulators with precise and reliable outputs are an indispensable part of electrophysiological research. From single cells to deep brain or neuromuscular tissue, there are diverse targets for electrical stimulation. Even though commercial systems are available, we state the need for a low-cost, high-precision, functional, and modular (hardware, firmware, and software) current stimulation system with the capacity to generate stable and complex waveforms for pre-clinical research. The system presented in this study is a USB-controlled 4-channel modular current stimulator that can be expanded and generate biphasic arbitrary waveforms with 16-bit resolution, high temporal precision (μs), and passive charge balancing: the NES STiM (Neuro Electronic Systems Stimulator). We present a detailed description of the system's structural design, the controlling software, reliability tests, and the pre-clinical studies [deep brain stimulation (DBS) in a hemi-PD rat model] in which it was utilized. The NES STiM has been tested with MacOS and Windows operating systems. Interfaces to MATLAB source codes are provided. The system is inexpensive, relatively easy to build, and can be assembled quickly. We hope that the NES STiM will be used in a wide variety of neurological applications such as Functional Electrical Stimulation (FES), DBS, and closed-loop neurophysiological research.
Leveraging Self-supervised Denoising for Image Segmentation
Deep learning (DL) has arguably emerged as the method of choice for the
detection and segmentation of biological structures in microscopy images.
However, DL typically needs copious amounts of annotated training data that is
typically not available in biomedical projects and excessively expensive to
generate. Additionally, tasks become harder in the presence of noise, requiring
even more high-quality training data. Hence, we propose to use denoising
networks to improve the performance of other DL-based image segmentation
methods. More specifically, we present ideas on how state-of-the-art
self-supervised CARE networks can improve cell/nuclei segmentation in
microscopy data. Using two state-of-the-art baseline methods, U-Net and
StarDist, we show that our ideas consistently improve the quality of resulting
segmentations, especially when only limited training data for noisy micrographs
are available.
Comment: accepted at ISBI 202