40 research outputs found
Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction
In this thesis I will use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data.
Then, I will present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations.
Finally, I will approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT).
In the next paragraphs I will briefly summarize the individual contributions.
Electron microscopy is the go-to method for obtaining high-resolution images in biological research.
Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses.
However, slow scanning speeds are required to obtain SEM images of sufficient quality.
In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and use them to train content-aware image restoration (CARE) networks.
Once such a network is trained, it can be applied to noisy data to restore high quality images.
With SEM-CARE I present how this approach can be applied directly to SEM data, allowing samples to be scanned faster and resulting in - to -fold imaging speedups for SEM imaging.
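The supervised training idea behind CARE and SEM-CARE can be sketched in a few lines: given registered low-/high-quality pairs, fit a restoration model by regression. The snippet below is a minimal stand-in, assuming a synthetic 1D signal and a linear 5-tap filter in place of the CNN actually used by CARE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Paired training data: "high-quality" signal and its "fast-scan" noisy counterpart.
clean = np.sin(np.linspace(0, 8 * np.pi, 2000))
noisy = clean + rng.normal(0.0, 0.5, clean.shape)

# Design matrix of sliding 5-tap windows of the noisy signal.
k = 5
windows = np.lib.stride_tricks.sliding_window_view(noisy, k)  # (n - k + 1, k)
targets = clean[k // 2 : k // 2 + windows.shape[0]]

# Supervised "training": least-squares fit of a linear restoration filter
# (a stand-in for the CNN trained in CARE).
kernel, *_ = np.linalg.lstsq(windows, targets, rcond=None)

restored = windows @ kernel
mse_restored = np.mean((restored - targets) ** 2)
mse_noisy = np.mean((noisy[k // 2 : k // 2 + windows.shape[0]] - targets) ** 2)
assert mse_restored < mse_noisy  # the learned filter denoises
```

Once fitted on pairs, the same model is applied to new fast-scan acquisitions, which is exactly the CARE deployment pattern.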
In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions.
However, the absence of contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images.
Hence, reconstructed tomograms suffer from a low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult; it often has to be done manually.
To facilitate downstream analysis and manual browsing of cryo tomograms, I present cryoCARE, a Noise2Noise (Lehtinen et al. 2018) based denoising method that is able to restore high-contrast, low-noise tomograms from sparse-view, low-dose tilt-series.
An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin.
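The Noise2Noise principle underlying cryoCARE can be checked numerically: for zero-mean noise that is independent between two acquisitions of the same content, the loss against a second noisy copy equals the loss against the clean signal plus a constant, so both targets drive training towards the same optimum. A minimal sketch with synthetic data (not the cryoCARE implementation itself):

```python
import numpy as np

rng = np.random.default_rng(1)

signal = rng.uniform(0.0, 1.0, 100_000)              # ground truth
obs_a = signal + rng.normal(0.0, 0.1, signal.shape)  # first noisy acquisition
obs_b = signal + rng.normal(0.0, 0.1, signal.shape)  # independent second acquisition

prediction = obs_a  # any estimator computed from obs_a alone

# For zero-mean noise independent of the prediction, the loss against a second
# noisy copy equals the loss against the clean signal plus the noise variance:
mse_noisy_target = np.mean((prediction - obs_b) ** 2)
mse_clean_target = np.mean((prediction - signal) ** 2)
print(mse_noisy_target, mse_clean_target + 0.1 ** 2)  # nearly identical
```

This is why pairs of low-dose frames from modern cryo TEM cameras suffice as training data, with no clean ground truth ever acquired.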
Next, I will discuss the problem of self-supervised image denoising.
With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, hence the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied.
However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply due to lack of access to the imaging system used.
In such cases we have to fall back on self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach.
Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012).
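The core Noise2Void trick is a blind-spot masking scheme: a few pixels are replaced by a random neighbour, and the network is trained to predict their original (still noisy) values, so it cannot learn the identity mapping. Below is a minimal sketch of the masking step only; the surrounding network and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def blindspot_mask(img, n_pixels):
    """Replace randomly chosen pixels by a random neighbour; return the masked
    image and the coordinates whose original values become training targets."""
    masked = img.copy()
    h, w = img.shape
    ys = rng.integers(1, h - 1, n_pixels)
    xs = rng.integers(1, w - 1, n_pixels)
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:        # exclude the pixel itself
            dy, dx = rng.integers(-1, 2, 2)
        masked[y, x] = img[y + dy, x + dx]
    return masked, ys, xs

noisy = rng.normal(0.0, 1.0, (64, 64))
masked, ys, xs = blindspot_mask(noisy, 32)
# A network would now be trained to predict noisy[ys, xs] from `masked`,
# so it can never learn to simply copy its (blinded) input pixel.
```

Because the loss is computed only at the masked positions, a single noisy image provides its own training targets.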
In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks.
I develop a novel 1D image encoding based on the Fourier transform, in which each prefix encodes the whole image at reduced resolution; I call it the Fourier Domain Encoding (FDE).
I use FIT with FDEs and present proof of concept for super-resolution and tomographic reconstruction with missing wedge correction.
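The prefix property of the FDE can be illustrated in 1D with plain NumPy: ordering Fourier coefficients from low to high frequency means any prefix of that sequence decodes to a lower-resolution rendering of the full signal. This is only an illustrative sketch, not the exact encoding used by FIT.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(0.0, 1.0, 256)

# rfft orders coefficients from low to high frequency.
coeffs = np.fft.rfft(signal)

def decode_prefix(coeffs, n_keep, n=256):
    """Decode only the first n_keep (lowest-frequency) coefficients."""
    truncated = np.zeros_like(coeffs)
    truncated[:n_keep] = coeffs[:n_keep]
    return np.fft.irfft(truncated, n=n)

# Longer prefixes decode to better approximations of the full signal.
err_short = np.mean((decode_prefix(coeffs, 16) - signal) ** 2)
err_long = np.mean((decode_prefix(coeffs, 64) - signal) ** 2)
assert err_long < err_short
```

An autoregressive Transformer trained on such sequences can therefore be read as progressively refining the image as it emits further coefficients.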
The missing wedge artefacts in tomographic imaging originate from sparse-view acquisition.
Sparse-view imaging keeps the total exposure of the imaged sample to a minimum by acquiring only a limited number of projection images.
However, tomographic reconstructions from sparse-view acquisitions are affected by missing wedge artefacts, characterized by a missing wedge of coefficients in Fourier space and visible as streaking artefacts in real image space.
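The effect described here is easy to reproduce: zeroing a wedge of Fourier coefficients of an image and transforming back produces the characteristic corruption. A small NumPy sketch, with the wedge angles chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(0.0, 1.0, (128, 128))

# Frequency coordinates and their angle for every Fourier coefficient.
fy = np.fft.fftfreq(128)[:, None]
fx = np.fft.fftfreq(128)[None, :]
angle = np.abs(np.degrees(np.arctan2(fy, fx)))

# Zero out a +-30 degree wedge around the vertical frequency axis, mimicking
# a tilt-series limited to +-60 degrees.
wedge = (angle > 60) & (angle < 120)
spectrum = np.fft.fft2(img)
spectrum[wedge] = 0.0
corrupted = np.real(np.fft.ifft2(spectrum))

removed_fraction = wedge.mean()  # fraction of coefficients lost to the wedge
```

A method that predicts the zeroed coefficients in Fourier space, as FIT does, attacks exactly this corruption rather than its real-space symptoms.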
I show that FITs can be applied to tomographic reconstruction and that they fill in missing Fourier coefficients.
Hence, FIT for tomographic reconstruction solves the missing wedge problem at its source.

Contents
Summary
Acknowledgements
1 Introduction
  1.1 Scanning Electron Microscopy
  1.2 Cryo Transmission Electron Microscopy
    1.2.1 Single Particle Analysis
    1.2.2 Cryo Tomography
  1.3 Tomographic Reconstruction
  1.4 Overview and Contributions
2 Denoising in Electron Microscopy
  2.1 Image Denoising
  2.2 Supervised Image Restoration
    2.2.1 Training and Validation Loss
    2.2.2 Neural Network Architectures
  2.3 SEM-CARE
    2.3.1 SEM-CARE Experiments
    2.3.2 SEM-CARE Results
  2.4 Noise2Noise
  2.5 cryoCARE
    2.5.1 Restoration of cryo TEM Projections
    2.5.2 Restoration of cryo TEM Tomograms
    2.5.3 Automated Downstream Analysis
  2.6 Implementations and Availability
  2.7 Discussion
    2.7.1 Tasks Facilitated through cryoCARE
3 Noise2Void: Self-Supervised Denoising
  3.1 Probabilistic Image Formation
  3.2 Receptive Field
  3.3 Noise2Void Training
    3.3.1 Implementation Details
  3.4 Experiments
    3.4.1 Natural Images
    3.4.2 Light Microscopy Data
    3.4.3 Electron Microscopy Data
    3.4.4 Errors and Limitations
  3.5 Conclusion and Followup Work
4 Fourier Image Transformer
  4.1 Transformers
    4.1.1 Attention Is All You Need
    4.1.2 Fast-Transformers
    4.1.3 Transformers in Computer Vision
  4.2 Methods
    4.2.1 Fourier Domain Encodings (FDEs)
    4.2.2 Fourier Coefficient Loss
  4.3 FIT for Super-Resolution
    4.3.1 Super-Resolution Data
    4.3.2 Super-Resolution Experiments
  4.4 FIT for Tomography
    4.4.1 Computed Tomography Data
    4.4.2 Computed Tomography Experiments
  4.5 Discussion
5 Conclusions and Outlook
Noise2Inverse: Self-supervised deep convolutional denoising for tomography
Recovering a high-quality image from noisy indirect measurements is an
important problem with many applications. For such inverse problems, supervised
deep convolutional neural network (CNN)-based denoising methods have shown
strong results, but the success of these supervised methods critically depends
on the availability of a high-quality training dataset of similar measurements.
For image denoising, methods are available that enable training without a
separate training dataset by assuming that the noise in two different pixels is
uncorrelated. However, this assumption does not hold for inverse problems,
resulting in artifacts in the denoised images produced by existing methods.
Here, we propose Noise2Inverse, a deep CNN-based denoising method for linear
image reconstruction algorithms that does not require any additional clean or
noisy data. Training a CNN-based denoiser is enabled by exploiting the noise
model to compute multiple statistically independent reconstructions. We develop
a theoretical framework which shows that such training indeed obtains a
denoising CNN, assuming the measured noise is element-wise independent and
zero-mean. On simulated CT datasets, Noise2Inverse demonstrates an improvement
in peak signal-to-noise ratio and structural similarity index compared to
state-of-the-art image denoising methods and conventional reconstruction
methods, such as Total-Variation Minimization. We also demonstrate that the
method is able to significantly reduce noise in challenging real-world
experimental datasets.
Comment: This paper appears in IEEE Transactions on Computational Imaging, pages 1320-1335; Print ISSN: 2333-9403; Online ISSN: 2333-9403; Digital Object Identifier: 10.1109/TCI.2020.301964
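The mechanism Noise2Inverse relies on can be seen in a toy setting: splitting the measurements into disjoint subsets yields reconstructions whose residual noise is statistically independent, so one can serve as training input and the other as training target. In the sketch below, simple averaging of repeated noisy measurements stands in for the tomographic reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
slice_true = rng.uniform(0.0, 1.0, 50_000)

# Stand-in for projections: repeated noisy measurements of the same slice.
measurements = slice_true + rng.normal(0.0, 0.3, (8,) + slice_true.shape)

# Split into two halves and "reconstruct" each (here: simple averaging).
recon_even = measurements[0::2].mean(axis=0)
recon_odd = measurements[1::2].mean(axis=0)

# The residual noise of the two reconstructions is statistically independent,
# which is what licenses training a denoiser with one as input, the other as target.
noise_even = recon_even - slice_true
noise_odd = recon_odd - slice_true
correlation = np.corrcoef(noise_even, noise_odd)[0, 1]
print(abs(correlation))  # close to zero
```

In the actual method, the subsets are angular subsets of the sinogram and the reconstructions come from a linear algorithm such as FBP, but the independence argument is the same.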
Directional Sinogram Inpainting for Limited Angle Tomography
In this paper we propose a new joint model for the reconstruction of tomography data under limited angle sampling regimes. In many applications of Tomography, e.g. Electron Microscopy and Mammography, physical limitations on acquisition lead to regions of data which cannot be sampled. Depending on the severity of the restriction, reconstructions can contain severe, characteristic artefacts. Our model aims to address these artefacts by inpainting the missing data simultaneously with the reconstruction. Numerically, this problem naturally evolves to require the minimisation of a non-convex and non-smooth functional, so we review recent work on this topic and extend results to fit an alternating (block) descent framework. We perform numerical experiments on two synthetic datasets and one Electron Microscopy dataset. Our results show consistently that the joint inpainting and reconstruction framework can recover cleaner and more accurate structural information than current state-of-the-art methods.
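The alternating (block) descent structure described above can be sketched with a generic linear forward operator standing in for the Radon transform. The operator, the problem sizes, and the absence of regularisers are all simplifications for illustration; the actual model minimises a non-convex, non-smooth functional with directional regularisation.

```python
import numpy as np

rng = np.random.default_rng(6)

n, m = 40, 120
A = rng.normal(0.0, 1.0, (m, n))          # stand-in for the projection operator
x_true = rng.normal(0.0, 1.0, n)
data = A @ x_true                          # ideal, noise-free sinogram
measured = np.arange(m) < 2 * m // 3       # a block of "angles" is missing

# Alternate between a reconstruction step (fit x to the current sinogram
# estimate v) and an inpainting step (fill the missing part of v from A @ x).
v = np.where(measured, data, 0.0)
objective = []
for _ in range(20):
    x, *_ = np.linalg.lstsq(A, v, rcond=None)   # reconstruction step
    objective.append(np.linalg.norm(A @ x - v))
    v = np.where(measured, data, A @ x)          # inpainting step

assert objective[-1] < objective[0]              # joint fit improves monotonically
```

Each half-step minimises the shared data-fit term over one block of variables, which is exactly what makes the alternating scheme monotone.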
Mathematical Challenges in Electron Microscopy
Development of electron microscopes first started nearly 100 years ago and they are now a mature imaging modality with many applications and vast potential for the future. The principal feature of electron microscopes is their resolution; they can be up to 1000 times more powerful than a visible light microscope and resolve even the smallest atoms. Furthermore, electron microscopes are also sensitive to many material properties due to the very rich interactions between electrons and other matter. Because of these capabilities, electron microscopy is used in applications as diverse as drug discovery, computer chip manufacture, and the development of solar cells.
In parallel to this, the mathematical field of inverse problems has also evolved dramatically. Many new methods have been introduced to improve the recovery of unknown structures from indirect data, typically an ill-posed problem. In particular, sparsity promoting functionals such as the total variation and its extensions have been shown to be very powerful for recovering accurate physical quantities from very little and/or poor quality data. While sparsity-promoting reconstruction methods are powerful, they can also be slow, especially in a big-data setting. This trade-off forms an eternal cycle as new numerical tools are found and more powerful models are developed.
The work presented in this thesis aims to marry the tools of inverse problems with the problems of electron microscopy: bringing state-of-the-art image processing techniques to bear on challenges specific to electron microscopy, developing new optimisation methods for these problems, and modelling new inverse problems to extend the capabilities of existing microscopes. One focus is the application of a directional total variation to overcome the limited angle problem in electron tomography, another is the proposal of a new inverse problem for the reconstruction of 3D strain tensor fields from electron microscopy diffraction data. The remaining contributions target numerical aspects of inverse problems, from new algorithms for non-convex problems to convex optimisation with adaptive meshes.
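As an illustration of the sparsity-promoting functionals mentioned above, the following sketch denoises a piecewise-constant 1D signal by gradient descent on a smoothed total-variation model. The parameters are arbitrary, and practical solvers use more sophisticated schemes (e.g. primal-dual methods); this only shows the shape of the model.

```python
import numpy as np

rng = np.random.default_rng(7)

clean = np.repeat([0.0, 1.0, 0.3, 0.8], 64)       # piecewise-constant signal
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

def tv_denoise(f, lam=0.3, eps=1e-2, steps=800, lr=0.05):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps),
    a smoothed 1D total-variation model."""
    u = f.copy()
    for _ in range(steps):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)            # derivative of the smoothed |du|
        grad = u - f
        grad[:-1] -= lam * w                        # each |du_i| pulls on u[i] ...
        grad[1:] += lam * w                         # ... and pushes on u[i+1]
        u -= lr * grad
    return u

denoised = tv_denoise(noisy)
total_variation = lambda u: np.abs(np.diff(u)).sum()
assert total_variation(denoised) < total_variation(noisy)
```

Noise in the flat regions is strongly suppressed while the jumps survive, which is the edge-preserving behaviour that makes TV-type priors attractive for tomography.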
Statistical analysis and modeling for biomolecular structures
Most recent studies on biomolecules address their three-dimensional structure, since it is closely related to their function in a biological system. The structure of biomolecules can be determined using various methods, which rely on data from experimental instruments or on computational analysis of previously obtained data or datasets. Single particle reconstruction using electron microscopy images of macromolecules has proven to be a resource-wise useful and affordable way of determining their molecular structure in increasing detail.
The main goal of this thesis is to contribute to the single particle reconstruction methodology by adding a denoising step to the analysis of cryo-electron microscopy images. First, denoising methods are briefly surveyed and their efficiency in filtering cryo-electron microscopy images is evaluated. The focus of this thesis is the information-theoretic minimum description length (MDL) principle for efficiently coding the essential part of the signal. This approach can also be used to reduce noise in signals, and here it is applied to develop a novel denoising method for cryo-electron microscopy images. An existing denoising method has been modified to suit the given problem in single particle reconstruction. In addition, a more general denoising method has been developed, based on a novel way of finding the model class using the MDL principle. This method was then thoroughly tested and compared with existing methods in order to evaluate the utility of denoising in single particle reconstruction.
A secondary goal of the research in this thesis is the study of protein oligomerisation using computational approaches. The focus has been on recognizing the residues in proteins that interact during oligomerisation and on modelling the interaction site of the hantavirus N-protein. To unravel the interaction structure, the approach has been to understand the phenomenon of protein folding towards quaternary structure.
Transmission electron tomography: quality assessment and enhancement for three-dimensional imaging of nanostructures
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and chemically more complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method was proposed, named dictionary learning electron tomography (DLET).
DLET is based on the mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby achieving better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm alternates between learning the sparsifying dictionary and employing it to remove artifacts and noise in one step, and restoring the tomogram data in the other step. Simulation and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the method's efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images strictly satisfy the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). It also avoids artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation). Moreover, this thesis shows how reliable, elementally sensitive tomography is possible with the aid of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material.
Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
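The patch-based sparsity at the heart of DLET can be sketched in 1D, with a fixed DCT dictionary and hard thresholding standing in for the learned dictionary and the alternating learning procedure (both are simplifications of the actual method).

```python
import numpy as np

rng = np.random.default_rng(9)

p = 8                                             # patch size
# Fixed, orthonormal DCT-II dictionary as a stand-in for the learned one.
idx = np.arange(p)
dct = np.cos(np.pi * (idx[:, None] + 0.5) * idx[None, :] / p)
dct /= np.linalg.norm(dct, axis=0)

signal = np.repeat(np.sin(np.linspace(0, 4 * np.pi, 32)), 8)
noisy = signal + rng.normal(0.0, 0.3, signal.shape)

# Sparse-code overlapping patches by hard-thresholding dictionary coefficients,
# then average the overlapping patch reconstructions.
recon = np.zeros_like(noisy)
weight = np.zeros_like(noisy)
for start in range(len(noisy) - p + 1):
    patch = noisy[start : start + p]
    c = dct.T @ patch                             # analysis: dictionary coefficients
    c[np.abs(c) < 0.6] = 0.0                      # sparsity: keep only strong atoms
    recon[start : start + p] += dct @ c           # synthesis from the sparse code
    weight[start : start + p] += 1.0
denoised = recon / weight
```

DLET additionally re-estimates the dictionary from the data in each outer iteration, which is what adapts the sparsity model to the specific tomogram instance.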