Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks
Fluorescence microscopy images usually show severe anisotropy in axial versus
lateral resolution. This hampers downstream processing, e.g., the automatic
extraction of quantitative biological data. While deconvolution methods and
other techniques to address this problem exist, they are either time consuming
to apply or limited in their ability to remove anisotropy. We propose a method
to recover isotropic resolution from readily acquired anisotropic data. We
achieve this using a convolutional neural network that is trained end-to-end
from the same anisotropic body of data we later apply the network to. The
network effectively learns to restore the full isotropic resolution by
restoring the image under a trained, sample-specific image prior. We apply our
method to synthetic and real datasets and show that our results improve
on results from deconvolution and state-of-the-art super-resolution techniques.
Finally, we demonstrate that a standard 3D segmentation pipeline achieves
comparable accuracy on the output of our network as on fully isotropic
data.
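The core self-supervised trick described above — training on the same anisotropic body of data the network is later applied to — can be illustrated by generating training pairs from high-resolution lateral slices. The following is a minimal numpy sketch of that idea, not the authors' code; the function name and pooling scheme are illustrative assumptions:

```python
import numpy as np

def make_training_pair(lateral_slice, axial_factor):
    """Simulate axial anisotropy on an isotropic lateral (x-y) slice.

    Hypothetical helper: average-pool along one axis to mimic the
    coarser axial sampling; the original slice is the restoration target.
    """
    h, w = lateral_slice.shape
    h_crop = h - h % axial_factor          # crop so h divides evenly
    target = lateral_slice[:h_crop]
    # Block-average along the first axis to mimic axial blurring/subsampling
    pooled = target.reshape(h_crop // axial_factor, axial_factor, w).mean(axis=1)
    # Nearest-neighbour upsampling back to the original grid, so the
    # degraded input and the target share a shape for end-to-end training
    degraded = np.repeat(pooled, axial_factor, axis=0)
    return degraded, target

rng = np.random.default_rng(0)
slice_hr = rng.random((64, 64)).astype(np.float32)
x, y = make_training_pair(slice_hr, axial_factor=4)
```

A network trained on many such (degraded, target) pairs learns a sample-specific prior it can then apply along the genuinely anisotropic axial direction.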
Self-Supervised Super-Resolution Approach for Isotropic Reconstruction of 3D Electron Microscopy Images from Anisotropic Acquisition
Three-dimensional electron microscopy (3DEM) is an essential technique to
investigate volumetric tissue ultrastructure. Due to technical limitations and
high imaging costs, samples are often imaged anisotropically, with lower
resolution in the axial direction than in the lateral directions. This
anisotropy in 3DEM can hamper subsequent analysis and visualization tasks. To
overcome this limitation, we propose a novel deep-learning (DL)-based
self-supervised super-resolution approach that computationally reconstructs
isotropic 3DEM from the anisotropic acquisition. The proposed DL-based
framework is built upon a U-shaped architecture incorporating
vision-transformer (ViT) blocks, enabling high-capability learning of local and
global multi-scale image dependencies. To train the tailored network, we employ
a self-supervised approach. Specifically, we generate pairs of anisotropic and
isotropic training datasets from the given anisotropic 3DEM data. By feeding
the given anisotropic 3DEM dataset into the trained network, our proposed
framework then obtains the isotropic 3DEM. Importantly, this isotropic
reconstruction approach relies solely on the given anisotropic 3DEM dataset and
does not require pairs of co-registered anisotropic and isotropic 3DEM training
datasets. To evaluate the effectiveness of the proposed method, we conducted
experiments using three 3DEM datasets acquired from brain tissue. The experimental
results demonstrated that our proposed framework could successfully reconstruct
isotropic 3DEM from the anisotropic acquisition.
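A useful reference point for such learned reconstruction is plain interpolation of the anisotropic volume onto an isotropic grid. This is a minimal numpy sketch of that naive baseline, assuming an integer axial downsampling factor; it is not part of the proposed framework:

```python
import numpy as np

def resample_isotropic(volume, axial_factor):
    """Linear interpolation along z onto an isotropic grid.

    Naive baseline only: it fills in missing axial planes by blending
    neighbours, whereas a trained network can restore real detail.
    """
    z = volume.shape[0]
    new_z = (z - 1) * axial_factor + 1     # isotropic grid size along z
    zs = np.linspace(0.0, z - 1.0, new_z)  # fractional source positions
    lo = np.floor(zs).astype(int)
    hi = np.minimum(lo + 1, z - 1)
    t = (zs - lo)[:, None, None]           # interpolation weights
    return (1.0 - t) * volume[lo] + t * volume[hi]

vol = np.arange(8 * 4 * 4, dtype=float).reshape(8, 4, 4)
iso = resample_isotropic(vol, axial_factor=5)   # shape (36, 4, 4)
```

Original z-planes are reproduced exactly at their grid positions; everything in between is a linear blend, which is precisely the blur a learned super-resolution method aims to avoid.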
Machine learning of hierarchical clustering to segment 2D and 3D images
We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
Comment: 15 pages, 8 figures
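Variation of information, the accuracy measure this abstract advocates, compares two segmentations via their conditional entropies: VI(A, B) = H(A|B) + H(B|A) = H(A) + H(B) - 2 I(A; B). A minimal numpy sketch of the metric (not the paper's implementation) for label arrays of any dimensionality:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A) + H(B) - 2 I(A; B) from the joint label histogram.

    Lower is better; 0 exactly when the two segmentations agree up to
    a relabelling of segment IDs.
    """
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Map labels to contiguous indices and build the contingency table
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    cont = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(cont, (ai, bi), 1.0)
    p = cont / n                 # joint distribution over label pairs
    pa = p.sum(axis=1)           # marginal of A
    pb = p.sum(axis=0)           # marginal of B
    nz = p > 0
    mi = np.sum(p[nz] * np.log(p[nz] / np.outer(pa, pb)[nz]))
    entropy = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return entropy(pa) + entropy(pb) - 2.0 * mi
```

Because VI is invariant to label permutation, it is well suited to agglomerative segmentations, where segment IDs carry no meaning.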
Segmentation in large-scale cellular electron microscopy with deep learning: A literature survey
Electron microscopy (EM) enables high-resolution imaging of tissues and cells using 2D and 3D imaging techniques. Because manual segmentation of large-scale EM datasets is laborious and time-consuming, automated segmentation approaches are crucial. This review traces the progress of deep-learning-based segmentation techniques in large-scale cellular EM over the last six years, during which significant advances have been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, such as heterogeneity and spatial complexity, are described, along with the network architectures that overcame some of them. Moreover, an overview is provided of the evaluation measures used to benchmark EM datasets in various segmentation tasks. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially regarding large-scale models and the use of unlabeled images to learn generic features across EM datasets.
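Among the evaluation measures the survey covers, overlap-based scores such as Dice and IoU are the most widely used for semantic EM segmentation. A minimal numpy sketch for binary masks, assuming the simple foreground-only definitions (benchmark-specific variants differ):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Foreground Dice coefficient and intersection-over-union for
    binary masks. Illustrative definitions only; individual EM
    benchmarks may use adapted or instance-level variants."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty masks match
    iou = inter / union if union else 1.0
    return float(dice), float(iou)
```

Note the two scores are monotonically related for a single mask pair (Dice = 2 IoU / (1 + IoU)), but they average differently across a dataset, which is why benchmarks report one or the other explicitly.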