Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction
We present a method combining affinity prediction with region agglomeration,
which improves significantly upon the state of the art of neuron segmentation
from electron microscopy (EM) in accuracy and scalability. Our method consists
of a 3D U-Net, trained to predict affinities between voxels, followed by
iterative region agglomeration. We train using a structured loss based on
MALIS, encouraging topologically correct segmentations obtained from affinity
thresholding. Our extension consists of two parts: First, we present a
quasi-linear method to compute the loss gradient, improving over the original
quadratic algorithm. Second, we compute the gradient in two separate passes to
avoid spurious gradient contributions in early training stages. Our predictions
are accurate enough that simple learning-free percentile-based agglomeration
outperforms more involved methods used earlier on inferior predictions. We
present results on three diverse EM datasets, achieving relative improvements
over previous results of 27%, 15%, and 250%. Our findings suggest that a single
method can be applied to both nearly isotropic block-face EM data and
anisotropic serial sectioned EM data. The runtime of our method scales linearly
with the size of the volume and achieves a throughput of about 2.6 seconds per
megavoxel, qualifying our method for the processing of very large datasets.
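The "learning-free percentile-based agglomeration" mentioned above can be illustrated with a minimal sketch: greedily merge voxels over high-affinity edges using a union-find structure, stopping at a threshold. This is an illustrative simplification, not the authors' implementation; all names are hypothetical.

```python
import numpy as np

class UnionFind:
    """Disjoint-set structure with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def agglomerate(n_voxels, edges, affinities, threshold):
    """Merge voxels connected by edges whose predicted affinity
    exceeds the threshold, processing high-affinity edges first."""
    uf = UnionFind(n_voxels)
    order = np.argsort(-affinities)  # descending affinity
    for i in order:
        if affinities[i] < threshold:
            break  # all remaining edges are below threshold
        a, b = edges[i]
        uf.union(a, b)
    return [uf.find(v) for v in range(n_voxels)]
```

For example, with four voxels in a chain and one weak middle edge, thresholding at 0.5 yields two segments of two voxels each.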
Machine learning of hierarchical clustering to segment 2D and 3D images
We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
Comment: 15 pages, 8 figures
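Variation of information, the accuracy measure advocated above, can be computed from the joint label distribution of two segmentations as VI = H(A) + H(B) - 2 I(A; B). A minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of information (in bits) between two label arrays
    of the same shape; 0 means identical segmentations."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Joint label distribution from the contingency of (a, b) pairs.
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    labels_a, ca = np.unique(a, return_counts=True)
    labels_b, cb = np.unique(b, return_counts=True)
    p_a, p_b = ca / n, cb / n
    h_a = -np.sum(p_a * np.log2(p_a))  # entropy of segmentation A
    h_b = -np.sum(p_b * np.log2(p_b))  # entropy of segmentation B
    pa_of = dict(zip(labels_a, p_a))
    pb_of = dict(zip(labels_b, p_b))
    # Mutual information between the two labelings.
    mi = sum(p * np.log2(p / (pa_of[x] * pb_of[y]))
             for (x, y), p in zip(pairs.T, p_ab))
    return h_a + h_b - 2 * mi
```

Identical segmentations give VI = 0; fully independent labelings give VI = H(A) + H(B).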
Guided Proofreading of Automatic Segmentations for Connectomics
Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially-erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
Comment: Supplemental material available at
http://rhoana.org/guidedproofreading/supplemental.pd
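The fully-automatic mode described above amounts to accepting only those candidate corrections whose classifier probability clears a threshold. A hypothetical sketch, where the candidate structure and classifier interface are illustrative assumptions, not the paper's API:

```python
def auto_proofread(candidates, classifier, threshold=0.95):
    """Accept candidate merge/split corrections whose predicted
    probability of being a true error exceeds the threshold.
    `classifier` maps a candidate to a probability in [0, 1]."""
    accepted = []
    for candidate in candidates:
        p = classifier(candidate)
        if p > threshold:
            accepted.append(candidate)  # apply this correction automatically
    return accepted
```

The threshold trades precision against recall: a high value applies only confident corrections, leaving ambiguous regions for interactive yes/no proofreading.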
Learned versus Hand-Designed Feature Representations for 3d Agglomeration
For image recognition and labeling tasks, recent results suggest that machine
learning methods that rely on manually specified feature representations may be
outperformed by methods that automatically derive feature representations based
on the data. Yet for problems that involve analysis of 3d objects, such as mesh
segmentation, shape retrieval, or neuron fragment agglomeration, there remains
a strong reliance on hand-designed feature descriptors. In this paper, we
evaluate a large set of hand-designed 3d feature descriptors alongside features
learned from the raw data using both end-to-end and unsupervised learning
techniques, in the context of agglomeration of 3d neuron fragments. By
combining unsupervised learning techniques with a novel dynamic pooling scheme,
we show how pure learning-based methods are for the first time competitive with
hand-designed 3d shape descriptors. We investigate data augmentation strategies
for dramatically increasing the size of the training set, and show how
combining both learned and hand-designed features leads to the highest
accuracy.
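The dynamic pooling mentioned above turns a variable-length set of learned filter responses into a fixed-size descriptor. A simplified stand-in based on sorting and quantile sampling, not the paper's exact scheme:

```python
import numpy as np

def dynamic_pool(responses, n_bins=10):
    """Pool a variable-length set of filter responses into a
    fixed-size vector by sorting them and sampling evenly spaced
    quantiles, so any input length maps to n_bins values."""
    r = np.sort(np.asarray(responses, dtype=float))
    qs = np.linspace(0, 100, n_bins)  # percentile positions
    return np.percentile(r, qs)
```

Because the output size is independent of the input size, descriptors pooled this way can be fed to a standard classifier regardless of fragment size.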
A Generalized Framework for Agglomerative Clustering of Signed Graphs applied to Instance Segmentation
We propose a novel theoretical framework that generalizes algorithms for
hierarchical agglomerative clustering to weighted graphs with both attractive
and repulsive interactions between the nodes. This framework defines GASP, a
Generalized Algorithm for Signed graph Partitioning, and allows us to explore
many combinations of different linkage criteria and cannot-link constraints. We
prove the equivalence of existing clustering methods to some of those
combinations, and introduce new algorithms for combinations which have not been
studied. An extensive comparison is performed to evaluate properties of the
clustering algorithms in the context of instance segmentation in images,
including robustness to noise and efficiency. We show how one of the new
algorithms proposed in our framework outperforms all previously known
agglomerative methods for signed graphs, both on the competitive CREMI 2016 EM
segmentation benchmark and on the Cityscapes dataset.
Comment: 19 pages, 8 figures, 6 tables
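The core loop of signed-graph agglomeration can be sketched for one linkage choice, absolute-maximum linkage with cannot-link constraints: repeatedly take the remaining edge of largest absolute weight, merging clusters on attraction (positive weight) and forbidding their merge on repulsion (negative weight). This is a simplified sketch that omits the inter-cluster linkage updates of the full GASP algorithm:

```python
import heapq

def signed_agglomeration(n_nodes, edges):
    """edges: list of (u, v, w) with signed weight w.
    Returns a cluster label per node."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    forbidden = set()  # cluster-root pairs that must not merge
    # Min-heap on -|w| pops the largest-magnitude edge first.
    heap = [(-abs(w), u, v, w) for u, v, w in edges]
    heapq.heapify(heap)
    while heap:
        _, u, v, w = heapq.heappop(heap)
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # already in the same cluster
        key = (min(ru, rv), max(ru, rv))
        if w < 0:
            forbidden.add(key)   # repulsive edge: cannot-link constraint
        elif key not in forbidden:
            parent[rv] = ru      # attractive edge: merge the clusters
    return [find(x) for x in range(n_nodes)]
```

On a three-node graph where 0 and 1 attract strongly, 1 and 2 repel, and 0 and 2 attract weakly, the repulsion is processed before the weak attraction and keeps node 2 separate.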
A benchmark for epithelial cell tracking
Segmentation and tracking of epithelial cells in light microscopy (LM) movies of developing tissue is a ubiquitous task in cell and developmental biology. Epithelial cells are densely packed cells that form a honeycomb-like grid. This dense packing distinguishes membrane-stained epithelial cells from the types of objects recent cell tracking benchmarks have focused on, like cell nuclei and freely moving individual cells. While semi-automated tools for segmentation and tracking of epithelial cells are available to biologists, common tools rely on classical watershed-based segmentation and engineered tracking heuristics, and entail a tedious phase of manual curation. However, a different kind of densely packed cell imagery has become a focus of recent computer vision research, namely electron microscopy (EM) images of neurons. In this work we explore the benefits of two recent neuron EM segmentation methods for epithelial cell tracking in light microscopy. In particular we adapt two different deep learning approaches for neuron segmentation, namely Flood Filling Networks and MALA, to epithelial cell tracking. We benchmark these on a dataset of eight movies with up to 200 frames. We compare to Moral Lineage Tracing, a combinatorial optimization approach that recently claimed state-of-the-art results for epithelial cell tracking. Furthermore, we compare to Tissue Analyzer, an off-the-shelf tool used by biologists, which serves as our baseline.