Guided Proofreading of Automatic Segmentations for Connectomics
Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially-erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces the variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
Comment: Supplemental material available at
http://rhoana.org/guidedproofreading/supplemental.pd
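The 7.5x figure above is measured in variation of information (VI), an information-theoretic distance between two labelings. A minimal sketch of computing VI over flat integer label arrays (function and variable names are ours, not from the paper):

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of information between two labelings:
    VI = H(A|B) + H(B|A) = H(A) + H(B) - 2 I(A;B)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Joint label distribution via the unique (a, b) pairs.
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    # Marginal distributions.
    labels_a, ca = np.unique(a, return_counts=True)
    labels_b, cb = np.unique(b, return_counts=True)
    p_a, p_b = ca / n, cb / n
    h_a = -np.sum(p_a * np.log2(p_a))
    h_b = -np.sum(p_b * np.log2(p_b))
    # Mutual information I(A;B), indexing marginals by label position.
    ia = np.searchsorted(labels_a, pairs[0])
    ib = np.searchsorted(labels_b, pairs[1])
    mi = np.sum(p_ab * np.log2(p_ab / (p_a[ia] * p_b[ib])))
    return h_a + h_b - 2.0 * mi
```

Identical segmentations (up to relabeling) give VI = 0; merge and split errors each raise it, which is why it suits proofreading evaluation.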
Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
Automated sample preparation and electron microscopy enables acquisition of
very large image data sets. These technical advances are of special importance
to the field of neuroanatomy, as 3D reconstructions of neuronal processes at
the nm scale can provide new insight into the fine grained structure of the
brain. Segmentation of large-scale electron microscopy data is the main
bottleneck in the analysis of these data sets. In this paper we present a
pipeline that provides state-of-the-art reconstruction performance while
scaling to data sets in the GB-TB range. First, we train a random forest
classifier on interactive sparse user annotations. The classifier output is
combined with an anisotropic smoothing prior in a Conditional Random Field
framework to generate multiple segmentation hypotheses per image. These
segmentations are then combined into geometrically consistent 3D objects by
segmentation fusion. We provide qualitative and quantitative evaluation of the
automatic segmentation and demonstrate large-scale 3D reconstructions of
neuronal processes from a volume of brain tissue over a cube of in each
dimension, corresponding to 1000 consecutive image sections. We also
introduce Mojo, a proofreading tool that includes semi-automated correction
of merge errors based on sparse user scribbles.
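The fusion step combines per-section 2D segmentations into geometrically consistent 3D objects. As a toy illustration of the underlying idea, not the paper's fusion algorithm, segments in consecutive sections can be linked greedily by maximal pixel overlap:

```python
import numpy as np

def link_sections(sec_a, sec_b):
    """Link each label in section A to the label in section B it overlaps
    most (0 = background). A toy stand-in for segmentation fusion."""
    links = {}
    for label in np.unique(sec_a):
        if label == 0:
            continue
        overlap = sec_b[sec_a == label]
        overlap = overlap[overlap != 0]
        if overlap.size:
            values, counts = np.unique(overlap, return_counts=True)
            links[int(label)] = int(values[np.argmax(counts)])
    return links
```

Chaining such links across all sections yields candidate 3D objects; the paper's fusion instead optimizes consistency jointly over multiple segmentation hypotheses per image.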
Probabilistic image registration and anomaly detection by nonlinear warping
Automatic, defect-tolerant registration of transmission electron microscopy (TEM) images poses an important and challenging problem for biomedical image analysis, e.g. in computational neuroanatomy. In this paper we demonstrate a fully automatic stitching and distortion correction method for TEM images and propose a probabilistic approach to image registration that implicitly detects image defects due to sample preparation and image acquisition. The approach uses a polynomial kernel expansion to estimate a non-linear image transformation based on intensities and spatial features. Corresponding points in the images are not determined beforehand; instead, they are estimated via an EM algorithm during the registration process, which is preferable in the case of (noisy) TEM images. Our registration model is successfully applied to two large stacks of serial-section TEM images acquired from brain tissue samples in a computational neuroanatomy project and shows significant improvement over existing image registration methods on these large datasets.
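The polynomial kernel expansion can be sketched in isolation: given point correspondences (which the paper estimates with an EM algorithm rather than fixing beforehand), a quadratic 2D warp is a linear least-squares fit in the expanded features. Names below are illustrative:

```python
import numpy as np

def poly_features(pts):
    """Quadratic polynomial expansion of 2D points: [1, x, y, x^2, xy, y^2]."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_poly_warp(src, dst):
    """Least-squares coefficients mapping src points to dst points."""
    coef, *_ = np.linalg.lstsq(poly_features(src), dst, rcond=None)
    return coef

def apply_poly_warp(coef, pts):
    """Warp points with previously fitted coefficients."""
    return poly_features(pts) @ coef
```

Because affine maps lie inside the quadratic family, fitting a warp to affinely transformed points recovers them exactly; the EM step in the paper alternates such fits with re-estimating which points correspond.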
Trainable_Segmentation: Release v3.1.2
Major changes:
Fix Java version problem with release 3.1.1.
Increase maximum number of classes to 100 (release 3.1.1).
Add method to enable/disable a feature by name (release 3.1.1).
Add library methods to add training instances from a label image (release 3.1.1).
Much code cleanup by Mohamed Ezzat from DevFactory (release 3.1.1).
Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification
Summary
State-of-the-art light and electron microscopes are capable of acquiring large image datasets, but quantitatively evaluating the data often involves manually annotating structures of interest. This process is time-consuming and often a major bottleneck in the evaluation pipeline. To overcome this problem, we have introduced the Trainable Weka Segmentation (TWS), a machine learning tool that leverages a limited number of manual annotations in order to train a classifier and segment the remaining data automatically. In addition, TWS can provide unsupervised segmentation learning schemes (clustering) and can be customized to employ user-designed image features or classifiers.
Availability and Implementation
TWS is distributed as open-source software as part of the Fiji image processing distribution of ImageJ at http://imagej.net/Trainable_Weka_Segmentation.
Supplementary information
Supplementary data are available at Bioinformatics online.
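TWS itself is Java and lives inside Fiji, but the core idea, a per-pixel feature stack built from a handful of annotated pixels feeding a classifier, can be sketched in Python. The Gaussian blurs below stand in for a tiny subset of the TWS feature set, and a nearest-centroid classifier replaces its default random forest; all names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_stack(image, sigmas=(1.0, 2.0, 4.0)):
    """Per-pixel features: raw intensity plus Gaussian blurs at several scales."""
    feats = [image] + [gaussian_filter(image, s) for s in sigmas]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_and_segment(image, sparse_labels):
    """Classify every pixel from sparse annotations (0 = unlabeled)
    using the nearest class centroid in feature space."""
    X = feature_stack(image)
    y = sparse_labels.ravel()
    classes = np.unique(y[y > 0])
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)].reshape(image.shape)
```

A few labeled pixels per class suffice to segment the whole image, which is exactly the annotation economy the tool is built around.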
Scalable Interactive Visualization for Connectomics
Connectomics has recently begun to image brain tissue at nanometer resolution, which produces petabytes of data. This data must be aligned, labeled, proofread, and formed into graphs, and each step of this process requires visualization for human verification. To this end, we present the BUTTERFLY middleware, a scalable platform that can handle massive data for interactive visualization in connectomics. Our platform outputs image and geometry data suitable for hardware-accelerated rendering, and abstracts low-level data wrangling to enable faster development of new visualizations. We demonstrate scalability and extensibility with a series of open-source Web-based applications for every step of the typical connectomics workflow: data management and storage, informative queries, 2D and 3D visualizations, interactive editing, and graph-based analysis. We report design choices for all developed applications and describe typical scenarios of isolated and combined use in everyday connectomics research. In addition, we measure and optimize rendering throughput, from storage to display, in quantitative experiments. Finally, we share insights, experiences, and recommendations for creating an open-source data management and interactive visualization platform for connectomics.
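Interactive visualization at this scale hinges on tiling: the client requests only the fixed-size tiles its viewport intersects at the current mip level. A generic sketch of that index computation (the tile scheme and names are assumptions, not Butterfly's actual API):

```python
def tiles_for_viewport(x0, y0, x1, y1, level, tile_size=512):
    """Tile (col, row) indices covering the half-open viewport
    [x0, x1) x [y0, y1) given in full-resolution coordinates,
    where each mip level halves the resolution."""
    span = tile_size * (2 ** level)   # full-res pixels covered by one tile
    c0, r0 = x0 // span, y0 // span
    c1, r1 = (x1 - 1) // span, (y1 - 1) // span
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

Zooming out by one level quarters the number of tiles fetched for the same screen area, which keeps the request count roughly constant across scales.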