13 research outputs found

    Guided Proofreading of Automatic Segmentations for Connectomics

    Full text link
    Automatic cell image segmentation methods in connectomics produce merge and split errors, which require correction through proofreading. Previous research has identified the visual search for these errors as the bottleneck in interactive proofreading. To aid error correction, we develop two classifiers that automatically recommend candidate merges and splits to the user. These classifiers use a convolutional neural network (CNN) trained on errors in automatic segmentations against expert-labeled ground truth. Our classifiers detect potentially erroneous regions by considering a large context region around a segmentation boundary. Corrections can then be performed by the user with simple yes/no decisions, which reduces variation of information 7.5x faster than previous proofreading methods. We also present a fully automatic mode that uses a probability threshold to make merge/split decisions. Extensive experiments using the automatic approach and comparing the performance of novice and expert users demonstrate that our method performs favorably against state-of-the-art proofreading methods on different connectomics datasets. Comment: Supplemental material available at http://rhoana.org/guidedproofreading/supplemental.pd
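    The abstract evaluates proofreading quality via variation of information (VI), a standard clustering-comparison metric between two segmentations. As a minimal sketch of how VI is typically computed from a joint label histogram (this is the standard definition, not code from the paper):

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A), in bits; 0 means identical segmentations."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    # joint contingency table over label pairs
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    joint = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(joint, (ia, ib), 1.0)
    p = joint / a.size                      # joint label probabilities
    pa = p.sum(axis=1, keepdims=True)       # marginal of A
    pb = p.sum(axis=0, keepdims=True)       # marginal of B
    nz = p > 0                              # avoid log(0)
    h_a_given_b = -np.sum(p[nz] * np.log2((p / pb)[nz]))
    h_b_given_a = -np.sum(p[nz] * np.log2((p / pa)[nz]))
    return h_a_given_b + h_b_given_a
```

    A lower VI after a sequence of yes/no corrections indicates the proofread segmentation has moved closer to the ground truth.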

    Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images

    Full text link
    Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 μm³ volume of brain tissue, a cube of 30 μm in each dimension corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.
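    The first pipeline stage — a random forest trained on sparse pixel annotations — can be sketched as follows. This is a minimal illustration with made-up synthetic data and generic intensity/smoothing features, not the paper's actual feature set or implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# synthetic EM-like image: a dark "membrane" stripe on a brighter background
img = rng.normal(0.7, 0.05, (64, 64))
img[:, 30:34] = rng.normal(0.2, 0.05, (64, 4))

# per-pixel features: raw intensity plus two Gaussian smoothings
feats = np.stack([img, gaussian_filter(img, 1), gaussian_filter(img, 3)], axis=-1)
X = feats.reshape(-1, 3)

# sparse annotations: a handful of labeled pixels (1 = membrane, 0 = background)
labels = np.full(img.shape, -1)
labels[10, 31] = labels[40, 32] = 1
labels[10, 5] = labels[40, 55] = 0
mask = labels.ravel() >= 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[mask], labels.ravel()[mask])
prob = clf.predict_proba(X)[:, 1].reshape(img.shape)  # membrane probability map
```

    In the full pipeline, such a probability map would then feed into the CRF smoothing and segmentation-fusion stages rather than being thresholded directly.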

    Probabilistic image registration and anomaly detection by nonlinear warping

    No full text
    Automatic, defect-tolerant registration of transmission electron microscopy (TEM) images poses an important and challenging problem for biomedical image analysis, e.g., in computational neuroanatomy. In this paper we demonstrate a fully automatic stitching and distortion correction method for TEM images and propose a probabilistic approach to image registration that implicitly detects image defects due to sample preparation and image acquisition. The approach uses a polynomial kernel expansion to estimate a non-linear image transformation based on intensities and spatial features. Corresponding points in the images are not determined beforehand; instead, they are estimated via an EM algorithm during the registration process, which is preferable in the case of (noisy) TEM images. Our registration model is successfully applied to two large image stacks of serial section TEM images acquired from brain tissue samples in a computational neuroanatomy project and shows significant improvement over existing image registration methods on these large datasets.
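    The core idea of fitting a polynomial (non-linear) transformation can be illustrated with a least-squares sketch. Note the simplification: the paper estimates correspondences jointly via EM, whereas this hypothetical snippet assumes point correspondences are already known:

```python
import numpy as np

def fit_poly_warp(src, dst, degree=2):
    """Least-squares fit of a 2-D polynomial transform mapping src -> dst.
    src, dst: (N, 2) arrays of corresponding points; returns a warp function."""
    def basis(p):
        # monomials x^i * y^j up to the given total degree
        return np.stack([p[:, 0]**i * p[:, 1]**j
                         for i in range(degree + 1)
                         for j in range(degree + 1 - i)], axis=1)
    coef, *_ = np.linalg.lstsq(basis(src), dst, rcond=None)
    return lambda p: basis(p) @ coef
```

    An affine distortion is a special case of the degree-2 basis, so the fit recovers it exactly; higher degrees capture the smooth non-linear distortions typical of TEM acquisition.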

    Trainable_Segmentation: Release v3.1.2

    No full text
    Major changes: Fix Java version problem with release 3.1.1. Increase maximum number of classes to 100 (release 3.1.1). Add method to enable/disable a feature by name (release 3.1.1). Add library methods to add training instances from a label image (release 3.1.1). Much code cleanup by Mohamed Ezzat from DevFactory (release 3.1.1).

    Scalable Interactive Visualization for Connectomics

    No full text
    Connectomics has recently begun to image brain tissue at nanometer resolution, which produces petabytes of data. This data must be aligned, labeled, proofread, and formed into graphs, and each step of this process requires visualization for human verification. As such, we present the BUTTERFLY middleware, a scalable platform that can handle massive data for interactive visualization in connectomics. Our platform outputs image and geometry data suitable for hardware-accelerated rendering, and abstracts low-level data wrangling to enable faster development of new visualizations. We demonstrate scalability and extensibility with a series of open source Web-based applications for every step of the typical connectomics workflow: data management and storage, informative queries, 2D and 3D visualizations, interactive editing, and graph-based analysis. We report design choices for all developed applications and describe typical scenarios of isolated and combined use in everyday connectomics research. In addition, we measure and optimize rendering throughput, from storage to display, in quantitative experiments. Finally, we share insights, experiences, and recommendations for creating an open source data management and interactive visualization platform for connectomics.
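    Serving petabyte-scale imagery interactively typically relies on fixed-size tiling, so a viewer only fetches the tiles intersecting the current viewport. The helper below is a generic illustration of that pattern, not BUTTERFLY's actual API:

```python
def tiles_for_viewport(x, y, w, h, tile=512):
    """Return (col, row) indices of the fixed-size tiles covering an
    axis-aligned viewport at pixel offset (x, y) with size (w, h)."""
    c0, r0 = x // tile, y // tile
    c1, r1 = (x + w - 1) // tile, (y + h - 1) // tile
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

    A middleware layer can then answer each tile request from cache or storage independently, which is what makes throughput from storage to display measurable and optimizable per tile.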