25 research outputs found

    Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction

    We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-Net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: first, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm; second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple, learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-section EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of about 2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
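The agglomeration step above can be illustrated as threshold-based merging over an affinity graph. The sketch below is a minimal stand-in, assuming a flat edge list with precomputed affinities and a single merge threshold; the paper's percentile-based agglomeration is more elaborate than this.

```python
import numpy as np

def find(parent, x):
    # Path-compressing find for the union-find structure.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def threshold_agglomeration(affinities, edges, threshold):
    """Merge nodes connected by edges whose affinity exceeds `threshold`.

    `affinities` is a 1D array aligned with `edges`, a list of (u, v)
    node-index pairs. Returns one label per node. This is a simplified
    stand-in for the agglomeration described in the abstract, not the
    authors' implementation.
    """
    n = 1 + max(max(u, v) for u, v in edges)
    parent = list(range(n))
    # Process highest-affinity edges first, merging while above threshold.
    order = np.argsort(-affinities)
    for i in order:
        if affinities[i] < threshold:
            break
        u, v = edges[i]
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[rv] = ru
    return [find(parent, x) for x in range(n)]
```

With edges `[(0, 1), (1, 2), (2, 3)]`, affinities `[0.9, 0.2, 0.8]`, and threshold `0.5`, nodes 0/1 and 2/3 are merged while the low-affinity edge between 1 and 2 keeps the two segments separate.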

    Computational methods in Connectomics


    Doctor of Philosophy in Computing

    Image segmentation is the problem of partitioning an image into disjoint segments that are perceptually or semantically homogeneous. As one of the most fundamental computer vision problems, image segmentation is used as a primary step for high-level vision tasks, such as object recognition and image understanding, and has even wider applications in interdisciplinary areas, such as longitudinal brain image analysis. Hierarchical models have gained popularity as a key component in image segmentation frameworks. By imposing structure, a hierarchical model can efficiently utilize features from larger image regions and make optimal inference for final segmentation feasible. We develop a hierarchical merge tree (HMT) model for image segmentation. Motivated by the application to large-scale segmentation of neuronal structures in electron microscopy (EM) images, our model provides a compact representation of region-merging hypotheses and utilizes higher-order information for efficient segmentation inference. Taking advantage of supervised learning, our model is free from parameter tuning and outperforms previous state-of-the-art methods on both two-dimensional (2D) and three-dimensional EM image data sets without any change. We also extend HMT to the hierarchical merge forest (HMF) model. By identifying region correspondences, HMF utilizes inter-section information to correct intra-section errors and improves 2D EM segmentation accuracy. HMT is a generic segmentation model. We demonstrate this by applying it to natural image segmentation problems. We propose a constrained conditional model formulation with a globally optimal inference algorithm for HMT and an iterative merge tree sampling algorithm that significantly improves its performance. Experimental results show our approach achieves state-of-the-art accuracy for object-independent image segmentation.
Finally, we propose a semi-supervised HMT (SSHMT) model to reduce the high demand of supervised learning for labeled data. We introduce a differentiable unsupervised loss term that enforces consistent boundary predictions and develop a Bayesian learning model that combines supervised and unsupervised information. We show that with a very small amount of labeled data, SSHMT consistently performs close to the supervised HMT with full labeled data sets and significantly outperforms HMT trained with the same labeled subsets.
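The merge-tree construction underlying the HMT model can be loosely sketched as greedy pairwise merging of an initial oversegmentation. The `merge_score` interface below is a hypothetical stand-in for the dissertation's learned boundary classifier; only the tree-building idea is illustrated.

```python
def build_merge_tree(regions, merge_score):
    """Greedily build a binary merge tree over initial regions.

    `regions` is a list of integer leaf-region ids; `merge_score(a, b)`
    returns a scalar preference for merging regions a and b (a hypothetical
    interface, standing in for a learned boundary classifier). Each internal
    node records its two children, yielding the tree of region-merging
    hypotheses over which segmentation inference is performed.
    """
    nodes = {r: None for r in regions}   # node id -> (child_a, child_b)
    active = set(regions)
    next_id = max(regions) + 1
    while len(active) > 1:
        # Pick the highest-scoring pair among currently active regions.
        a, b = max(
            ((x, y) for x in active for y in active if x < y),
            key=lambda p: merge_score(*p),
        )
        nodes[next_id] = (a, b)
        active -= {a, b}
        active.add(next_id)
        next_id += 1
    return nodes
```

Inference over such a tree then amounts to selecting a consistent set of nodes (one per image location), which is what the constrained conditional model formulation mentioned above makes globally optimal.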

    Accurate and versatile 3D segmentation of plant tissues at cellular resolution

    Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
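The two-stage idea above, boundary prediction followed by partitioning, can be illustrated with a toy example. Here, connected-component labeling of sub-threshold pixels stands in for the partitioning step; PlantSeg itself applies more sophisticated graph-partitioning algorithms to the network's boundary predictions.

```python
import numpy as np
from collections import deque

def segment_from_boundaries(boundary_prob, threshold=0.5):
    """Toy cell segmentation from a 2D boundary-probability map.

    Pixels with boundary probability below `threshold` are treated as cell
    interior and grouped into 4-connected components. This is a simplified
    stand-in for the graph-partitioning step, not PlantSeg's actual method.
    """
    interior = boundary_prob < threshold
    labels = np.zeros(interior.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(interior)):
        if labels[start]:
            continue
        current += 1                      # start a new cell label
        queue = deque([start])
        labels[start] = current
        while queue:                      # flood-fill the component
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < interior.shape[0]
                        and 0 <= nx < interior.shape[1]
                        and interior[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels
```

A boundary map with a single high-probability ridge down the middle yields two cell labels, one on each side, with the ridge itself left unlabeled.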

    Isotropic Reconstruction of Neural Morphology from Large Non-Isotropic 3D Electron Microscopy

    Neuroscientists are increasingly convinced that it is necessary to reconstruct the precise wiring and synaptic connectivity of biological nervous systems to eventually decipher their function. The urge to reconstruct ever larger and more complete synaptic wiring diagrams of animal brains has created an entire new subfield of neuroscience: connectomics. The reconstruction of connectomes is difficult because neurons are both large and small: they project across distances of many millimeters, but each individual neurite can be as thin as a few tens of nanometers. In order to reconstruct all neurites in densely packed neural tissue, it is necessary to image this tissue at nanometer resolution, which, today, is only possible with 3D electron microscopy (3D-EM). Over the last decade, 3D-EM has become significantly more reliable than ever before. Today, it is possible to routinely image volumes of up to a cubic millimeter, covering the entire brain of small model organisms such as the fruit fly Drosophila melanogaster. These volumes contain tens or hundreds of tera-voxels and cannot be analyzed manually. Efficient computational methods and tools are needed for all stages of connectome reconstruction: (1) assembling distortion- and artifact-free volumes from serial-section EM, (2) precise automatic reconstruction of neurons and synapses, and (3) efficient and user-friendly solutions for visualization and interactive proofreading. In this dissertation, I present new computational methods and tools that I developed to address previously unsolved problems covering all of the above-mentioned aspects of EM connectomics. In chapter 2, I present a new method to correct for planar and non-planar axial distortion and to sort unordered section series. This method was instrumental for the first-ever acquisition of a complete brain of an adult Drosophila melanogaster imaged with 3D-EM.
Machine learning, in particular deep learning, and the availability of public training and test data have had tremendous impact on the automatic reconstruction of neurons and synapses from 3D-EM. In chapter 3, I present a novel artificial neural network architecture that predicts neuron boundaries at quasi-isotropic resolution from non-isotropic 3D-EM. The goal is to create a high-quality oversegmentation with large three-dimensional fragments for faster manual proofreading. In chapter 4, I present software libraries and tools that I developed to support the processing, visualization, and analysis of large 3D-EM data and connectome reconstructions. Using this software, we generated the largest currently existing training and test data for connectome reconstruction from non-isotropic 3D-EM. I will particularly emphasize my flexible interactive proofreading tool Paintera that I built on top of the libraries and tools that I have developed over the last four years.
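The problem of sorting an unordered section series, mentioned for chapter 2, can be loosely illustrated by greedily chaining sections according to pairwise image similarity. This toy heuristic is an assumption for illustration only; the dissertation's actual method is more robust than a greedy nearest-neighbor ordering.

```python
import numpy as np

def greedy_section_order(similarity):
    """Order sections greedily by pairwise similarity.

    `similarity[i, j]` scores how alike sections i and j are (e.g. a
    cross-correlation of downsampled images; the metric is a hypothetical
    choice here). Starting from section 0, repeatedly append the unused
    section most similar to the last one placed.
    """
    n = similarity.shape[0]
    order = [0]
    unused = set(range(1, n))
    while unused:
        last = order[-1]
        nxt = max(unused, key=lambda j: similarity[last, j])
        order.append(nxt)
        unused.remove(nxt)
    return order
```

Because physically adjacent sections look most alike, a good similarity metric makes the recovered chain approximate the original cutting order.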

    Correlative light and electron microscopy: new strategies for improved throughput and targeting precision

    The need for quantitative analysis is crucial when studying fundamental mechanisms in cell biology. Common assays consist of interfering with a system via protein knockdowns or drug treatments. These very often lead to important response variability that is generally addressed by analyzing large populations. Whilst the imaging throughput in light microscopy (LM) is high enough for such large screens, electron microscopy (EM) still lags behind and is not adapted to collecting large amounts of data from highly heterogeneous cell populations. Nevertheless, EM is the only technique that offers high-resolution imaging of the entire subcellular context. Correlative light and electron microscopy (CLEM) has made it possible to look at rare events or to address heterogeneous populations. Our goal is to develop new strategies in CLEM. More specifically, we aim to automate the processes of screening large cell populations (living cells or pre-fixed), identifying the sub-populations of interest by LM, targeting these by EM, and measuring the key components of the subcellular organization. New 3D-EM techniques like focused ion beam scanning electron microscopy (FIB-SEM) enable a high degree of automation for the acquisition of high-resolution, full-cell datasets. So far, this has only been applied to individual target volumes, often isotropic, and has not been designed to acquire multiple regions of interest. The ability to acquire full cells with up to 5 nm × 5 nm × 5 nm voxel size (x, y referring to pixel size, z referring to slice thickness) leads to the accumulation of large datasets. Their analysis involves tedious manual segmentation or automated segmentation algorithms that are not yet well established. To enable the analysis and quantification of an extensive amount of data, we decided to explore the potential of stereology protocols in combination with automated acquisition in the FIB-SEM.
Instead of isotropic datasets, a few evenly spaced sections are used to quantify subcellular structures. Our strategy therefore combines CLEM, 3D-EM, and stereology to collect and analyze large numbers of cells selected based on their phenotype as visible by fluorescence microscopy. We demonstrate the power of the approach in a systematic screen of Golgi apparatus morphology upon alteration of the expression of 10 proteins, plus negative and positive controls. In parallel to this core project, we demonstrate the power of combining correlative approaches with 3D-EM for the detailed structural analysis of fundamental cell biology events during cell division and also for the understanding of complex physiological transitions in a multicellular model organism.
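The stereological idea of quantifying structures from a few evenly spaced sections rests on standard estimators. The sketch below shows the classical Cavalieri volume estimate and a point-counting area estimate; function names and parameters are illustrative, not taken from the abstract.

```python
def cavalieri_volume(section_areas, spacing):
    """Cavalieri estimator: total volume from evenly spaced sections.

    Multiplies the sum of measured cross-sectional areas by the section
    spacing. This is the standard stereological estimate underlying the
    strategy of measuring a few evenly spaced FIB-SEM sections instead
    of a full isotropic volume.
    """
    return spacing * sum(section_areas)

def area_by_point_counting(hits, points_total, frame_area):
    """Estimate the area of a structure on one section by point counting.

    `hits` of `points_total` regularly spaced grid points fall on the
    structure inside a counting frame of area `frame_area` (all values
    hypothetical); the structure's area is the frame area scaled by the
    hit fraction.
    """
    return frame_area * hits / points_total
```

For example, three sections with areas 2.0, 3.0, and 1.0 µm² at 0.5 µm spacing give an estimated volume of 3.0 µm³; the section areas themselves can come from point counting rather than full segmentation, which is what makes the protocol fast.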