7 research outputs found

    A workflow for the automatic segmentation of organelles in electron microscopy image stacks.

    Electron microscopy (EM) facilitates analysis of the form, distribution, and functional status of key organelle systems in various pathological processes, including those associated with neurodegenerative disease. Such EM data often provide important new insights into the underlying disease mechanisms. The development of more accurate and efficient methods to quantify changes in subcellular microanatomy has already proven key to understanding the pathogenesis of Parkinson's and Alzheimer's diseases, as well as glaucoma. While our ability to acquire large volumes of 3D EM data is progressing rapidly, more advanced analysis tools are needed to assist in measuring precise three-dimensional morphologies of organelles within data sets that can include hundreds to thousands of whole cells. Although new imaging instrument throughputs can exceed teravoxels of data per day, image segmentation and analysis remain significant bottlenecks to achieving quantitative descriptions of whole cell structural organellomes. Here, we present a novel method for the automatic segmentation of organelles in 3D EM image stacks. Segmentations are generated using only 2D image information, making the method suitable for anisotropic imaging techniques such as serial block-face scanning electron microscopy (SBEM). Additionally, no assumptions about 3D organelle morphology are made, ensuring the method can be easily expanded to any number of structurally and functionally diverse organelles. Following the presentation of our algorithm, we validate its performance by assessing the segmentation accuracy of different organelle targets in an example SBEM dataset and demonstrate that it can be efficiently parallelized on supercomputing resources, resulting in a dramatic reduction in runtime.
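The per-slice design described above (2D-only segmentation, parallelized across the stack) can be sketched minimally. The thresholding stand-in, the thread-based pool, and all function names below are illustrative assumptions, not the authors' actual pipeline, which targets supercomputing resources.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_slice(img2d, thresh=0.5):
    """Toy stand-in for a 2D organelle segmenter: threshold to a binary mask."""
    return (img2d > thresh).astype(np.uint8)

def segment_stack(stack, workers=4):
    """Segment each z-slice independently and in parallel.

    Because segmentation uses only 2D information, slices are
    embarrassingly parallel: no inter-slice communication is needed.
    """
    with ThreadPoolExecutor(max_workers=workers) as ex:
        masks = list(ex.map(segment_slice, stack))
    return np.stack(masks, axis=0)

rng = np.random.default_rng(0)
stack = rng.random((8, 16, 16))   # synthetic anisotropic EM stack (z, y, x)
labels = segment_stack(stack)
```

On a cluster, the same map-over-slices pattern would be distributed across nodes rather than threads.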

    Iterative multi-path tracking for video and volume segmentation with sparse point supervision

    Recent machine learning strategies for segmentation tasks have shown great ability when trained on large pixel-wise annotated image datasets. It remains a major challenge, however, to aggregate such datasets, as the time and monetary cost associated with collecting extensive annotations is extremely high. This is particularly the case for generating precise pixel-wise annotations in video and volumetric image data. To this end, this work presents a novel framework to produce pixel-wise segmentations using minimal supervision. Our method relies on 2D point supervision, whereby a single 2D location within an object of interest is provided on each image of the data. Our method then estimates the object appearance by learning object-image-specific features and using these within a semi-supervised learning framework. Our object model is then used in a graph-based optimization problem that takes into account all provided locations and the image data in order to infer the complete pixel-wise segmentation. In practice, we solve this optimally as a tracking problem using a K-shortest path approach. Both the object model and segmentation are then refined iteratively to further improve the final segmentation. We show that by collecting 2D locations using a gaze tracker, our approach can provide state-of-the-art segmentations on a range of objects and image modalities (video and 3D volumes), and that these can then be used to train supervised machine learning classifiers.
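The tracking formulation can be illustrated on a toy trellis: one candidate set per frame, unary appearance costs, and transition costs between consecutive frames. The greedy extract-and-remove loop below is a simplified stand-in for the optimal K-shortest-path solver the abstract describes; all names and the cost structure are assumptions for illustration.

```python
import math

def shortest_path(costs, trans):
    """Dynamic-programming shortest path through a trellis.

    costs[t][i] = unary cost of candidate i in frame t (math.inf = removed);
    trans(i, j) = transition cost between candidates in consecutive frames.
    Returns (path, total cost), with one candidate index per frame.
    """
    T, n = len(costs), len(costs[0])
    dp = [list(costs[0])] + [[math.inf] * n for _ in range(T - 1)]
    back = [[0] * n for _ in range(T)]
    for t in range(1, T):
        for j in range(n):
            if costs[t][j] == math.inf:
                continue
            best, arg = math.inf, -1
            for i in range(n):
                c = dp[t - 1][i] + trans(i, j)
                if c < best:
                    best, arg = c, i
            dp[t][j] = best + costs[t][j]
            back[t][j] = arg
    j = min(range(n), key=lambda k: dp[T - 1][k])
    path = [j]
    for t in range(T - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    path.reverse()
    return path, min(dp[T - 1])

def k_disjoint_paths(costs, trans, k):
    """Greedy stand-in for K shortest node-disjoint paths: extract the
    best path, remove its nodes, and repeat (the paper solves this
    jointly and optimally)."""
    costs = [list(row) for row in costs]   # work on a copy
    paths = []
    for _ in range(k):
        p, c = shortest_path(costs, trans)
        if c == math.inf:
            break
        paths.append(p)
        for t, i in enumerate(p):
            costs[t][i] = math.inf
    return paths
```

For example, with two candidates per frame and a heavy switching penalty, the two extracted tracks stay on their own candidates across frames.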

    Doctor of Philosophy

    Scene labeling is the problem of assigning an object label to each pixel of a given image. It is the primary step towards image understanding and unifies object recognition and image segmentation in a single framework. A perfect scene labeling framework detects and densely labels every region and every object that exists in an image. This task is of substantial importance in a wide range of applications in computer vision. Contextual information plays an important role in scene labeling frameworks. A contextual model utilizes the relationships among the objects in a scene to facilitate object detection and image segmentation. Using contextual information in an effective way is one of the main questions that should be answered in any scene labeling framework. In this dissertation, we develop two scene labeling frameworks that rely heavily on contextual information to improve performance over state-of-the-art methods. The first model, called the multiclass multiscale contextual model (MCMS), uses contextual information from multiple objects and at different scales for learning discriminative models in a supervised setting. The MCMS model incorporates cross-object and inter-object information into one probabilistic framework, and thus is able to capture geometrical relationships and dependencies among multiple objects in addition to local information from each single object present in an image. The second model, called the contextual hierarchical model (CHM), learns contextual information in a hierarchy for scene labeling. At each level of the hierarchy, a classifier is trained based on downsampled input images and outputs of previous levels. The CHM then incorporates the resulting multiresolution contextual information into a classifier to segment the input image at original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy.
    We demonstrate the performance of CHM on challenging tasks such as outdoor scene labeling, edge detection in natural images, and membrane detection in electron microscopy images. We also introduce two novel classification methods. WNS-AdaBoost speeds up the training of AdaBoost by providing a compact representation of a training set. Disjunctive normal random forest (DNRF) is an ensemble method that is able to learn complex decision boundaries and achieves low generalization error by optimizing a single objective function for each weak classifier in the ensemble. Finally, a segmentation framework is introduced that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy images.
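The CHM idea (classify at successively coarser scales, then feed the upsampled coarse outputs back as context for finer-scale decisions) can be sketched as follows. The 2x2 pooling, the sigmoid stand-in classifier, and all function names are illustrative assumptions rather than the dissertation's actual trained models.

```python
import numpy as np

def downsample(img):
    """2x2 mean pooling (assumes even image dimensions)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x upsampling."""
    return np.kron(img, np.ones((2, 2)))

def toy_classifier(features):
    """Stand-in classifier: average the feature maps, squash with a sigmoid."""
    return 1 / (1 + np.exp(-(np.mean(features, axis=0) - 0.5) * 10))

def chm_infer(img, levels=3):
    """Contextual-hierarchical-style inference (illustrative only):
    build an image pyramid, classify the coarsest level first, then at
    each finer level stack the image with the upsampled coarser output
    as a contextual feature."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    context = None
    for lvl in reversed(range(levels)):
        feats = [pyramid[lvl]]
        if context is not None:
            feats.append(upsample(context))
        context = toy_classifier(np.stack(feats))
    return context   # full-resolution probability map

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0            # a bright square to "label"
prob = chm_infer(img)
```

The coarse-to-fine pass is what gives each pixel access to context far beyond its local neighbourhood.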

    Segmentation of mitochondria in electron microscopy images using algebraic curves


    A model-based method for 3D reconstruction of cerebellar parallel fibres from high-resolution electron microscope images

    In order to understand how the brain works, we need to understand how its neural circuits process information. Electron microscopy remains the only imaging technique capable of providing sufficient resolution to reconstruct the dense connectivity between all neurons in a circuit. Automated electron microscopy techniques are approaching the point where usefully large circuits might be successfully imaged, but the development of automated reconstruction techniques lags far behind. No fully-automated reconstruction technique currently produces acceptably accurate reconstructions, and semi-automated approaches currently require an extreme amount of manual effort. This reconstruction bottleneck places severe limits on the size of neural circuits that can be reconstructed. Improved automated reconstruction techniques are therefore highly desired and under active development. The human brain contains ~86 billion neurons and ~80% of these are located in the cerebellum. Of these cerebellar neurons, the vast majority are granule cells. The axons of these granule cells are called parallel fibres and tend to be oriented in approximately the same direction, making 2+1D reconstruction approaches feasible. 
    In this work we focus on the problem of reconstructing these parallel fibres and make four main contributions: (1) a model-based algorithm for reconstructing 2D parallel fibre cross-sections that achieves state-of-the-art 2D reconstruction performance; (2) a fully-automated algorithm for reconstructing 3D parallel fibres that achieves state-of-the-art 3D reconstruction performance; (3) a semi-automated approach for reconstructing 3D parallel fibres that significantly improves reconstruction accuracy compared to our fully-automated approach while requiring ~40 times less labelling effort than a purely manual reconstruction; (4) a "gold standard" ground truth data set for the molecular layer of the mouse cerebellum that will provide a valuable reference for the development and benchmarking of reconstruction algorithms.
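The 2+1D setting (fibres roughly aligned with one axis) makes slice-to-slice linking natural: detect 2D cross-sections in each slice, then chain them across slices into 3D fibres. The greedy nearest-centroid linker below is an illustrative assumption, far simpler than the thesis's model-based algorithms.

```python
import numpy as np

def link_fibres(slices, max_dist=2.0):
    """Greedy 2+1D linking (illustrative): each slice holds (y, x)
    centroids of detected 2D cross-sections. Link each open fibre to the
    nearest unclaimed centroid in the next slice if it is within
    max_dist; unmatched centroids start new fibres."""
    fibres = [[(0, c)] for c in slices[0]]   # fibre = list of (z, centroid)
    open_fibres = list(range(len(fibres)))
    for z in range(1, len(slices)):
        cands = list(slices[z])
        next_open = []
        for fi in open_fibres:
            _, prev = fibres[fi][-1]
            if not cands:
                continue
            d = [np.hypot(prev[0] - c[0], prev[1] - c[1]) for c in cands]
            j = int(np.argmin(d))
            if d[j] <= max_dist:
                fibres[fi].append((z, cands.pop(j)))
                next_open.append(fi)
        for c in cands:                       # leftovers begin new fibres
            fibres.append([(z, c)])
            next_open.append(len(fibres) - 1)
        open_fibres = next_open
    return fibres

# Two fibres drifting gently across three slices.
slices = [[(0, 0), (5, 5)], [(0.5, 0), (5, 5.5)], [(1, 0), (5, 6)]]
fibres = link_fibres(slices)
```

A model-based approach would replace both the centroid detector and the nearest-neighbour rule with learned shape and appearance models.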