
    Cell Segmentation in 3D Confocal Images using Supervoxel Merge-Forests with CNN-based Hypothesis Selection

    Automated segmentation approaches are crucial for the quantitative analysis of large-scale 3D microscopy images. Particularly in deep tissue regions, automatic methods still fail to provide error-free segmentations. To improve segmentation quality throughout imaged samples, we present a new supervoxel-based 3D segmentation approach that outperforms current methods and reduces the manual correction effort. The algorithm consists of gentle preprocessing and a conservative supervoxel generation method, followed by supervoxel agglomeration based on local signal properties and a postprocessing step that fixes under-segmentation errors using a convolutional neural network. We validate the algorithm on manually labeled 3D confocal images of the plant Arabidopsis thaliana and compare the results to a state-of-the-art meristem segmentation algorithm. Comment: 5 pages, 3 figures, 1 table.
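
    A minimal sketch of such a pipeline, assuming a single-channel 3D stack `volume` as a NumPy array; here SLIC stands in for the paper's conservative supervoxel generator, mean-intensity RAG merging stands in for its signal-property-based agglomeration, and the CNN hypothesis-selection step is only stubbed out:

```python
import numpy as np
from skimage import filters, segmentation, graph

def segment_stack(volume: np.ndarray) -> np.ndarray:
    """Supervoxel-merge sketch; not the authors' implementation."""
    smoothed = filters.gaussian(volume, sigma=1.0)       # gentle preprocessing
    supervoxels = segmentation.slic(                     # conservative over-segmentation
        smoothed, n_segments=5000, compactness=0.1, channel_axis=None
    )
    rag = graph.rag_mean_color(smoothed, supervoxels)    # edges weighted by local signal
    merged = graph.cut_threshold(supervoxels, rag, thresh=0.02)
    # A trained CNN would now score split hypotheses for each merged region
    # to repair under-segmentation errors; that model is omitted here.
    return merged
```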

    Accurate 3D Cell Segmentation using Deep Feature and CRF Refinement

    We consider the problem of accurately identifying cell boundaries and labeling individual cells in confocal microscopy images, specifically 3D image stacks of cells with tagged cell membranes. Precise identification of cell boundaries and their shapes, and quantification of inter-cellular space, lead to a better understanding of cell morphogenesis. Towards this, we outline a cell segmentation method that uses a deep neural network architecture to extract a confidence map of cell boundaries, followed by a 3D watershed algorithm and a final refinement using a conditional random field. In addition to improving the accuracy of segmentation compared to other state-of-the-art methods, the proposed approach also generalizes well to different datasets without the need to retrain the network for each dataset. Detailed experimental results are provided, and the source code is available on GitHub. Comment: 5 pages, 5 figures, 3 tables.
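
    A minimal sketch of the watershed stage, assuming `boundary_prob` is a per-voxel boundary-confidence map in [0, 1] already produced by a trained network (not shown); the CRF refinement is summarized in a comment, since it depends on the chosen CRF library:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def labels_from_boundaries(boundary_prob: np.ndarray) -> np.ndarray:
    interior = boundary_prob < 0.5                    # voxels likely inside cells
    distance = ndi.distance_transform_edt(interior)   # distance to nearest boundary
    peaks = peak_local_max(distance, min_distance=5, labels=interior.astype(int))
    markers = np.zeros(boundary_prob.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(boundary_prob, markers, mask=interior)
    # A conditional random field (unaries from the network, pairwise
    # intensity terms) would refine `labels` here.
    return labels
```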

    SEGMENT3D: A Web-Based Application for Collaborative Segmentation of 3D Images Used in the Shoot Apical Meristem

    The quantitative analysis of 3D confocal microscopy images of the shoot apical meristem helps in understanding the growth process of some plants. Cell segmentation in these images is crucial for computational plant analysis, and many automated methods have been proposed. However, variations in signal intensity across the image limit the effectiveness of those approaches, with no easy way for users to correct the results. We propose a web-based collaborative 3D image segmentation application, SEGMENT3D, to refine automatic segmentation results. The image is divided into 3D tiles that can either be segmented interactively from scratch or corrected from a pre-existing segmentation. Individual segmentation results per tile are then automatically merged via consensus analysis and stitched to complete the segmentation of the entire image stack. SEGMENT3D is a comprehensive application that can also be applied to other 3D imaging modalities and general objects, and it provides an easy way to create supervised data for advancing machine-learning-based segmentation models.
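
    One plausible reading of the per-tile consensus step, assuming each annotator's result for the same tile is a 3D label volume; because label IDs differ between annotators, this sketch votes on the induced cell-boundary masks rather than on raw IDs, and the actual consensus analysis and stitching in SEGMENT3D may differ:

```python
import numpy as np
from skimage.segmentation import find_boundaries

def consensus_boundaries(tile_labelings: list) -> np.ndarray:
    """Majority vote over the boundary masks implied by each labelling."""
    masks = [find_boundaries(lab, mode='thick') for lab in tile_labelings]
    votes = np.sum(masks, axis=0)
    return votes > len(masks) / 2     # boundary where most annotators agree
```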

    MR-NOM: Multi-Scale Resolution of Neuronal Cells in Nissl-Stained Histological Slices via Deliberate Over-Segmentation and Merging

    In comparative neuroanatomy, the characterization of brain cytoarchitecture is critical to a better understanding of brain structure and function, as it helps to distill information on the development, evolution, and distinctive features of different populations. The automatic segmentation of individual brain cells is a primary prerequisite, yet it remains challenging. A new method (MR-NOM) was developed for the instance segmentation of cells in Nissl-stained histological images of the brain. MR-NOM exploits a multi-scale approach to deliberately over-segment the cells into superpixels and subsequently merge them via a classifier based on shape, structure, and intensity features. The method was tested on images of the cerebral cortex, proving successful in dealing with cells of varying characteristics that partially touch or overlap and showing better performance than two state-of-the-art methods.
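
    A minimal sketch of the merge stage, assuming a 2D grayscale Nissl image `img`, a deliberate over-segmentation `sp` (e.g., fine-scale SLIC superpixels), and a pre-trained scikit-learn classifier `clf`; the pair features here (intensity difference and region areas) are illustrative placeholders for the paper's shape, structure, and intensity features, and a single greedy pass replaces MR-NOM's actual merging scheme:

```python
import numpy as np
from skimage import graph, measure

def merge_superpixels(img, sp, clf):
    rag = graph.rag_mean_color(img, sp)
    props = {r.label: r for r in measure.regionprops(sp, intensity_image=img)}
    out = sp.copy()
    for a, b in list(rag.edges):
        feat = np.array([[abs(props[a].mean_intensity - props[b].mean_intensity),
                          props[a].area, props[b].area]])
        if clf.predict(feat)[0] == 1:   # classifier votes "same cell"
            out[out == b] = a           # merge region b into region a
    return out
```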

    Deep active learning for suggestive segmentation of biomedical image stacks via optimisation of Dice scores and traced boundary length

    Manual segmentation of stacks of 2D biomedical images (e.g., histology) is a time-consuming task that can be sped up with semi-automated techniques. In this article, we present a suggestive deep active learning framework that seeks to minimise the annotation effort required to achieve a certain level of accuracy when labelling such a stack. At every iteration, the framework suggests a specific region of interest (ROI) in one of the images for manual delineation. Using a deep segmentation neural network and a mixed cross-entropy loss function, we propose a principled strategy to estimate class probabilities for the whole stack, conditioned on heterogeneous partial segmentations of the 2D images, as well as on weak supervision in the form of image indices that bound each ROI. Using the estimated probabilities, we propose a novel active learning criterion based on predictions of the estimated segmentation performance and delineation effort, measured by average Dice scores and total delineated boundary length, respectively, rather than common surrogates such as entropy. The query strategy suggests the ROI that is expected to maximise the ratio between performance and effort, while taking into account the adjacency of structures that may already have been labelled, which decreases the length of the boundary left to trace. We provide quantitative results on synthetically deformed MRI scans and real histological data, showing that our framework can reduce labelling effort by up to 60–70% without compromising accuracy.
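
    A minimal sketch of the query criterion itself, assuming each candidate ROI already carries model-based estimates of the Dice gain its annotation would yield and of the boundary length the annotator would still have to trace (already-labelled adjacent structures having been subtracted); both estimators are placeholders for the probabilistic predictions derived in the paper:

```python
from dataclasses import dataclass

@dataclass
class CandidateROI:
    image_index: int
    expected_dice_gain: float      # predicted improvement in average Dice
    expected_trace_length: float   # predicted boundary length left to trace

def suggest_roi(candidates: list) -> CandidateROI:
    # Query the ROI with the best predicted performance-per-effort ratio.
    return max(candidates,
               key=lambda r: r.expected_dice_gain / max(r.expected_trace_length, 1e-9))
```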

    Co-ordination of cell shape changes during ventral furrow formation in Drosophila embryo

    The formation of the ventral furrow in the Drosophila embryo has served as one of the major paradigms for how large-scale morphogenetic events are initiated, controlled, and mediated by cellular behavior. The furrow is formed by the inward folding of the mesoderm epithelium on the ventral side of the early embryo. While it is well established that the onset of gastrulation is initiated by the apical constriction of the central mesoderm (CM) cells, a subpopulation about 8-10 rows wide, it has recently become clear that furrow internalization can only be completed with the cooperation of the lateral mesoderm (LM) cells, a subpopulation about 3-4 rows wide on each side of the mesoderm that, instead of constricting, expand their apical areas at the same time. In this thesis we developed a method to reconstruct 3D cell volumes in the entire embryo to study the coordination of cell shape changes during ventral furrow formation. We find that the cell shape changes in LM cells are passive and depend on the forces generated during apical constriction in the CM cells. A twist-induced gradient in the molecular cascade leading to apical Myosin II recruitment in the mesoderm results in a 'tug-of-war' between adjacent cells. Because of their greater apical Myosin II recruitment, the CM cells constrict more strongly and cause the LM cells to expand apically.

    Segmentation of meristem cells by an automated opinion algorithm

    Meristem cells are irregularly shaped and appear in confocal images as dark areas surrounded by bright ones. The images are characterized by regions of very low contrast and a complete loss of edges deeper into the meristem. Edges are blurred, discontinuous, and sometimes indistinguishable, and the intensity level inside the cells is similar to the background of the image. Recently, a technique called Parametric Segmentation Tuning (PST) was introduced for the optimization of segmentation parameters in diatom images. This paper presents a PST-tuned automatic segmentation method for meristem cells in microscopy images based on mathematical morphology. The optimal parameters of the algorithm are found by means of an iterative process that compares the segmented images obtained with successive variations of the parameters; an optimization function then determines which pair of successive images yields the best segmentation. The technique was validated by comparing its results with those obtained by a level set algorithm and a balloon segmentation technique. The outcomes show that our methodology offers better results than these two freely available state-of-the-art alternatives, being superior in all cases studied and losing 9.09% of the cells in the worst case, against 75.81% and 25.45% for the level set and balloon segmentation techniques, respectively. The optimization method can also be employed to tune the parameters of other meristem segmentation methods.
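
    A minimal sketch of a PST-style tuning loop as described above: segment with successive parameter values and score each successive pair of results, keeping the parameter where the pair is judged best; both `segment` (the morphology-based method) and the pairwise score are placeholders for the paper's actual components:

```python
import numpy as np

def pair_score(a: np.ndarray, b: np.ndarray) -> float:
    # Placeholder stability score between two label images (fraction of
    # pixels whose foreground status agrees); PST defines its own function.
    return float(np.mean((a > 0) == (b > 0)))

def tune_parameter(image, segment, values):
    results = [segment(image, v) for v in values]
    scores = [pair_score(results[i], results[i + 1])
              for i in range(len(results) - 1)]
    best = int(np.argmax(scores))   # best-scoring successive pair
    return values[best], results[best]
```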

    Semi-automated learning strategies for large-scale segmentation of histology and other big bioimaging stacks and volumes

    Labelled high-resolution datasets are becoming increasingly common and necessary in different areas of biomedical imaging. Examples include serial histology and ex-vivo MRI for atlas building, OCT for studying the human brain, and micro X-ray for tissue engineering. Labelling such datasets typically requires manual delineation of a very detailed set of regions of interest on a large number of sections or slices. This process is tedious, time-consuming, not reproducible, and rather inefficient due to the high similarity of adjacent sections. In this thesis, I explore the potential of a semi-automated slice-level segmentation framework and a suggestive region-level framework, which aim to speed up the segmentation of big bioimaging datasets. The thesis includes two well-validated, published, and widely used novel methods, and one algorithm that did not yield an improvement over the current state of the art. The slice-wise method, SmartInterpol, consists of a probabilistic model for semi-automated segmentation of stacks of 2D images, in which the user manually labels a sparse set of sections (e.g., one every n sections) and lets the algorithm complete the segmentation of the remaining sections automatically. The proposed model integrates, in a principled manner, two families of segmentation techniques that have been very successful in brain imaging: multi-atlas segmentation and convolutional neural networks. Labelling every structure on a sparse set of slices is not necessarily optimal, so I also introduce a region-level active learning framework that requires the labeller to annotate one region of interest on one slice at a time. The framework exploits partial annotations, weak supervision, and realistic estimates of class- and section-specific annotation effort in order to greatly reduce the time it takes to produce accurate segmentations for large histological datasets. Although both frameworks were created with histological datasets in mind, they have been successfully applied to other big bioimaging datasets, reducing labelling effort by up to 60–70% without compromising accuracy.
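
    A minimal sketch of the slice-completion idea behind SmartInterpol, reduced to shape-based interpolation between the two nearest labelled sections via signed distance maps; the published method combines multi-atlas registration with a convolutional neural network and is far more capable than this placeholder:

```python
import numpy as np
from scipy import ndimage as ndi

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Positive inside the (boolean) mask, negative outside."""
    inside = ndi.distance_transform_edt(mask)
    outside = ndi.distance_transform_edt(~mask)
    return inside - outside

def interpolate_slice(mask_lo: np.ndarray, mask_hi: np.ndarray, t: float) -> np.ndarray:
    # t in (0, 1): relative position between the two manually labelled sections.
    d = (1 - t) * signed_distance(mask_lo) + t * signed_distance(mask_hi)
    return d > 0
```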