    Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3D segmentation algorithms.

    Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss
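
    As a concrete illustration of the ground-truth evaluation role described above, the sketch below compares an automated 3D segmentation mask against a manually specified ground-truth mask using Dice and Jaccard overlap. The array names, metric choice and synthetic data are illustrative assumptions, not part of Gebiss itself.

```python
# Minimal sketch: overlap scores between a ground-truth 3D mask and an
# automated segmentation. Illustrative only; not part of the Gebiss plugin.
import numpy as np

def overlap_scores(ground_truth: np.ndarray, prediction: np.ndarray):
    """Return (Dice, Jaccard) for two boolean 3D masks of equal shape."""
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    dice = 2.0 * intersection / (gt.sum() + pred.sum())
    jaccard = intersection / union
    return dice, jaccard

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.random((32, 128, 128)) > 0.7       # stand-in ground-truth stack
    pred = np.roll(gt, shift=1, axis=1)         # slightly shifted "automated" result
    print(overlap_scores(gt, pred))
```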

    New tools for quantitative analysis of nuclear architecture

    The cell nucleus houses a wide variety of macromolecular substructures including the cell’s genetic material. The spatial configuration of these substructures is thought to be fundamentally associated with nuclear function, yet the architectural organisation of the cell nucleus is only poorly understood. Advances in microscopy and associated fluorescence techniques have provided a wealth of nuclear image data. Such images offer the opportunity for both visualising nuclear substructures and quantitative investigation of the spatial configuration of these objects. In this thesis, we present new tools to study and explore the subtle principles behind nuclear architecture. We describe a novel method to segment fluorescent microscopy images of nuclear objects. The effectiveness of this segmentation algorithm is demonstrated using extensive simulation. Additionally, we show that the method performs as well as manual thresholding, which is considered the gold standard. Next, randomisation-based tests from spatial point pattern analysis are employed to inspect spatial interactions of nuclear substructures. The results suggest new and interesting spatial relationships in the nucleus. However, this approach probes only relative nuclear organisation and cannot readily yield a description of absolute spatial preference, which may be a key component of nuclear architecture. To address this problem we have developed methodology based on techniques employed in statistical shape analysis and image registration. The approach proposes that the nuclear boundary can be used to align nuclei from replicate images into a common coordinate system. Each nucleus and its contents can therefore be registered to the sample mean shape using rigid and non-rigid deformations. This aggregated data allows inference regarding global nuclear spatial organisation. For example, the kernel-smoothed intensity function is computed to return an estimate of the intensity function of the registered nuclear object. Simulation provides evidence that the registration procedure is sensible and the results accurate. Finally, we have investigated a large database of nuclear substructures using conventional methodology as well as our new tools. We have identified novel spatial relationships between nuclear objects that offer significant clues to their function. We have also examined the absolute spatial configuration of these substructures in registered data. The results reveal dramatic underlying spatial preferences and present new and clear insights into nuclear architecture.
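
    To make the randomisation-based point-pattern idea above concrete, here is a minimal sketch of a Monte Carlo test for spatial interaction between two sets of nuclear object positions. The test statistic (mean nearest-neighbour distance), the uniform resampling within a bounding box, and all names are illustrative assumptions rather than the thesis's exact procedure.

```python
# Sketch of a Monte Carlo randomisation test for attraction between two point
# patterns inside a nucleus. Statistic and null model are illustrative choices.
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean distance from each point in `a` to its nearest neighbour in `b`."""
    return float(cKDTree(b).query(a, k=1)[0].mean())

def randomisation_test(a: np.ndarray, b: np.ndarray, n_sim: int = 999, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    observed = mean_nn_distance(a, b)
    lo, hi = a.min(axis=0), a.max(axis=0)                # crude bounding box as the "nucleus"
    sims = np.array([mean_nn_distance(a, rng.uniform(lo, hi, size=b.shape))
                     for _ in range(n_sim)])
    # One-sided p-value: how often random placements look at least as clustered.
    p = (1 + np.sum(sims <= observed)) / (n_sim + 1)
    return observed, p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.uniform(size=(50, 3))
    b = a + rng.normal(scale=0.02, size=a.shape)         # `b` attracted to `a`
    print(randomisation_test(a, b, rng=rng))
```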

    Accurate 3D Cell Segmentation using Deep Feature and CRF Refinement

    We consider the problem of accurately identifying cell boundaries and labeling individual cells in confocal microscopy images, specifically, 3D image stacks of cells with tagged cell membranes. Precise identification of cell boundaries, their shapes, and quantifying inter-cellular space leads to a better understanding of cell morphogenesis. Towards this, we outline a cell segmentation method that uses a deep neural network architecture to extract a confidence map of cell boundaries, followed by a 3D watershed algorithm and a final refinement using a conditional random field. In addition to improving the accuracy of segmentation compared to other state-of-the-art methods, the proposed approach also generalizes well to different datasets without the need to retrain the network for each dataset. Detailed experimental results are provided, and the source code is available on GitHub.
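
    The middle step of the pipeline above, going from a boundary-confidence map to labelled cells with a 3D watershed, can be sketched as follows. The network and the CRF refinement are omitted, and the seeding strategy, thresholds and synthetic input are illustrative assumptions, not the authors' implementation.

```python
# Sketch: from a 3D boundary-confidence map to labelled cells via seeded
# watershed. The network and the CRF step are omitted; thresholds are
# illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def watershed_from_boundaries(boundary_map: np.ndarray, seed_threshold: float = 0.3):
    """Segment cells given a 3D boundary-confidence map with values in [0, 1]."""
    interior = boundary_map < seed_threshold            # cell interiors: low boundary confidence
    distance = ndi.distance_transform_edt(interior)     # peaks lie deep inside each cell
    seeds, _ = ndi.label(distance > 0.5 * distance.max())
    # Flood the boundary map from the seeds; high confidence acts as ridges.
    return watershed(boundary_map, markers=seeds, mask=interior)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_boundaries = rng.random((16, 64, 64))          # stand-in for a network output
    labels = watershed_from_boundaries(fake_boundaries)
    print("objects found:", int(labels.max()))
```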

    Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks

    Fluorescence microscopy images usually show severe anisotropy in axial versus lateral resolution. This hampers downstream processing, i.e., the automatic extraction of quantitative biological data. While deconvolution methods and other techniques to address this problem exist, they are either time-consuming to apply or limited in their ability to remove anisotropy. We propose a method to recover isotropic resolution from readily acquired anisotropic data. We achieve this using a convolutional neural network that is trained end-to-end from the same anisotropic body of data we later apply the network to. The network effectively learns to restore the full isotropic resolution by restoring the image under a trained, sample-specific image prior. We apply our method to 33 synthetic and 33 real datasets and show that our results improve on results from deconvolution and state-of-the-art super-resolution techniques. Finally, we demonstrate that a standard 3D segmentation pipeline performs with comparable accuracy on the output of our network as on fully isotropic data.
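
    One way to obtain training pairs for such a network without isotropic ground truth is to degrade the laterally well-resolved xy slices so that they mimic the axial view. The sketch below illustrates that idea only; the blur width, subsampling factor and function names are assumptions and may differ from the paper's actual training scheme.

```python
# Sketch: build a (degraded, target) training pair from one laterally
# well-resolved xy slice by mimicking axial blur and a coarse z-step.
# Blur width and subsampling factor are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d, zoom

def make_training_pair(xy_slice: np.ndarray, axial_sigma: float = 2.0, subsample: int = 4):
    """Return (degraded_input, target) for one 2D slice of an image stack."""
    target = xy_slice.astype(np.float32)
    blurred = gaussian_filter1d(target, sigma=axial_sigma, axis=0)  # mimic the axial PSF
    low_res = blurred[::subsample, :]                               # mimic the coarse z-spacing
    degraded = zoom(low_res, (subsample, 1), order=1)               # resample back to target size
    return degraded, target

if __name__ == "__main__":
    slice_ = np.random.default_rng(0).random((128, 128)).astype(np.float32)
    x, y = make_training_pair(slice_)
    print(x.shape, y.shape)                                         # both (128, 128)
```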

    Accurate and versatile 3D segmentation of plant tissues at cellular resolution

    Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
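
    The boundary-prediction plus graph-partitioning idea above can be reduced, for illustration, to merging over-segmented fragments whose shared interface has low mean boundary confidence. The sketch below is a deliberately simple stand-in; PlantSeg's actual graph partitioning is considerably more sophisticated, and all names and thresholds here are assumptions.

```python
# Minimal stand-in for boundary-based agglomeration: merge face-adjacent 3D
# fragments whose mean interface boundary confidence falls below a threshold.
# Illustrative only; this is not PlantSeg's partitioning algorithm.
import numpy as np
from collections import defaultdict

def merge_weak_boundaries(fragments: np.ndarray, boundary_map: np.ndarray, threshold=0.5):
    """fragments: 3D int labels (0 = background); boundary_map: values in [0, 1]."""
    sums, counts = defaultdict(float), defaultdict(int)
    # Accumulate boundary strength over every face-adjacent voxel pair.
    for axis in range(fragments.ndim):
        lo = [slice(None)] * fragments.ndim
        hi = [slice(None)] * fragments.ndim
        lo[axis], hi[axis] = slice(None, -1), slice(1, None)
        a, b = fragments[tuple(lo)], fragments[tuple(hi)]
        w = 0.5 * (boundary_map[tuple(lo)] + boundary_map[tuple(hi)])
        touching = (a != b) & (a != 0) & (b != 0)
        for la, lb, wi in zip(a[touching], b[touching], w[touching]):
            key = (min(la, lb), max(la, lb))
            sums[key] += wi
            counts[key] += 1
    # Union-find over fragment labels; merge across weak interfaces.
    parent = {lab: lab for lab in np.unique(fragments)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (la, lb), total in sums.items():
        if total / counts[(la, lb)] < threshold:
            parent[find(lb)] = find(la)
    return np.vectorize(find)(fragments)
```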

    3D segmentations of neuronal nuclei from confocal microscope image stacks

    In this paper, we present an algorithm to create 3D segmentations of neuronal cells from stacks of previously segmented 2D images. The idea behind this proposal is to provide a general method to reconstruct 3D structures from 2D stacks, regardless of how these 2D stacks have been obtained. The algorithm not only reuses the information obtained in the 2D segmentation, but also attempts to correct some typical mistakes made by the 2D segmentation algorithms (for example, under-segmentation of tightly-coupled clusters of cells). We have tested our algorithm in a real scenario: the segmentation of the neuronal nuclei in different layers of the rat cerebral cortex. Several representative images from different layers of the cerebral cortex have been considered and several 2D segmentation algorithms have been compared. Furthermore, the algorithm has also been compared with the traditional 3D watershed algorithm, and the results obtained here show better performance in terms of correctly identified neuronal nuclei.
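
    The core idea of reconstructing 3D objects from independently segmented 2D slices can be illustrated by linking labels across consecutive slices by overlap, as in the sketch below. The overlap criterion and all names are illustrative simplifications, not the paper's algorithm, and this toy version does not attempt the under-segmentation correction described above.

```python
# Toy version of slice linking: labels in slice z inherit the id of the label
# they overlap most in slice z-1, otherwise they start a new 3D object.
# Illustrative simplification; not the paper's algorithm.
import numpy as np

def link_slices(stack2d_labels: np.ndarray, min_overlap: int = 1) -> np.ndarray:
    """stack2d_labels: (z, y, x) array, each slice independently labelled (0 = background)."""
    linked = np.zeros_like(stack2d_labels)
    linked[0] = stack2d_labels[0]
    next_id = int(linked[0].max()) + 1
    for z in range(1, stack2d_labels.shape[0]):
        prev = linked[z - 1]
        for lab in np.unique(stack2d_labels[z]):
            if lab == 0:
                continue
            mask = stack2d_labels[z] == lab
            below, counts = np.unique(prev[mask], return_counts=True)
            counts[below == 0] = 0                        # ignore overlap with background
            if counts.max() >= min_overlap:
                linked[z][mask] = below[counts.argmax()]  # continue the object below
            else:
                linked[z][mask] = next_id                 # start a new 3D object
                next_id += 1
    return linked
```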

    Three-Dimensional GPU-Accelerated Active Contours for Automated Localization of Cells in Large Images

    Cell segmentation in microscopy is a challenging problem, since cells are often asymmetric and densely packed. This becomes particularly challenging for extremely large images, since manual intervention and processing time can make segmentation intractable. In this paper, we present an efficient and highly parallel formulation for symmetric three-dimensional (3D) contour evolution that extends previous work on fast two-dimensional active contours. We provide a formulation for optimization on 3D images, as well as a strategy for accelerating computation on consumer graphics hardware. The proposed software takes advantage of Monte Carlo sampling schemes in order to speed up convergence and reduce thread divergence. Experimental results show that this method provides superior performance for large 2D and 3D cell segmentation tasks when compared to existing methods on large 3D brain images.
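
    Stripped of the GPU and contour-evolution machinery, the sampling idea above can be illustrated with a toy region-based update that revisits only a random subset of boundary voxels per iteration. This CPU sketch is an assumption-laden illustration of Monte Carlo sampling for segmentation refinement, not the authors' parallel formulation.

```python
# Toy Monte Carlo refinement of a 3D region: at each iteration, only a random
# subset of boundary voxels is re-examined and assigned to the region whose
# mean intensity it matches better. Illustrative CPU sketch only.
import numpy as np
from scipy import ndimage as ndi

def mc_region_refine(image: np.ndarray, init_mask: np.ndarray,
                     n_iter: int = 50, sample_frac: float = 0.2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mask = init_mask.astype(bool)
    for _ in range(n_iter):
        c_in, c_out = image[mask].mean(), image[~mask].mean()
        # Boundary voxels: where dilation and erosion of the mask disagree.
        boundary = ndi.binary_dilation(mask) ^ ndi.binary_erosion(mask)
        idx = np.flatnonzero(boundary)
        if idx.size == 0:
            break
        idx = rng.choice(idx, size=max(1, int(sample_frac * idx.size)), replace=False)
        vals = image.ravel()[idx]
        # Move each sampled voxel to the closer region mean.
        mask.ravel()[idx] = np.abs(vals - c_in) < np.abs(vals - c_out)
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 64, 64))
    img[10:20, 20:40, 20:40] += 3.0                 # a bright synthetic "cell"
    seed = np.zeros(img.shape, dtype=bool)
    seed[12:18, 25:35, 25:35] = True                # rough initial region
    print("voxels in refined region:", int(mc_region_refine(img, seed, rng=rng).sum()))
```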

    DeadEasy Mito-Glia: Automatic Counting of Mitotic Cells and Glial Cells in Drosophila

    Cell number changes during normal development and in disease (e.g., neurodegeneration, cancer). Many genes affect cell number, thus functional genetic analysis frequently requires analysis of cell number alterations upon loss-of-function mutations or in gain-of-function experiments. Drosophila is a powerful model organism for investigating the function of genes involved in development or disease in vivo. Image processing and pattern recognition techniques can be used to extract information from microscopy images and automatically quantify distinct cellular features, but these methods are still not widely used in this model organism. Thus, cellular quantification is often carried out manually, which is laborious, tedious, error-prone or humanly unfeasible. Here, we present DeadEasy Mito-Glia, an image processing method to automatically count the number of mitotic cells labelled with anti-phospho-histone H3 and of glial cells labelled with anti-Repo in Drosophila embryos. This programme belongs to the DeadEasy suite, for which we have previously developed versions to count apoptotic cells and neuronal nuclei. Having separate programmes is paramount for accuracy. DeadEasy Mito-Glia is very easy to use, fast, objective and very accurate when counting dividing cells and glial cells labelled with a nuclear marker. Although this method has been validated for Drosophila embryos, we provide an interactive window for biologists to easily extend its application to other nuclear markers and other sample types. DeadEasy Mito-Glia is freely available as an ImageJ plug-in, increases the repertoire of tools for in vivo genetic analysis, and will be of interest to a broad community of developmental, cancer and neurobiologists.
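
    As a generic illustration of the counting task automated here, the sketch below counts marker-positive nuclei in a 3D stack by smoothing, thresholding and labelling connected components. The smoothing width, Otsu threshold and size filter are assumptions for illustration; this is not DeadEasy's algorithm.

```python
# Minimal sketch: count marker-positive nuclei in a 3D stack. Illustrative
# only; DeadEasy Mito-Glia's actual method differs.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def count_nuclei(stack: np.ndarray, sigma: float = 1.5, min_voxels: int = 50) -> int:
    """Return the number of connected bright objects larger than `min_voxels`."""
    smoothed = ndi.gaussian_filter(stack.astype(np.float32), sigma=sigma)
    mask = smoothed > threshold_otsu(smoothed)          # global intensity threshold
    labels, n = ndi.label(mask)                         # connected components in 3D
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes >= min_voxels))             # discard sub-nuclear specks
```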