
    Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3D segmentation algorithms.

    Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches, such as the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss
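
    A minimal sketch of the threshold-based step in Python (scikit-image), in the spirit of the plugins Gebiss integrates; Gebiss itself is an ImageJ/Java plugin, and the function names below are illustrative:

```python
# Hedged sketch: global thresholding plus connected-component labelling of a
# 3D stack, analogous to (not identical with) threshold-based segmentation
# plugins; all names here are our own.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_stack(stack: np.ndarray) -> np.ndarray:
    """Label foreground objects in a 3D stack shaped (z, y, x)."""
    t = threshold_otsu(stack)           # global intensity threshold
    mask = stack > t                    # binary foreground mask
    return label(mask, connectivity=1)  # 6-connected 3D components

def object_volumes(labels: np.ndarray) -> list:
    """Per-object voxel counts; regionprops' 'area' counts voxels in 3D."""
    return [r.area for r in regionprops(labels)]
```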

    Nucleus segmentation: towards automated solutions

    Single nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held with the aim of improving segmentation, and recent years have brought significant improvements: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution and benchmarking platform exist. We review the most recent single-cell segmentation tools and provide an interactive method browser for selecting the most appropriate solution.
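
    A common per-object accuracy measure used when comparing such tools matches predicted nuclei to ground-truth nuclei by intersection-over-union (IoU). A minimal sketch, with names of our own choosing rather than any specific tool's API:

```python
# Hedged sketch: per-object IoU between two label images (0 = background).
import numpy as np

def object_iou_matrix(gt: np.ndarray, pred: np.ndarray) -> np.ndarray:
    gt_ids = np.unique(gt)[1:]      # object ids, skipping background
    pred_ids = np.unique(pred)[1:]
    iou = np.zeros((len(gt_ids), len(pred_ids)))
    for i, g in enumerate(gt_ids):
        gmask = gt == g
        for j, p in enumerate(pred_ids):
            pmask = pred == p
            union = np.logical_or(gmask, pmask).sum()
            iou[i, j] = np.logical_and(gmask, pmask).sum() / union if union else 0.0
    return iou

# A nucleus counts as correctly segmented if its best match exceeds a cutoff:
# hits = (object_iou_matrix(gt, pred).max(axis=1) >= 0.5).sum()
```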

    Towards accurate and efficient live cell imaging data analysis

    Live cell imaging based on time-lapse microscopy has been used to study dynamic cellular behaviors such as the cell cycle, cell signaling and transcription. Extracting cell lineage trees from a time-lapse video requires cell segmentation and cell tracking. For long-term live cell imaging, data analysis errors are particularly costly: even an extremely low error rate can be amplified by the large number of sampled time points and render the entire video useless. In this work, we adopt a straightforward but practical design that combines the merits of manual and automatic approaches. We present a live cell imaging data analysis tool, eDetect, which uses post-editing to complement automatic segmentation and tracking. What makes this work special is that eDetect employs multiple interactive data visualization modules to guide and assist users, making the error detection and correction procedure rational and efficient. Specifically, two scatter plots and a heat map are used to interactively visualize single cells' visual features. The scatter plots position similar results in close vicinity, making it easy to spot and correct a large group of similar errors with a few mouse clicks, minimizing repetitive human interventions. The heat map is aimed at exposing all overlooked errors and helping users progressively approach perfect accuracy in cell lineage reconstruction. Quantitative evaluation shows that eDetect largely improves accuracy within an acceptable time frame, and its performance surpasses the winners of most tasks in the Cell Tracking Challenge, as measured by biologically relevant metrics.
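
    The scatter-plot triage idea can be illustrated in a few lines of Python; the features, values and the merged-cell scenario below are hypothetical stand-ins for eDetect's interactive modules:

```python
# Hedged sketch: plotting per-cell features so that similar errors cluster and
# can be selected as a group. All feature values here are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
area = rng.normal(500, 50, 300)         # hypothetical per-cell area (px)
intensity = rng.normal(0.6, 0.05, 300)  # hypothetical mean intensity
area[:8] *= 2.2                         # simulate under-segmentation (merged cell pairs)

plt.scatter(area, intensity, s=10)
plt.xlabel("area (px)")
plt.ylabel("mean intensity")
plt.title("Similar errors cluster and can be corrected as a group")
plt.show()
```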

    Automated Deep Lineage Tree Analysis Using a Bayesian Single Cell Tracking Approach

    Single-cell methods are beginning to reveal the intrinsic heterogeneity in cell populations, arising from the interplay of deterministic and stochastic processes. However, it remains challenging to quantify single-cell behaviour from time-lapse microscopy data, owing to the difficulty of extracting reliable cell trajectories and lineage information over long time-scales and across several generations. We therefore developed a hybrid deep learning and Bayesian cell tracking approach to reconstruct lineage trees from live-cell microscopy data. We implemented a residual U-Net model coupled with a classification CNN to allow accurate instance segmentation of cell nuclei. To track the cells over time and through cell divisions, we developed a Bayesian cell tracking methodology that uses input features from the images to enable the retrieval of multi-generational lineage information from a corpus of thousands of hours of live-cell imaging data. Using our approach, we extracted 20,000+ fully annotated single-cell trajectories from over 3,500 h of video footage, organised into multi-generational lineage trees spanning up to eight generations and fourth-cousin distances. Benchmarking tests, including lineage tree reconstruction assessments, demonstrate that our approach yields high-fidelity results with minimal requirement for manual curation. To demonstrate the robustness of our minimally supervised cell tracking methodology, we retrieved cell cycle durations and their extended inter- and intra-generational family relationships in 5,000+ fully annotated cell lineages. We observe vanishing cycle duration correlations across ancestral relatives, yet reveal correlated cycling between cells sharing the same generation in extended lineages. These findings expand the depth and breadth of investigated cell lineage relationships, using approximately two orders of magnitude more data than previous studies of cell cycle heritability, which relied on semi-manual lineage data analysis.
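
    As an illustration of this kind of lineage analysis, the sketch below computes a mother-daughter cell cycle correlation over a simple lineage tree; the Cell class and field names are ours, not the paper's:

```python
# Hedged sketch: a lineage tree as a recursive data structure, plus the
# Pearson correlation between mother and daughter cycle durations.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Cell:
    cycle_h: float                         # cycle duration in hours
    children: List["Cell"] = field(default_factory=list)

def mother_daughter_pairs(root: Cell):
    """Yield (mother cycle, daughter cycle) pairs over one lineage tree."""
    stack = [root]
    while stack:
        cell = stack.pop()
        for child in cell.children:
            yield cell.cycle_h, child.cycle_h
            stack.append(child)

def cycle_correlation(roots: List[Cell]) -> float:
    pairs = [p for r in roots for p in mother_daughter_pairs(r)]
    mothers, daughters = np.array(pairs).T
    return float(np.corrcoef(mothers, daughters)[0, 1])  # Pearson r
```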

    Object Segmentation and Ground Truth in 3D Embryonic Imaging

    Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet tools that provide these data are often challenged by high cell density, and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, these three steps establish a robust, iterative procedure for fine-tuning image analysis algorithms and microscopy settings associated with embryonic 3D image datasets.
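
    Step two of such a procedure can be emulated with synthetic images at a controlled signal-to-noise ratio. The sketch below is a simplified 2D stand-in for the 3D validation described, with illustrative parameters throughout:

```python
# Hedged sketch: synthetic nuclei with a controlled SNR, for probing how a
# segmentation algorithm degrades with noise and object density.
import numpy as np
from skimage.draw import disk

def synthetic_nuclei(shape=(256, 256), n=30, radius=8, snr=5.0, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros(shape)
    for _ in range(n):
        c = rng.integers(radius, np.array(shape) - radius, size=2)
        rr, cc = disk(tuple(c), radius, shape=shape)
        img[rr, cc] = 1.0                         # unit-intensity nucleus
    noise = rng.normal(0.0, 1.0 / snr, shape)     # SNR = signal / noise sigma
    return img + noise, img > 0                   # noisy image, ground-truth mask
```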

    Superpixel-based segmentation of muscle fibers in multi-channel microscopy

    Background: Confetti fluorescence and other multi-color genetic labelling strategies are useful for observing stem cell regeneration and for other problems of cell lineage tracing. One difficulty of such strategies is segmenting the cell boundaries, which is a very different problem from segmenting color images of the real world. This paper addresses the difficulties and presents a superpixel-based framework for segmentation of regenerated muscle fibers in mice. Results: We propose to integrate an edge detector into a superpixel algorithm and customize the method for multi-channel images. The enhanced superpixel method outperforms the original and another advanced superpixel algorithm in terms of both boundary recall and under-segmentation error. Our framework was applied to cross-section and lateral section images of regenerated muscle fibers from Confetti fluorescent mice. Compared with ground-truth segmentations, our framework yielded median Dice similarity coefficients of 0.92 and higher. Conclusion: Our segmentation framework is flexible and provides very good segmentations of multi-color muscle fibers. We anticipate our methods will be useful for segmenting a variety of tissues in Confetti fluorescent mice and in mice with similar multi-color labels. Funding: National University of Singapore (Duke-NUS SRP Phase 2 Research Block Grant); National Research Foundation Singapore (CREATE programme); Singapore-MIT Alliance for Research and Technology (SMART).
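
    For readers unfamiliar with the building blocks, the sketch below runs off-the-shelf SLIC superpixels on a multi-channel image and scores a mask with the Dice similarity coefficient; it does not reproduce the paper's edge-enhanced superpixel variant:

```python
# Hedged sketch: plain SLIC superpixels (recent scikit-image) plus the Dice
# similarity coefficient used as the evaluation metric.
import numpy as np
from skimage.segmentation import slic

def superpixels(img: np.ndarray, n_segments: int = 500) -> np.ndarray:
    """img: (y, x, channels) float image; channels are clustered jointly."""
    return slic(img, n_segments=n_segments, compactness=10.0, channel_axis=-1)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```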

    Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy

    The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method proved its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where, as team KIT-Sch-GE, we achieved multiple top-three rankings, including two top performances, using a single segmentation model for the diverse datasets.
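
    A classical analogue of distance-based splitting of touching cells is a Euclidean distance transform followed by a seeded watershed. The sketch below shows that baseline; the paper instead predicts cell and neighbor distances with a CNN:

```python
# Hedged sketch: splitting touching cells with a distance map and watershed,
# the classical counterpart of CNN-predicted distance maps.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching(mask: np.ndarray) -> np.ndarray:
    """mask: boolean foreground mask possibly containing merged cells."""
    dist = ndi.distance_transform_edt(mask)               # distance to background
    peaks = peak_local_max(dist, min_distance=5, labels=mask)
    seeds = np.zeros(mask.shape, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one seed per cell centre
    return watershed(-dist, seeds, mask=mask)             # flood outward from seeds
```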

    Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning

    Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite never having been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org
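
    In its simplest form, linking detections across frames within a linear programming framework reduces to a min-cost assignment problem. The sketch below uses plain centroid distance as the cost; the actual tracker also scores events such as division using learned features:

```python
# Hedged sketch: frame-to-frame linking as min-cost bipartite assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t: np.ndarray, centroids_t1: np.ndarray):
    """centroids_*: (n, 2) arrays of cell centroids in consecutive frames."""
    cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :],
                          axis=-1)                # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)      # optimal min-cost matching
    return list(zip(rows.tolist(), cols.tolist()))
```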

    A biosegmentation benchmark for evaluation of bioimage analysis methods

    Background: We present a biosegmentation benchmark that includes infrastructure, datasets with associated ground truth, and validation methods for biological image analysis. The primary motivation for creating this resource is that it is very difficult, if not impossible, for an end-user to choose from the wide range of segmentation methods available in the literature for a particular bioimaging problem. No single algorithm is likely to be equally effective on a diverse set of images, and each method has its own strengths and limitations. We hope that our benchmark resource will be of considerable help both to bioimaging researchers looking for novel image processing methods and to image processing researchers exploring the application of their methods to biology. Results: Our benchmark consists of different classes of images and ground truth data, ranging in scale from subcellular and cellular to tissue level, each of which poses its own set of challenges to image analysis. The associated ground truth data can be used to evaluate the effectiveness of different methods, to improve methods and to compare results. Standard evaluation methods and some analysis tools are integrated into a database framework that is available online at http://bioimage.ucsb.edu/biosegmentation/ Conclusion: This online benchmark will facilitate the integration and comparison of image analysis methods for bioimages. While the primary focus is on biological images, we believe that the dataset and infrastructure will be of interest to researchers and developers working on biological image analysis, image segmentation and object tracking in general.
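
    Benchmarks of this kind ultimately score a candidate method against ground truth over many images. A minimal evaluation loop, where the segment callable and the choice of metric are illustrative:

```python
# Hedged sketch: scoring a segmentation method over a benchmark dataset with
# the Jaccard index; real benchmarks bundle their own validation methods.
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (IoU) between two binary masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def evaluate(segment, dataset):
    """segment: image -> binary mask; dataset: iterable of (image, gt) pairs."""
    scores = [jaccard(segment(img), gt.astype(bool)) for img, gt in dataset]
    return float(np.mean(scores)), float(np.std(scores))
```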

    SpaceTx: A Roadmap for Benchmarking Spatial Transcriptomics Exploration of the Brain

    Mapping spatial distributions of transcriptomic cell types is essential to understanding the brain, with its exceptional cellular heterogeneity and the functional significance of its spatial organization. Spatial transcriptomics techniques are expected to accomplish these measurements, but each method uses different experimental and computational protocols, with different trade-offs and optimizations. In 2017, the SpaceTx Consortium was formed to compare these methods and determine their suitability for large-scale spatial transcriptomic atlases. SpaceTx work included progress in tissue processing, taxonomy development, gene selection, image processing and data standardization, cell segmentation, cell type assignment, and visualization. Although the landscape of experimental methods has changed dramatically since the beginning of SpaceTx, the need for quantitative and detailed benchmarking of spatial transcriptomics methods in the brain remains unmet. Here, we summarize the work of SpaceTx and highlight outstanding challenges as spatial transcriptomics grows into a mature field. We also discuss how our progress provides a roadmap for benchmarking spatial transcriptomics methods in the future. Data and analyses from this consortium, along with code and methods, are publicly available at https://spacetx.github.io/