
    DeepCell 2.0: Automated cloud deployment of deep learning models for large-scale cellular image analysis

    Deep learning is transforming the ability of life scientists to extract information from images. While these techniques offer superior accuracy compared to conventional approaches and enable previously impossible analyses, their unique hardware and software requirements have prevented widespread adoption by life scientists. To address this challenge, we have developed DeepCell 2.0, an open-source library for training and delivering deep learning models with cloud computing. This library enables users to configure and manage a cloud deployment of DeepCell 2.0 on all commonly used operating systems. Using single-cell segmentation as a use case, we show that users with suitable training data can train models and analyze data with those models through a web interface. We demonstrate that by matching analysis tasks with their hardware requirements, we can use cloud computational resources efficiently and scale them to meet demand, significantly reducing the time required for large-scale image analysis. By reducing the barriers to entry, this work will empower life scientists to apply deep learning methods to their data. A persistent deployment is available at http://www.deepcell.org
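
    For readers who want to try the underlying library locally, the sketch below shows the general shape of a pretrained-model prediction with the open-source deepcell Python package. It is a minimal illustration, assuming the deepcell-tf application API (class and argument names may differ across versions), and it runs locally rather than through the web interface described above.

        import numpy as np
        from deepcell.applications import NuclearSegmentation  # pip install deepcell

        # Load a pretrained nuclear segmentation model (weights download on first use).
        app = NuclearSegmentation()

        # A batch of single-channel nuclear images: (batch, height, width, channels).
        images = np.random.rand(1, 512, 512, 1).astype(np.float32)

        # Returns an integer label mask with one unique id per detected cell.
        # image_mpp (microns per pixel) is an example value, not a recommendation.
        labels = app.predict(images, image_mpp=0.65)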

    DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes

    Deep learning is transforming the analysis of biological images, but applying these models to large datasets remains challenging. Here we describe the DeepCell Kiosk, cloud-native software that dynamically scales deep learning workflows to accommodate large imaging datasets. To demonstrate the scalability and affordability of this software, we identified cell nuclei in 10⁶ 1-megapixel images in ~5.5 h for ~US$250, with a cost below US$100 achievable depending on cluster configuration. The DeepCell Kiosk can be downloaded at https://github.com/vanvalenlab/kiosk-console; a persistent deployment is available at https://deepcell.org/
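
    The scaling pattern at the core of this design is a shared work queue: jobs accumulate in Redis, stateless consumers drain it, and Kubernetes grows or shrinks the consumer pool with queue depth. The sketch below is a minimal illustration of that pattern, not the Kiosk's actual code; the host, queue names, and job schema are assumptions.

        import json
        import redis  # pip install redis

        r = redis.Redis(host="redis-master", port=6379)  # hypothetical service name

        def consume_one(queue="segmentation-jobs"):
            raw = r.lpop(queue)  # take the next job off the queue, if any
            if raw is None:
                return False  # queue is empty; the autoscaler can shrink the pool
            job = json.loads(raw)
            # ... fetch job["image_url"], run GPU inference, write results to storage ...
            r.rpush("results", json.dumps({"id": job["id"], "status": "done"}))
            return True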

    Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning

    Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite having never been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org
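
    The linear-programming step can be pictured as a frame-to-frame assignment problem: a model scores every pair of cells in consecutive frames, and an optimizer picks the lowest-cost one-to-one matching. The sketch below illustrates that idea with plain Euclidean distance standing in for the paper's learned similarity model, which is an assumption made for brevity.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def link_frames(prev_xy, curr_xy):
            """Match cell centroids across consecutive frames by minimum total cost."""
            # Pairwise distances; the paper replaces this with a learned similarity score.
            cost = np.linalg.norm(prev_xy[:, None, :] - curr_xy[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
            return list(zip(rows.tolist(), cols.tolist()))

        prev_xy = np.array([[10.0, 12.0], [40.0, 41.0]])
        curr_xy = np.array([[11.0, 13.0], [39.0, 44.0]])
        print(link_frames(prev_xy, curr_xy))  # [(0, 0), (1, 1)]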

    Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning

    Understanding the spatial organization of tissues is of critical importance for both basic and translational research. While recent advances in tissue imaging are opening an exciting new window into the biology of human tissues, interpreting the data that they create is a significant computational challenge. Cell segmentation, the task of uniquely identifying each cell in an image, remains a substantial barrier for tissue imaging, as existing approaches are inaccurate or require a substantial amount of manual curation to yield useful results. Here, we addressed the problem of cell segmentation in tissue imaging data through large-scale data annotation and deep learning. We constructed TissueNet, an image dataset containing >1 million paired whole-cell and nuclear annotations for tissue images from nine organs and six imaging platforms. We created Mesmer, a deep learning-enabled segmentation algorithm trained on TissueNet that performs nuclear and whole-cell segmentation in tissue imaging data. We demonstrated that Mesmer has better speed and accuracy than previous methods, generalizes to the full diversity of tissue types and imaging platforms in TissueNet, and achieves human-level performance for whole-cell segmentation. Mesmer enabled the automated extraction of key cellular features, such as subcellular localization of protein signal, which was challenging with previous approaches. We further showed that Mesmer could be adapted to harness cell lineage information present in highly multiplexed datasets. We used this enhanced version to quantify cell morphology changes during human gestation. All underlying code and models are released with permissive licenses as a community resource.
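
    Mesmer ships as a pretrained application in the open-source deepcell package; the sketch below shows the expected call pattern. It is a minimal example assuming the current deepcell-tf API (argument names and channel order may vary by version), with random arrays standing in for real nuclear and membrane channels.

        import numpy as np
        from deepcell.applications import Mesmer  # pip install deepcell

        app = Mesmer()  # pretrained TissueNet-trained weights download on first use

        # (batch, height, width, 2): channel 0 = nuclear stain, channel 1 = membrane stain.
        images = np.random.rand(1, 256, 256, 2).astype(np.float32)

        # compartment can be "whole-cell" or "nuclear"; image_mpp is the pixel resolution.
        labels = app.predict(images, image_mpp=0.5, compartment="whole-cell")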

    Game-Engine-Assisted Research platform for Scientific computing (GEARS) in Virtual Reality

    The Game-Engine-Assisted Research platform for Scientific computing (GEARS) is a visualization framework developed at the Materials Genome Innovation for Computational Software (MAGICS) center to perform simulations and on-the-fly data exploration in virtual reality (VR) environments. This hardware-agnostic framework accommodates multiple programming languages and game engines, in addition to supporting integration with LAMMPS, a widely used materials simulation engine. GEARS also features a novel data exploration tool called virtual confocal microscopy, which endows scientific visualization with enhanced functionality. Keywords: Virtual reality, Real-time simulation visualization, Molecular dynamics, LAMMPS