
    DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes

    Deep learning is transforming the analysis of biological images, but applying these models to large datasets remains challenging. Here we describe the DeepCell Kiosk, cloud-native software that dynamically scales deep learning workflows to accommodate large imaging datasets. To demonstrate the scalability and affordability of this software, we identified cell nuclei in 10⁶ 1-megapixel images in ~5.5 h for ~US$250, with a cost below US$100 achievable depending on cluster configuration. The DeepCell Kiosk can be downloaded at https://github.com/vanvalenlab/kiosk-console; a persistent deployment is available at https://deepcell.org/
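    As a sanity check on the figures reported above (10⁶ images, ~5.5 h, ~US$250), a quick back-of-the-envelope calculation gives the implied throughput and per-image cost:

    ```python
    # Back-of-the-envelope check of the benchmark quoted in the abstract:
    # 10^6 one-megapixel images processed in ~5.5 h for ~US$250.
    n_images = 1_000_000
    wall_clock_h = 5.5
    total_cost_usd = 250.0

    throughput_per_s = n_images / (wall_clock_h * 3600)   # ~50.5 images/s
    cost_per_image = total_cost_usd / n_images            # US$0.00025 per image

    print(f"{throughput_per_s:.1f} images/s, ${cost_per_image:.5f}/image")
    ```

    At the sub-US$100 cluster configuration the abstract mentions, the implied per-image cost drops below US$0.0001.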

    Optimizing Deep Neural Networks for Single Cell Segmentation

    Analysis of live-cell imaging experiments at the resolution of single cells provides exciting insights into the inner workings of biological systems. Advances in biological imaging and computer vision allow for segmentation of natural images with a high degree of accuracy. However, automation of the segmentation pipeline at single-cell resolution remains a challenging task. Complex deep learning models require large, well-annotated datasets that are rarely available in biology. In this research, we explore various methods that optimize state-of-the-art deep learning frameworks despite limited resources. We trained a large set of model permutations to quantify their capacity and to measure the effects of temporal information, spatial awareness, and transfer learning on model performance. We find that, although training set size is most impactful in improving model accuracy, techniques like spatial awareness and transfer learning can compensate for a lack of data. These insights show that, with an abundance of data, light-weight models can be as performant as their heavy-weight counterparts in cellular analysis.

    Nucleus segmentation: towards automated solutions

    Single nucleus segmentation is a frequent challenge of microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features, or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held aiming to improve segmentation, and recent years have brought significant improvements: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution and benchmarking platform exist. We review the most recent single-cell segmentation tools, and provide an interactive method browser to select the most appropriate solution. Peer reviewed.
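    The step these pipelines share, turning a thresholded image into discrete nuclei, amounts to connected-component labeling. A minimal pure-Python sketch (the toy mask and 4-connectivity are illustrative assumptions; the tools reviewed here add watershed splitting and learned boundaries on top of this):

    ```python
    from collections import deque

    def label_components(mask):
        """4-connected component labeling via BFS flood fill."""
        h, w = len(mask), len(mask[0])
        labels = [[0] * w for _ in range(h)]
        n = 0
        for y in range(h):
            for x in range(w):
                if mask[y][x] and not labels[y][x]:
                    n += 1                      # start a new nucleus label
                    labels[y][x] = n
                    queue = deque([(y, x)])
                    while queue:
                        cy, cx = queue.popleft()
                        for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                                labels[ny][nx] = n
                                queue.append((ny, nx))
        return labels, n

    # Toy binary mask, e.g. from thresholding a nuclear stain (1 = foreground).
    mask = [[0, 1, 1, 0, 0],
            [0, 1, 1, 0, 0],
            [0, 0, 0, 0, 1],
            [1, 0, 0, 1, 1]]
    labels, count = label_components(mask)
    print(count)  # 3 separate nuclei
    ```

    Touching nuclei are the hard case this simple scheme cannot split, which is exactly where the deep learning approaches surveyed above improve on classical labeling.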

    CLEMSite, a software for automated phenotypic screens using light microscopy and FIB-SEM

    This work was supported by EMBL funds and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 240245660, SFB 1129 (project Z2). In recent years, Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) has emerged as a flexible method that enables semi-automated volume ultrastructural imaging. We present a toolset for adherent cells that enables tracking and finding cells, previously identified in light microscopy (LM), in the FIB-SEM, along with the automatic acquisition of high-resolution volume datasets. We detect the underlying grid pattern in both modalities (LM and EM) to identify common reference points. A combination of computer vision techniques enables complete automation of the workflow. This includes setting the coincidence point of both ion and electron beams, automated evaluation of the image quality, and constant tracking of the sample position within the microscope’s field of view, reducing or even eliminating operator supervision. We show the ability to target regions of interest in EM within 5 µm accuracy while iterating between different targets and implementing unattended data acquisition. Our results demonstrate that executing volume acquisition in multiple locations autonomously is possible in EM. Peer reviewed.

    AI-powered transmitted light microscopy for functional analysis of live cells

    Transmitted light microscopy can readily visualize the morphology of living cells. Here, we introduce artificial-intelligence-powered transmitted light microscopy (AIM) for subcellular structure identification and labeling-free functional analysis of live cells. AIM provides accurate images of subcellular organelles; allows identification of cellular and functional characteristics (cell type, viability, and maturation stage); and facilitates live cell tracking and multimodality analysis of immune cells in their native form without labeling

    Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning

    Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite having never been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org
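    The frame-to-frame matching at the heart of such a tracker can be posed as a linear assignment problem: choose the pairing of detections across consecutive frames that minimizes total cost. A brute-force sketch on hypothetical toy centroids, using plain Euclidean distance as the cost (the paper's method uses learned costs inside a linear program; production trackers solve the same objective with the Hungarian algorithm rather than enumeration):

    ```python
    import itertools
    import math

    # Hypothetical nucleus centroids in two consecutive frames (toy data).
    frame_a = [(10.0, 12.0), (40.0, 42.0), (80.0, 15.0)]
    frame_b = [(41.0, 40.0), (11.0, 13.0), (79.0, 17.0)]

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # Exhaustive search over assignments; fine for toy sizes, exponential in general.
    best = min(itertools.permutations(range(len(frame_b))),
               key=lambda perm: sum(dist(frame_a[i], frame_b[j])
                                    for i, j in enumerate(perm)))
    # best[i] is the index in frame_b matched to cell i in frame_a
    print(best)  # (1, 0, 2)
    ```

    Divisions and cells entering or leaving the field of view are handled by adding birth/death entries to the same cost matrix, which is what makes the linear programming formulation attractive for lineage construction.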