
    Magnetic Ratcheting Cytometry Towards Manufacturing-Scale Separations of “Best in Class” CAR-T Cells

    Adoptive cell therapies taking advantage of engineered Chimeric Antigen Receptors (CAR) or T-Cell Receptors (TCR) have shown incredible potential as “living drugs” that achieve personalized immunotherapies for cancer patients. However, variation in T cell transduction efficiency during genetic modification can lead to widely varied expression levels [1] (~2 orders of magnitude), which can dilute therapeutic effectiveness and potentially contribute to off-tumor toxicity [2]. While research has shown that isolating cell sub-populations with tightly controlled expression could lead to improved therapies [3], limitations of current cell separation technologies prevent implementation in manufacturing-scale workflows. Quantitative separation techniques (e.g., fluorescence-activated cell sorting, FACS) do not scale to the production of therapeutic doses, and magnetic-activated cell sorting (MACS) techniques do not allow precise selection of cell sub-populations based on surface expression. Because of these limitations, enrichment of “best in class” CAR-T/TCR sub-populations at manufacturing-scale throughputs remains impractical and uneconomical.
    [1] Chang ZL, Silver PA, Chen YY. Identification and selective expansion of functionally superior T cells expressing chimeric antigen receptors. J Transl Med. 2015;13:161. doi:10.1186/s12967-015-0519-8.
    [2] Carels N, Spinassé LB, Tilli TM, Tuszynski JA. Toward precision medicine of breast cancer. Theor Biol Med Model. 2016;13:7. doi:10.1186/s12976-016-0035-4.
    [3] Berger C, Jensen MC, Lansdorp PM, Gough M, Elliott C, Riddell SR. Adoptive transfer of effector CD8+ T cells derived from central memory cells establishes persistent T cell memory in primates. J Clin Invest. 2008;118:294–305.

    Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning

    Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite having never been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org
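The linear-programming step described in the abstract can be illustrated with a minimal sketch: linking cells across two consecutive frames as a linear assignment problem. This is only an analogy under stated assumptions; a raw centroid-distance cost matrix stands in for the learned cost model, and the paper's actual method computes costs with a deep network and also handles births, deaths, and divisions.

```python
# Minimal sketch of frame-to-frame cell linking as a linear assignment
# problem. The centroid-distance cost is a stand-in assumption; the
# paper's method learns its cost matrix with a deep network.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_cells(prev_centroids, curr_centroids):
    """Match cells across two frames by minimizing total centroid distance."""
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    # Cost matrix: Euclidean distance between every (prev, curr) pair.
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Two cells swap apparent order between frames; assignment still links
# each cell to its nearest detection.
matches = link_cells([(0, 0), (10, 10)], [(11, 9), (1, 1)])
print(matches)  # [(0, 1), (1, 0)]
```

In the full tracking problem, the cost matrix is augmented with extra rows and columns so that appearing and disappearing cells can also be assigned, which keeps the problem solvable with the same machinery.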

    A Foundation Model for Cell Segmentation

    Cells are the fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, models that have seen wide use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach to mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve state-of-the-art performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected with various imaging modalities. To enable accessibility, we integrate CellSAM into DeepCell Label to further accelerate human-in-the-loop labeling strategies for cellular imaging data. A deployed version of CellSAM is available at https://label-dev.deepcell.org/
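The detect-then-prompt pipeline described above can be sketched in miniature: a detector produces one bounding box per cell, and each box is then used as a prompt for segmentation. In this sketch, simple connected-component analysis on a binary mask stands in for the learned CellFinder detector (an assumption for illustration, not the paper's method).

```python
# Illustrative sketch: turning per-cell detections into SAM-style box
# prompts. Connected-component labeling stands in for the learned
# CellFinder object detector used in the actual pipeline.
import numpy as np
from scipy import ndimage

def detections_to_box_prompts(binary_mask):
    """Return one (x_min, y_min, x_max, y_max) box per detected object."""
    labeled, n_objects = ndimage.label(binary_mask)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        rows, cols = sl  # slices covering one labeled object
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes

# Toy image with two "cells"; each resulting box would be passed to the
# segmentation model as a prompt.
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:3] = 1
mask[5:7, 4:8] = 1
print(detections_to_box_prompts(mask))  # [(1, 1, 3, 3), (4, 5, 8, 7)]
```

The design point is that prompting decouples "where are the cells" from "what are their precise boundaries," so the boundary model can stay general across imaging domains.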

    DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes

    Deep learning is transforming the analysis of biological images, but applying these models to large datasets remains challenging. Here we describe the DeepCell Kiosk, cloud-native software that dynamically scales deep learning workflows to accommodate large imaging datasets. To demonstrate the scalability and affordability of this software, we identified cell nuclei in 10⁶ 1-megapixel images in ~5.5 h for ~US$250, with a cost below US$100 achievable depending on cluster configuration. The DeepCell Kiosk can be downloaded at https://github.com/vanvalenlab/kiosk-console; a persistent deployment is available at https://deepcell.org/
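The throughput and cost figures quoted above reduce to simple per-image numbers, worked out here as a back-of-the-envelope check (using only the quantities stated in the abstract):

```python
# Back-of-the-envelope figures from the quoted benchmark:
# 10^6 one-megapixel images in ~5.5 h for ~US$250.
n_images = 1_000_000
hours = 5.5
cost_usd = 250.0

images_per_second = n_images / (hours * 3600)
cost_per_thousand = cost_usd / n_images * 1000

print(f"{images_per_second:.1f} images/s")        # 50.5 images/s
print(f"${cost_per_thousand:.2f} per 1,000 images")  # $0.25 per 1,000 images
```

At roughly 50 images per second and a quarter of a cent per image, per-image cost rather than wall-clock time becomes the main dial, which is why cluster configuration can push the total below US$100.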

