75 research outputs found
Magnetic Ratcheting Cytometry Towards Manufacturing-Scale Separations of "Best in Class" CAR-T Cells
Adoptive cell therapies taking advantage of engineered Chimeric Antigen Receptors (CAR) or T-Cell Receptors (TCR) have shown incredible potential as "living drugs" that achieve personalized immunotherapies for cancer patients. However, variations in T cell transduction efficiency during genetic modification can lead to widely varied expression levels [1] (~2 orders of magnitude), which can dilute therapeutic effectiveness and potentially contribute to off-tumor toxicity [2]. While research has shown that isolating cell sub-populations with tightly controlled expression could lead to improved therapies [3], limitations of current cell separation technologies prevent implementation in manufacturing-scale workflows. Quantitative separation techniques (e.g., fluorescence-activated cell sorting, FACS) do not scale to production of therapeutic doses, and magnetic-activated cell sorting (MACS) techniques do not allow precise selection of cell sub-populations based on surface expression. Because of these limitations, enrichment of "best in class" CAR-T/TCR sub-populations at manufacturing-scale throughputs remains impractical and uneconomical.
[1] Chang ZL, Silver PA, Chen YY. Identification and selective expansion of functionally superior T cells expressing chimeric antigen receptors. J Transl Med. 2015;13:161. doi:10.1186/s12967-015-0519-8.
[2] Carels N, Spinassé LB, Tilli TM, Tuszynski JA. Toward precision medicine of breast cancer. Theor Biol Med Model. 2016;13:7. doi:10.1186/s12976-016-0035-4.
[3] Berger C, Jensen MC, Lansdorp PM, Gough M, Elliott C, Riddell SR. Adoptive transfer of effector CD8+ T cells derived from central memory cells establishes persistent T cell memory in primates. J Clin Invest. 2008;118:294–305.
Label-free isolation of prostate circulating tumor cells using Vortex microfluidic technology.
There has been increased interest in utilizing non-invasive "liquid biopsies" to identify biomarkers for cancer prognosis and monitoring, and to isolate genetic material that can predict response to targeted therapies. Circulating tumor cells (CTCs) have emerged as such a biomarker, providing both genetic and phenotypic information about tumor evolution, potentially from both primary and metastatic sites. Currently available CTC isolation approaches, including immunoaffinity and size-based filtration, have focused on high capture efficiency but suffer from lower purity and often long, manual sample preparation, which limits the use of captured CTCs for downstream analyses. Here, we describe the use of the microfluidic Vortex Chip for size-based isolation of CTCs from 22 patients with advanced prostate cancer and, from an enumeration study on 18 of these patients, find that we can capture CTCs with high purity (from 1.74 to 37.59%) and efficiency (from 1.88 to 93.75 CTCs/7.5 mL) in less than 1 h. Interestingly, more atypical large circulating cells were identified in five age-matched healthy donors (46-77 years old; 1.25-2.50 CTCs/7.5 mL) than in five healthy donors <30 years old (21-27 years old; 0.00 CTC/7.5 mL). Using a threshold calculated from the five age-matched healthy donors (3.37 CTCs/mL), we identified CTCs in 80% of the prostate cancer patients. We also found that a fraction of the cells collected (11.5%) did not express epithelial prostate markers (cytokeratin and/or prostate-specific antigen) and that some instead expressed markers of epithelial-mesenchymal transition, i.e., vimentin and N-cadherin. We also show that the purity and DNA yield of isolated cells is amenable to targeted amplification and next-generation sequencing, without whole genome amplification, identifying unique mutations in 10 of 15 samples and 0 of 4 healthy samples.
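The threshold-based classification described above can be sketched in a few lines. The abstract reports the derived threshold (3.37 CTCs/mL) but not the exact rule used to compute it, so the rule below (mean + 2 standard deviations of the healthy-donor background) and the donor counts are purely illustrative assumptions, not the paper's method:

```python
from statistics import mean, stdev

# Illustrative healthy-donor counts (cells per 7.5 mL draw); the actual
# donor values and threshold rule are assumptions for this sketch.
healthy_per_draw = [1.25, 1.50, 2.50, 1.75, 2.00]

# Normalize to cells/mL so samples of any draw volume are comparable.
healthy_per_ml = [c / 7.5 for c in healthy_per_draw]

# One common choice of cutoff: mean + 2 SD of the healthy background.
threshold = mean(healthy_per_ml) + 2 * stdev(healthy_per_ml)

def is_ctc_positive(count, volume_ml=7.5):
    """Classify a sample by its CTC concentration against the cutoff."""
    return count / volume_ml > threshold
```

With per-mL normalization, the same cutoff applies whether a patient sample is a standard 7.5 mL draw or a different volume.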
Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning
Live-cell imaging experiments have opened an exciting window into the behavior of living systems. While these experiments can produce rich data, the computational analysis of these datasets is challenging. Single-cell analysis requires that cells be accurately identified in each image and subsequently tracked over time. Increasingly, deep learning is being used to interpret microscopy images with single-cell resolution. In this work, we apply deep learning to the problem of tracking single cells in live-cell imaging data. Using crowdsourcing and a human-in-the-loop approach to data annotation, we constructed a dataset of over 11,000 trajectories of cell nuclei that includes lineage information. Using this dataset, we successfully trained a deep learning model to perform cell tracking within a linear programming framework. Benchmarking tests demonstrate that our method achieves state-of-the-art performance on the task of cell tracking with respect to multiple accuracy metrics. Further, we show that our deep learning-based method generalizes to perform cell tracking for both fluorescent and brightfield images of the cell cytoplasm, despite having never been trained on those data types. This enables analysis of live-cell imaging data collected across imaging modalities. A persistent cloud deployment of our cell tracker is available at http://www.deepcell.org
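The "linear programming framework" mentioned above amounts to frame-to-frame linking as an assignment problem: a learned model scores every candidate (cell in frame t, cell in frame t+1) pair, and a solver picks the globally cheapest matching. A minimal sketch, with hand-written costs standing in for the deep model's scores and brute force standing in for a real LP/Hungarian solver:

```python
from itertools import permutations

def link_frames(cost):
    """Match cells in frame t to cells in frame t+1 by minimizing total cost.

    cost[i][j] is a dissimilarity score for linking cell i to cell j; in the
    paper's framework such scores come from a trained deep model, but here
    they are just given numbers. Brute force over permutations is used for
    clarity; production trackers use the Hungarian algorithm or an LP solver.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(enumerate(best))  # (cell in frame t, matched cell in t+1)

# Three cells; a low cost means "likely the same cell in the next frame".
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.9, 0.1]]
```

Here `link_frames(cost)` recovers the identity matching, since the diagonal entries dominate. Division and cell death are typically handled by adding virtual "birth"/"death" nodes to the same assignment problem.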
A Foundation Model for Cell Segmentation
Cells are the fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, models that have seen wide use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach to mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve state-of-the-art performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected with various imaging modalities. To enable accessibility, we integrate CellSAM into DeepCell Label to further accelerate human-in-the-loop labeling strategies for cellular imaging data. A deployed version of CellSAM is available at https://label-dev.deepcell.org/
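The detect-then-prompt pipeline described in the abstract can be sketched as below. `detect_cells` and `segment_box` are hypothetical stand-ins: in CellSAM itself, detection is done by CellFinder and mask generation by SAM prompted with each bounding box; here both are mocked so the control flow is runnable on its own:

```python
import numpy as np

def detect_cells(image):
    """Stand-in detector: return bounding boxes as (x0, y0, x1, y1).
    A real detector (CellFinder) would predict these from the image."""
    return [(1, 1, 4, 4), (6, 6, 9, 9)]

def segment_box(image, box):
    """Stand-in for SAM prompted with a box: here, simply fill the box.
    SAM would instead return a pixel-accurate mask inside the prompt region."""
    mask = np.zeros(image.shape, dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def cellsam_pipeline(image):
    """One mask per detected cell: detect boxes, then prompt once per box."""
    return [segment_box(image, b) for b in detect_cells(image)]

image = np.zeros((10, 10))
masks = cellsam_pipeline(image)
```

The design point is that the detector, not a human, supplies SAM's prompts, which is what lets a single promptable model behave like an automatic segmenter across domains.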
DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes
Deep learning is transforming the analysis of biological images, but applying these models to large datasets remains challenging. Here we describe the DeepCell Kiosk, cloud-native software that dynamically scales deep learning workflows to accommodate large imaging datasets. To demonstrate the scalability and affordability of this software, we identified cell nuclei in 10⁶ 1-megapixel images in ~5.5 h for ~US$100, achievable depending on cluster configuration. The DeepCell Kiosk can be downloaded at https://github.com/vanvalenlab/kiosk-console; a persistent deployment is available at https://deepcell.org/
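Dynamic scaling of the kind described above is typically driven by the depth of a work queue: provision enough workers to drain pending images within a latency target, and scale to zero when idle. The rule and parameter names below are illustrative assumptions, not the Kiosk's actual policy (which is configured through its Kubernetes deployment):

```python
import math

def desired_replicas(queue_depth, per_pod_rate, target_latency_s,
                     min_pods=0, max_pods=100):
    """Queue-driven scaling rule, as a cluster autoscaler might apply it.

    queue_depth      -- images waiting to be processed
    per_pod_rate     -- images/second one worker pod can process
    target_latency_s -- how quickly the queue should be drained
    All names and the rule itself are illustrative, not the Kiosk's API.
    """
    if queue_depth == 0:
        return min_pods  # scale to zero when idle to save cost
    pods = math.ceil(queue_depth / (per_pod_rate * target_latency_s))
    return max(min_pods, min(max_pods, pods))
```

Clamping to `max_pods` is what bounds cost, which is why the per-run price in the abstract depends on cluster configuration.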
- …