
    Multiclass Yeast Segmentation in Microstructured Environments with Deep Learning

    Cell segmentation is a major bottleneck in extracting quantitative single-cell information from microscopy data. The challenge is exacerbated in microstructured environments. While deep learning approaches have proven useful for general cell segmentation tasks, existing segmentation tools for the yeast-microstructure setting rely on traditional machine learning approaches. Here we present convolutional neural networks trained for multiclass segmentation of individual yeast cells and for discerning them from cell-similar microstructures. We give an overview of the datasets recorded for training, validating and testing the networks, as well as a typical use case. We showcase the method's contribution to segmenting yeast in microstructured environments with a typical synthetic biology application in mind. The models achieve robust segmentation results, outperforming the previous state of the art in both accuracy and speed. The combination of fast and accurate segmentation is not only beneficial for a posteriori data processing; it also makes online monitoring of thousands of trapped cells, or closed-loop optimal experimental design, feasible from an image-processing perspective.
    Comment: IEEE CIBCB 2020 (accepted)
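At inference time, the multiclass idea above reduces to a per-pixel argmax over class scores. A minimal numpy sketch, with random logits standing in for a CNN's output and an assumed class order (background / cell / microstructure) that is not specified in the abstract:

```python
import numpy as np

# Random logits stand in for a CNN's per-pixel class scores over a
# 4x4 image; the 3-class order (0 = background, 1 = cell,
# 2 = microstructure) is an assumption for illustration.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 4))              # (classes, H, W)

# Softmax over the class axis gives per-pixel class probabilities.
e = np.exp(logits - logits.max(axis=0, keepdims=True))
probs = e / e.sum(axis=0, keepdims=True)

# Multiclass segmentation: per-pixel argmax over the class axis.
label_map = probs.argmax(axis=0)                 # values in {0, 1, 2}

# The multiclass formulation keeps cells and cell-similar
# microstructures in separate masks.
cell_mask = label_map == 1
trap_mask = label_map == 2
print(label_map.shape)  # (4, 4)
```

Separating the cell and microstructure classes at the pixel level is what lets downstream code count or track only the trapped cells.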

    Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images

    Cell segmentation for multi-modal microscopy images remains a challenge due to the complex textures, patterns, and cell shapes in these images. To tackle the problem, we first develop an automatic cell classification pipeline that labels the microscopy images based on their low-level image characteristics, and then train a classification model on the category labels. Afterward, we train a separate segmentation model for each category using the images in that category. In addition, we deploy two types of segmentation models to segment cells with roundish and irregular shapes, respectively. An efficient backbone model is used to improve the runtime of our segmentation models. Evaluated on the Tuning Set of the NeurIPS 2022 Cell Segmentation Challenge, our method achieves an F1-score of 0.8795, and the running time for all cases is within the time tolerance.
    Comment: Second place in the NeurIPS 2022 Cell Segmentation Challenge (https://neurips22-cellseg.grand-challenge.org/); released code: https://github.com/lhaof/CellSe
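The routing idea above can be illustrated with a toy low-level cue. The `elongation` measure, the threshold, and the model names below are hypothetical stand-ins for the paper's actual classification pipeline, which uses richer image characteristics:

```python
import numpy as np

def elongation(mask):
    """Bounding-box aspect ratio as a crude roundness cue.
    (The actual pipeline uses richer low-level image characteristics.)"""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return max(h, w) / min(h, w)

def route(mask, threshold=2.0):
    """Dispatch a cell mask to the model for roundish or irregular shapes."""
    return "roundish_model" if elongation(mask) <= threshold else "irregular_model"

# A filled disk (roundish) and a thin line (irregular) as toy inputs.
yy, xx = np.ogrid[:9, :9]
disk = (yy - 4) ** 2 + (xx - 4) ** 2 <= 16
line = np.zeros((9, 9), dtype=bool)
line[4, :] = True

print(route(disk))  # roundish_model
print(route(line))  # irregular_model
```

The design point is that the cue is computed once per image and only selects which specialized segmentation model runs, so the dispatch adds negligible cost.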

    Computer Vision Approaches for Mapping Gene Expression onto Lineage Trees

    This project concerns the early development of living organisms, a period accompanied by dynamic morphogenetic events: the number of cells increases, cells change shape, and cell fates are specified. To capture these dynamic morphological changes, one can employ a form of microscopy imaging such as Selective Plane Illumination Microscopy (SPIM), which offers single-cell resolution across time and hence allows observing the positions, velocities and trajectories of most cells in a developing embryo. Unfortunately, the dynamic genetic activity which underlies these morphological changes and influences cellular fate decisions is captured only as static snapshots, and often requires processing (sequencing or imaging) multiple distinct individuals. To set the stage for characterizing the factors which influence cellular fate, the static snapshots of multiple individuals and the SPIM recordings of other distinct individuals, which characterize the changes in morphology, must be brought into the same frame of reference. In this project, a computational pipeline is established which maps data from these various imaging modalities and specimens to a canonical frame of reference. The pipeline relies on three core building blocks: instance segmentation, tracking and registration. In this dissertation, I introduce EmbedSeg, my solution for instance segmentation of 2D and 3D (volume) image data; LineageTracer, my solution for tracking time-lapse (2D+t, 3D+t) recordings; and PlatyMatch, my solution for registration of volumes.
Errors from these building blocks accumulate, producing noisy estimates of gene expression for the digitized cells in the canonical frame of reference. These noisy estimates are processed to infer the underlying hidden state using a Hidden Markov Model (HMM) formulation. Lastly, wider dissemination of these methods requires an effective visualization strategy, and the employed approach is also discussed in the dissertation. The pipeline was designed with imaging volume data in mind, but can easily be extended to incorporate other data modalities, if available, such as single-cell RNA sequencing (scRNA-Seq) (more details are provided in the Discussion chapter). The methods elucidated in this dissertation provide a fertile playground for several future experiments and analyses. Some such potential experiments, as well as current weaknesses of the computational pipeline, are also discussed in the Discussion chapter.
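The HMM denoising step can be sketched with Viterbi decoding. The two-state model, the transition and emission probabilities, and the observation sequence below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

# Two hidden states per cell and time point: gene OFF (0) or ON (1).
# All probabilities and observations here are illustrative assumptions.
trans = np.array([[0.9, 0.1],   # sticky states: switching is rare
                  [0.1, 0.9]])
emit = np.array([[0.9, 0.1],    # P(observed readout | hidden state)
                 [0.2, 0.8]])
start = np.array([0.5, 0.5])
obs = [0, 0, 1, 0, 1, 1, 1]     # noisy binarized expression readouts

# Viterbi decoding in log space.
logT, logE, logS = np.log(trans), np.log(emit), np.log(start)
V = logS + logE[:, obs[0]]
back = []
for o in obs[1:]:
    scores = V[:, None] + logT        # scores[prev, next]
    back.append(scores.argmax(axis=0))
    V = scores.max(axis=0) + logE[:, o]

path = [int(V.argmax())]
for bp in reversed(back):
    path.append(int(bp[path[-1]]))
path.reverse()
print(path)  # [0, 0, 1, 1, 1, 1, 1]: the isolated 0 at t=3 is read as noise
```

Because the transition matrix makes switching expensive, single-frame glitches in the noisy pipeline output are absorbed as emission errors rather than spurious state changes.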

    Comparison of Artificial Intelligence based approaches to cell function prediction

    Predicting Retinal Pigment Epithelium (RPE) cell functions in stem cell implants from non-invasive bright-field microscopy imaging is a critical task for clinical deployment of stem cell therapies. Such cell function predictions can be carried out using Artificial Intelligence (AI) based models. In this paper we use Traditional Machine Learning (TML) and Deep Learning (DL) based AI models for cell function prediction tasks. TML models depend on feature engineering, whereas DL models perform feature engineering automatically but have higher modeling complexity. This work aims to explore the tradeoffs among three approaches using TML and DL based models for RPE cell function prediction from microscopy images, and to understand the relationships between pixel-, cell feature-, and implant label-level accuracies of the models. Among the three compared approaches, direct cell function prediction from images is slightly more accurate than the indirect approaches that use intermediate segmentation and/or feature engineering steps. We also evaluated accuracy variations with respect to model selection (five TML models and two DL models) and model configuration (with and without transfer learning). Finally, we quantified the relationships between segmentation accuracy and the number of samples used for training a model, between segmentation accuracy and cell feature error, and between cell feature error and accuracy of implant labels. We conclude that for the RPE cell data set there is a monotonic relationship between the number of training samples and image segmentation accuracy, and between segmentation accuracy and cell feature error, but no such relationship between segmentation accuracy and the accuracy of RPE implant labels.
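One way such monotonic relationships can be quantified is a rank correlation; the abstract does not say which statistic the authors used, and the numbers below are hypothetical, purely to illustrate the check:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (assumes no ties, as in this toy data)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical numbers, not the paper's: segmentation accuracy grows
# monotonically with training-set size...
n_train = np.array([50, 100, 200, 400, 800])
seg_acc = np.array([0.61, 0.70, 0.78, 0.83, 0.86])
# ...while implant-label accuracy does not track segmentation accuracy.
label_acc = np.array([0.80, 0.74, 0.82, 0.79, 0.81])

print(spearman(n_train, seg_acc))   # 1.0 -> perfectly monotonic
print(spearman(seg_acc, label_acc)) # low -> no monotonic relationship
```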

    Computational Image Analysis For Axonal Transport, Phenotypic Profiling, And Digital Pathology

    Recent advances in fluorescent probes, microscopy, and imaging platforms have revolutionized biology and medicine, generating multi-dimensional image datasets at unprecedented scales. Traditional, low-throughput methods of image analysis are inadequate to handle the increased “volume, velocity, and variety” that characterize the realm of big data. Thus, biomedical imaging requires a new set of tools, which include advanced computer vision and machine learning algorithms. In this work, we develop computational image analysis solutions to biological questions at the level of single molecules, cells, and tissues. At the molecular level, we dissect the regulation of dynein-dynactin transport initiation using in vitro reconstitution, single-particle tracking, super-resolution microscopy, live-cell imaging in neurons, and computational modeling. We show that at least two mechanisms regulate dynein transport initiation in neurons: (1) cytoplasmic linker proteins, which are regulated by phosphorylation, increase the capture radius around the microtubule, thus reducing the time cargo spends in a diffusive search; and (2) a spatial gradient of tyrosinated alpha-tubulin enriched in the distal axon increases the affinity of dynein-dynactin for microtubules. Together, these mechanisms support a multi-modal recruitment model in which interacting layers of regulation provide efficient, robust, and spatiotemporal control of transport initiation. At the cellular level, we develop and train deep residual convolutional neural networks on a large and diverse set of cellular microscopy images, then apply networks trained for one task as deep feature extractors for unsupervised phenotypic profiling in a different task.
We show that neural networks trained on one dataset encode robust image phenotypes that are sufficient to cluster subcellular structures by type and to separate drug compounds by mechanism of action, without additional training, supporting the strength and flexibility of this approach. Future applications include phenotypic profiling in image-based screens, where clustering genetic or drug treatments by image phenotypes may reveal novel relationships among genetic or pharmacologic pathways. Finally, at the tissue level, we apply deep learning pipelines in digital pathology to segment cardiac tissue and classify clinical heart failure using whole-slide images of cardiac histopathology. Together, these results demonstrate the power and promise of computational image analysis, computer vision, and deep learning in biological image analysis.
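Using a trained network as a frozen feature extractor followed by clustering can be sketched as below. Gaussian blobs stand in for the deep feature vectors, and the minimal two-cluster k-means is an illustrative stand-in for whatever clustering the authors actually applied:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian blobs play the role of deep features for
# two subcellular-structure phenotypes; in the work these features come
# from a residual CNN trained on a different task, with no retraining.
feats = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(20, 8)),   # phenotype A
    rng.normal(loc=3.0, scale=0.5, size=(20, 8)),   # phenotype B
])

def kmeans2(X, iters=20):
    """Minimal two-cluster k-means over frozen features.
    Deterministic init: first and last sample, one from each toy blob."""
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # (n, 2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans2(feats)
print(set(labels[:20].tolist()), set(labels[20:].tolist()))  # {0} {1}
```

The point of the design is that no gradient step touches the feature extractor: all the phenotypic structure is recovered by unsupervised clustering in the frozen embedding space.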