
    Comparison of Artificial Intelligence based approaches to cell function prediction

    Predicting Retinal Pigment Epithelium (RPE) cell functions in stem cell implants from non-invasive bright-field microscopy images is a critical task for the clinical deployment of stem cell therapies. Such cell function predictions can be carried out with Artificial Intelligence (AI) based models. In this paper, we apply Traditional Machine Learning (TML) and Deep Learning (DL) based AI models to cell function prediction tasks. TML models depend on feature engineering, whereas DL models perform feature engineering automatically but have higher modeling complexity. This work explores the tradeoffs among three approaches to RPE cell function prediction from microscopy images, using TML and DL based models, and examines the relationships among pixel-, cell feature-, and implant label-level accuracies. Among the three compared approaches, the direct approach, which predicts cell function from images, is slightly more accurate than the indirect approaches that use intermediate segmentation and/or feature engineering steps. We also evaluate accuracy variations with respect to model selection (five TML models and two DL models) and model configuration (with and without transfer learning). Finally, we quantify the relationships between segmentation accuracy and the number of samples used for training a model, between segmentation accuracy and cell feature error, and between cell feature error and implant label accuracy. We conclude that, for the RPE cell data set, there is a monotonic relationship between the number of training samples and image segmentation accuracy, and between segmentation accuracy and cell feature error, but no such relationship between segmentation accuracy and the accuracy of RPE implant labels.
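The direct-versus-indirect contrast in the abstract can be sketched in code. This is a hypothetical, minimal illustration under stated assumptions: the synthetic data, the "pixel" and "engineered feature" inputs, and the nearest-centroid classifier are stand-ins chosen for brevity, not the TML/DL models or features used in the paper.

```python
# Hypothetical sketch of the direct vs. indirect prediction strategies.
# Data and classifier are illustrative stand-ins, not the paper's models.
import numpy as np

def fit_centroids(X, y):
    """Per-class mean vectors (a minimal stand-in for a trained TML model)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)                    # implant-quality label
pixels = rng.normal(size=(100, 64)) + labels[:, None]    # raw "image" pixels
feats = rng.normal(size=(100, 3)) + labels[:, None]      # engineered cell features

# Direct approach: image pixels -> implant label, no intermediate steps.
direct = fit_centroids(pixels, labels)
# Indirect approach: engineered cell features -> implant label.
indirect = fit_centroids(feats, labels)

acc_direct = (predict(direct, pixels) == labels).mean()
acc_indirect = (predict(indirect, feats) == labels).mean()
print(f"direct accuracy: {acc_direct:.2f}, indirect accuracy: {acc_indirect:.2f}")
```

In the paper's terms, the indirect pipeline would additionally contain a segmentation step that produces the cell features, which is where the segmentation-accuracy and feature-error relationships enter.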

    Modeling and Analysis of Subcellular Protein Localization in Hyper-Dimensional Fluorescent Microscopy Images Using Deep Learning Methods

    Hyper-dimensional images are informative and increasingly common in biomedical research. However, machine learning methods for studying and processing hyper-dimensional images are underdeveloped. Most methods model only the mapping functions between input and output by focusing on the spatial relationship, while neglecting the temporal and causal relationships. In many cases, the spatial, temporal, and causal relationships are correlated and form a relationship complex. Therefore, modeling only the spatial relationship may result in inaccurate mapping function modeling and lead to undesired output. Despite its importance, modeling the relationship complex poses multiple challenges, including model complexity and data availability. The objective of this dissertation is to comprehensively study the mapping function modeling of the spatial-temporal and the spatial-temporal-causal relationships in hyper-dimensional data with deep learning approaches. The modeling methods are expected to accurately capture the complex relationships at the class level and the object level, so that new image processing tools can be developed based on these methods to study the relationships between targets in hyper-dimensional data. In this dissertation, four cases of the relationship complex are studied: class-level spatial-temporal-causal and spatial-temporal relationship modeling, and object-level spatial-temporal-causal and spatial-temporal relationship modeling. The modeling is achieved by deep learning networks that implicitly represent the mapping functions in their weight matrices. For the spatial-temporal relationship, because causal factor information is unavailable, discriminative modeling that relies only on available information is studied.
For the class-level and object-level spatial-temporal-causal relationships, generative modeling is studied, with a new deep learning network and three new tools proposed. For spatial-temporal relationship modeling, a state-of-the-art segmentation network was found to be the best performer among 18 networks. Based on accurate segmentation, we study object-level temporal dynamics and interactions through dynamics tracking. The multi-object portion tracking (MOPT) method allows object tracking at the subcellular level and identifies object events, including object birth, death, splitting, and fusion. The tracking results are 2.96% higher in consistent tracking accuracy and 35.48% higher in event identification accuracy than existing state-of-the-art tracking methods. For spatial-temporal-causal relationship modeling, the proposed four-dimensional reslicing generative adversarial network (4DR-GAN) captures the complex relationships between the input and the target proteins. Experimental results on four groups of proteins demonstrate the efficacy of 4DR-GAN compared with the widely used Pix2Pix network. On protein localization prediction (PLP), the localization predicted by 4DR-GAN is more accurate in subcellular localization, temporal consistency, and dynamics. Based on efficient PLP, the digital activation (DA) and digital inactivation (DI) tools allow precise spatial and temporal control over global and local localization manipulation. They allow researchers to study protein functions and causal relationships by observing the digital manipulation and the PLP output response.
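The event categories the abstract names (birth, death, splitting, fusion) can be illustrated with a toy overlap-based classifier on consecutive label masks. This is not the MOPT algorithm; it is a hedged sketch of the general idea, with made-up masks, that assigns an event to an object based on how many objects it overlaps in the adjacent frame.

```python
# Illustrative sketch (not the MOPT method itself): classify object events
# between two consecutive integer label masks by counting overlaps.
import numpy as np

def classify_events(prev_labels, next_labels):
    """Return (event, object_id) pairs inferred from two label masks.

    An object with no successor "dies"; with multiple successors it "splits";
    a next-frame object with no predecessor is a "birth"; with multiple
    predecessors it is a "fusion". Background is label 0.
    """
    prev_ids = set(np.unique(prev_labels)) - {0}
    next_ids = set(np.unique(next_labels)) - {0}

    # Successors of each previous-frame object (next-frame labels it overlaps).
    succ = {p: set(np.unique(next_labels[prev_labels == p])) - {0}
            for p in prev_ids}
    # Predecessors of each next-frame object.
    pred = {n: set(np.unique(prev_labels[next_labels == n])) - {0}
            for n in next_ids}

    events = []
    for p, s in succ.items():
        if not s:
            events.append(("death", p))
        elif len(s) > 1:
            events.append(("split", p))
    for n, ps in pred.items():
        if not ps:
            events.append(("birth", n))
        elif len(ps) > 1:
            events.append(("fusion", n))
    return events

# Toy example: objects 1 and 2 merge into 4, and object 5 newly appears.
prev = np.array([[1, 1, 2, 2, 0]])
nxt = np.array([[4, 4, 4, 4, 5]])
print(classify_events(prev, nxt))
```

A production tracker would additionally gate these decisions on overlap size and resolve ambiguous assignments globally, which is where consistent tracking accuracy comes in.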