
    Comparison of Machine Learning Methods for Classification of Alexithymia in Individuals With and Without Autism from Eye-Tracking Data

    Alexithymia describes a psychological state in which individuals struggle to feel and express their emotions. Individuals with alexithymia may also have more difficulty understanding the emotions of others and may direct atypical attention to the eyes when recognizing emotions. This is known to affect individuals with Autism Spectrum Disorder (ASD) differently than neurotypical (NT) individuals. Using a public dataset of eye-tracking data from seventy individuals with and without autism who have been assessed for alexithymia, we train multiple traditional machine learning models for alexithymia classification, including support vector machines, logistic regression, decision trees, random forests, and multilayer perceptrons. To correct for class imbalance, we evaluate four different oversampling strategies: no oversampling, random oversampling, SMOTE, and ADASYN. We consider three different groups of data: ASD, NT, and combined ASD+NT. We use a nested leave-one-out cross-validation strategy to perform hyperparameter selection and evaluate model performance. We achieve F1 scores of 90.00% and 51.85% using decision trees for the ASD and NT groups, respectively, and 72.41% using SVM for the combined ASD+NT group. Splitting the data into ASD and NT groups improves recall for both groups compared to the combined model.
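    As a rough illustration of the evaluation protocol described above, the sketch below nests hyperparameter selection inside a leave-one-out loop and applies oversampling only to training folds via an imbalanced-learn pipeline. The placeholder data, the SVM classifier, and the parameter grid are illustrative assumptions, not the paper's actual features or setup.

```python
# Minimal sketch of nested leave-one-out CV with fold-safe oversampling.
# X, y, and the parameter grid are placeholders, not the paper's features.
import numpy as np
from sklearn.model_selection import LeaveOneOut, GridSearchCV, cross_val_predict
from sklearn.svm import SVC
from sklearn.metrics import f1_score
from imblearn.pipeline import Pipeline   # applies SMOTE to training folds only
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X, y = rng.random((70, 20)), rng.integers(0, 2, 70)   # placeholder subjects

pipe = Pipeline([("oversample", SMOTE()), ("clf", SVC())])
grid = {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]}

# Inner LOO picks hyperparameters; outer LOO yields one held-out prediction
# per subject, from which a single pooled F1 score is computed. (This is
# expensive: the inner loop refits for every outer fold.)
inner = GridSearchCV(pipe, grid, cv=LeaveOneOut(), scoring="accuracy")
y_pred = cross_val_predict(inner, X, y, cv=LeaveOneOut())
print("F1:", f1_score(y, y_pred))
```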

    Context Aware Deep Learning for Brain Tumor Segmentation, Subtype Classification, and Survival Prediction Using Radiology Images

    A brain tumor is an uncontrolled growth of cancerous cells in the brain. Accurate segmentation and classification of tumors are critical for subsequent prognosis and treatment planning. This work proposes context-aware deep learning for brain tumor segmentation, subtype classification, and overall survival prediction using structural multimodal magnetic resonance images (mMRI). We first propose a 3D context-aware deep learning method that considers the uncertainty of tumor location in the radiology mMRI image sub-regions to obtain tumor segmentation. We then apply a regular 3D convolutional neural network (CNN) to the tumor segments to achieve tumor subtype classification. Finally, we perform survival prediction using a hybrid method of deep learning and machine learning. To evaluate performance, we apply the proposed methods to the Multimodal Brain Tumor Segmentation Challenge 2019 (BraTS 2019) dataset for tumor segmentation and overall survival prediction, and to the dataset of the Computational Precision Medicine Radiology-Pathology (CPM-RadPath) Challenge on Brain Tumor Classification 2019 for tumor classification. We also perform an extensive performance evaluation based on popular evaluation metrics, such as the Dice score coefficient, Hausdorff distance at the 95th percentile (HD95), classification accuracy, and mean squared error. The results suggest that the proposed method offers robust tumor segmentation and survival prediction. Furthermore, the tumor classification results in this work ranked second place in the testing phase of the 2019 CPM-RadPath global challenge.
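    For the subtype classification stage, a "regular 3D CNN" over the segmented tumor volume might look like the PyTorch sketch below. This is an illustrative model, not the authors' architecture; the four input channels (one per mMRI modality), the layer sizes, and the three output classes are assumptions.

```python
# Illustrative 3D CNN for tumor subtype classification from a segmented
# mMRI volume. Architecture details are assumptions, not the paper's model.
import torch
import torch.nn as nn

class TumorSubtypeCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=3):  # 4 modalities, 3 subtypes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling copes with varying volumes
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):             # x: (batch, channels, D, H, W)
        z = self.features(x).flatten(1)
        return self.classifier(z)

logits = TumorSubtypeCNN()(torch.randn(1, 4, 64, 64, 64))
```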

    Opioid Use Disorder Prediction Using Machine Learning of fMRI Data

    According to the Centers for Disease Control and Prevention (CDC), more than 932,000 people in the US have died from a drug overdose since 1999. About 75% of drug overdose deaths in 2020 involved an opioid, which suggests that the US is in an opioid overdose epidemic. Identifying individuals likely to develop opioid use disorder (OUD) can help public health officials plan effective prevention, intervention, drug overdose, and recovery policies. Further, a better understanding of overdose prediction and of the neurobiology of OUD may lead to new therapeutics. In recent years, very limited work has been done using statistical analysis of functional magnetic resonance imaging (fMRI) methods to analyze the neurobiology of opioid addiction in humans. In this work, for the first time in the literature, we propose a machine learning (ML) framework to predict OUD utilizing the clinical fMRI-BOLD (blood oxygen level dependent) signal from OUD users and healthy controls (HC). We first obtain the features and validate them against those extracted from selected brain subcortical areas identified in our previous statistical analysis of the fMRI-BOLD signal discriminating OUD subjects from HC. The selected features from three representative brain networks, namely the default mode network (DMN), salience network (SN), and executive control network (ECN), are then processed for OUD and HC subject prediction. Our leave-one-out cross-validated results with sixty-nine OUD and HC cases show 88.40% prediction accuracy. These results suggest that the proposed techniques may be utilized to gain a greater understanding of the neurobiology of OUD, leading to novel therapeutic development.
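    The prediction step might be sketched as below, assuming feature vectors have already been extracted from the DMN, SN, and ECN signals for each subject. The linear SVM, the feature dimensions, and the random placeholder data are assumptions, not the paper's pipeline.

```python
# Minimal sketch of leave-one-out prediction from brain-network features.
# Feature extraction from the fMRI-BOLD signal is assumed already done.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder: 69 subjects, per-network feature blocks concatenated column-wise.
dmn, sn, ecn = (rng.random((69, 10)) for _ in range(3))
X = np.hstack([dmn, sn, ecn])
y = rng.integers(0, 2, 69)  # 1 = OUD, 0 = healthy control (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2%}")
```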

    Facial Landmark Feature Fusion in Transfer Learning of Child Facial Expressions

    Automatic classification of child facial expressions is challenging due to the scarcity of image samples with annotations. Transfer learning of deep convolutional neural networks (CNNs), pretrained on adult facial expressions, can be effectively finetuned for child facial expression classification using limited facial images of children. Recent work inspired by facial age estimation and age-invariant face recognition proposes fusing facial landmark features with deep representation learning to augment facial expression classification performance. We hypothesize that deep transfer learning of child facial expressions may also benefit from fusing facial landmark features. Our proposed model architecture integrates two input branches: a CNN branch for image feature extraction and a fully connected branch for processing landmark-based features. The model-derived features of these two branches are concatenated into a latent feature vector for downstream expression classification. The architecture is trained on an adult facial expression classification task, and the trained model is then finetuned to perform child facial expression classification. The combined feature fusion and transfer learning approach is compared against multiple baselines: training on adult expressions only (adult baseline), training on child expressions only (child baseline), and transfer learning from adult to child data. We also evaluate the effect of feature fusion without transfer learning on classification performance. Training on child data, we find that feature fusion improves the 10-fold cross-validation mean accuracy from 80.32% to 83.72% with similar variance. The proposed fine-tuning with landmark feature fusion on child expressions yields the best mean accuracy of 85.14%, a more than 30% improvement over the adult baseline and nearly 5% improvement over the child baseline.
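    The two-branch fusion architecture could be sketched in PyTorch as below. The layer sizes, a 68-point landmark layout, grayscale 96x96 inputs, and seven expression classes are illustrative assumptions; the paper's exact backbone is not reproduced.

```python
# Illustrative two-branch fusion model: CNN image features concatenated with
# fully connected landmark features before classification. Sizes are assumed.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_landmarks=68, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(                      # image branch
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.landmark_fc = nn.Sequential(              # landmark branch
            nn.Linear(n_landmarks * 2, 64), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 64, n_classes)

    def forward(self, image, landmarks):
        # Concatenate both branches into the latent fusion vector.
        fused = torch.cat([self.cnn(image), self.landmark_fc(landmarks)], dim=1)
        return self.head(fused)

# For transfer learning, this model would first be trained on adult data,
# then finetuned on child data (e.g., with a lower learning rate).
model = FusionNet()
logits = model(torch.randn(2, 1, 96, 96), torch.randn(2, 136))
```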

    Deep Learning with Context Encoding for Semantic Brain Tumor Segmentation and Patient Survival Prediction

    One of the most challenging problems encountered in deep learning-based brain tumor segmentation models is the misclassification of tumor tissue classes due to the inherent imbalance in class representation. Consequently, strong regularization methods are typically considered when training large-scale deep learning models for brain tumor segmentation to overcome undue bias towards representative tissue types. However, these regularization methods tend to be computationally expensive and may not guarantee the learning of features representing all tumor tissue types that exist in the input MRI examples. Recent work in context encoding with deep CNN models has shown promise for semantic segmentation of natural scenes, with particular improvements in small object segmentation due to improved representative feature learning. Accordingly, we propose a novel, efficient 3D CNN-based deep learning framework with context encoding for semantic brain tumor segmentation using multimodal magnetic resonance imaging (mMRI). The context encoding module in the proposed model enforces rich, class-dependent feature learning to improve the overall multi-label segmentation performance. We subsequently utilize the context-augmented features in a machine learning-based survival prediction pipeline to improve prediction performance. The proposed method is evaluated using the publicly available 2019 Brain Tumor Segmentation (BraTS) and survival prediction challenge dataset. The results show that the proposed method significantly improves both tumor tissue segmentation performance and overall survival prediction performance.
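    One plausible reading of the context encoding module, in the spirit of EncNet-style semantic encoding losses, is an auxiliary head that predicts which tissue classes are present anywhere in the volume, penalizing the network when a rare class is missed. The sketch below illustrates that idea only; it is not the authors' implementation.

```python
# Illustrative context-encoding auxiliary branch: global context from the 3D
# feature map predicts class presence, encouraging class-dependent features.
import torch
import torch.nn as nn

class ContextEncodingHead(nn.Module):
    def __init__(self, channels, n_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)        # global context vector
        self.fc = nn.Linear(channels, n_classes)   # class-presence logits

    def forward(self, feat):                       # feat: (B, C, D, H, W)
        ctx = self.pool(feat).flatten(1)
        return self.fc(ctx)

# Auxiliary "semantic encoding" loss: multi-label BCE against a vector that
# marks which tumor tissue classes appear in the ground-truth mask. This loss
# would be added to the main voxel-wise segmentation loss during training.
head = ContextEncodingHead(channels=32, n_classes=4)
feat = torch.randn(2, 32, 16, 16, 16)
present = torch.randint(0, 2, (2, 4)).float()
se_loss = nn.BCEWithLogitsLoss()(head(feat), present)
```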

    Optical and Hybrid Imaging and Processing for Big Data Problems

    (First paragraph) The scientific community has been dealing with big data for a long time. Due to advancements in sensing, networking, and storage technology, other domains such as business, health, and social media have followed. Data are considered the gold of the 21st century and are being collected, stored, and analyzed at a rapid pace. The amount of data being collected creates a compelling case for investing in hardware and software research to support generating even more data from new sensors and with better quality. It also creates a compelling case for investing in research and development of new hardware and software for data analytics. This special section of Optical Engineering explores the optical and hybrid imaging and processing technology that will enable capturing and analyzing large amounts of data or help stream the data for further exploration and analysis.

    Joint Modeling of RNAseq and Radiomics Data for Glioma Molecular Characterization and Prediction

    RNA sequencing (RNAseq) is a recent technology that profiles gene expression by measuring the relative frequency of RNAseq reads. RNAseq read count data are increasingly used in oncologic care, while radiology features (radiomics) have also been gaining utility in radiology practice for tasks such as disease diagnosis, monitoring, and treatment planning. However, contemporary literature lacks appropriate RNA-radiomics (henceforth, radiogenomics) joint modeling in which the RNAseq distribution is adaptive and preserves the count nature of RNAseq read data for glioma grading and prediction. The Negative Binomial (NB) distribution may be useful to model RNAseq read count data in a way that addresses these shortcomings. In this study, we propose a novel radiogenomics-NB model for glioma grading and prediction. Our radiogenomics-NB model is developed based on differentially expressed RNAseq and selected radiomics/volumetric features which characterize the tumor volume and sub-regions. The NB distribution is fitted to the RNAseq count data, and a log-linear regression model is assumed to link the estimated NB mean to the radiomics features. Three radiogenomics-NB molecular mutation models (IDH mutation, 1p/19q codeletion, and ATRX mutation) are investigated. Additionally, we explore gender-specific effects on the radiogenomics-NB models. Finally, we compare the performance of the three proposed mutation prediction radiogenomics-NB models with well-known methods in the literature: Negative Binomial Linear Discriminant Analysis (NBLDA), differentially expressed RNAseq with Random Forest (RF-genomics), radiomics and differentially expressed RNAseq with Random Forest (RF-radiogenomics), and Voom-based count transformation combined with the nearest shrunken centroid classifier (VoomNSC). Our analysis shows that the proposed radiogenomics-NB model significantly outperforms the competing models (ANOVA test, p < 0.05) for prediction of IDH and ATRX mutations, and offers similar performance for prediction of 1p/19q codeletion.
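    The log-linear link between radiomics features and the NB mean of an RNAseq read count can be illustrated with a generalized linear model, as in the sketch below. The fixed dispersion (alpha), the placeholder data, and the single-gene framing are assumptions; the paper's estimation and feature-selection procedures are not reproduced.

```python
# Illustrative NB regression: log(mu) = X @ beta, with NB-distributed counts
# around mu. Data and the fixed dispersion alpha are placeholder assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
radiomics = rng.normal(size=(n, 3))              # e.g., volumetric features
X = sm.add_constant(radiomics)                   # intercept + radiomics
counts = rng.negative_binomial(5, 0.3, size=n)   # placeholder read counts

model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
print(result.summary())                          # fitted log-linear coefficients
```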