
    A Wide and Deep Neural Network for Survival Analysis from Anatomical Shape and Tabular Clinical Data

    We introduce a wide and deep neural network for predicting progression from mild cognitive impairment to Alzheimer's disease. Information from anatomical shape and tabular clinical data (demographics, biomarkers) is fused in a single neural network. The network is invariant to shape transformations and avoids the need to identify point correspondences between shapes. To account for right-censored time-to-event data, i.e., when it is only known that a patient had not developed Alzheimer's disease up to a particular time point, we employ a loss commonly used in survival analysis. Our network is trained end-to-end to combine information from a patient's hippocampus shape and clinical biomarkers. Our experiments on data from the Alzheimer's Disease Neuroimaging Initiative demonstrate that the proposed model learns a shape descriptor that augments clinical biomarkers, outperforming both a deep neural network on shape alone and a linear model on common clinical biomarkers. (Comment: Data and Machine Learning Advances with Multiple Views Workshop, ECML-PKDD 2019)
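    To make the survival objective concrete, the sketch below shows a minimal wide-and-deep model of this kind in PyTorch: a deep branch embeds a precomputed shape descriptor, clinical features enter a linear (wide) branch, and the network is trained with a negative Cox partial log-likelihood, the loss most commonly used for right-censored time-to-event data. All layer sizes and names are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a wide-and-deep survival model (not the authors'
# code): a deep branch embeds a shape descriptor, a wide (linear) branch
# handles tabular clinical features, and a Cox partial-likelihood loss
# handles right-censored time-to-event data.
import torch
import torch.nn as nn

class WideAndDeepSurvival(nn.Module):
    def __init__(self, shape_dim: int, clinical_dim: int, hidden: int = 64):
        super().__init__()
        # Deep branch: non-linear embedding of the shape descriptor.
        self.deep = nn.Sequential(
            nn.Linear(shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Wide branch: clinical biomarkers enter linearly, as in a Cox
        # model. The final layer outputs a single log-risk score.
        self.risk = nn.Linear(hidden + clinical_dim, 1, bias=False)

    def forward(self, shape_feat, clinical):
        fused = torch.cat([self.deep(shape_feat), clinical], dim=1)
        return self.risk(fused).squeeze(1)  # log hazard ratio per patient

def cox_ph_loss(log_risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation).

    time:  observed or censoring time; event: 1 if conversion to AD was
    observed, 0 if right-censored. Sorting by descending time makes the
    risk set of patient i everyone still at risk at time[i].
    """
    order = torch.argsort(time, descending=True)
    log_risk, event = log_risk[order], event[order].float()
    log_cum_hazard = torch.logcumsumexp(log_risk, dim=0)
    return -((log_risk - log_cum_hazard) * event).sum() / event.sum().clamp(min=1)
```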

    Multimodal machine learning in medical screenings

    The healthcare industry, with its high demand and standards, has long been considered a crucial area for technology-based innovation. However, the medical field often relies on experience-based evaluation. Limited resources, overloaded capacity, and a lack of accessibility can hinder timely medical care and diagnosis delivery. In light of these challenges, automated medical screening as a decision-making aid is highly recommended. With the increasing availability of data and the need to explore the complementary effect among modalities, multimodal machine learning has emerged as a promising technology. Its impact has been witnessed across a wide range of domains, prompting the question of how far machine learning can be leveraged to automate processes in even more complex and high-risk sectors. This paper delves into multimodal machine learning for automated medical screening and evaluates its potential for mental disorder detection, a highly important area of healthcare. First, we conduct a scoping review targeted at high-impact papers to highlight the trends and directions of multimodal machine learning in screening prevalent mental disorders such as depression, stress, and bipolar disorder. The review provides a comprehensive list of popular datasets and extensively studied modalities, and proposes an end-to-end pipeline for multimodal machine learning applications, covering essential steps from preprocessing, representation, and fusion to modelling and evaluation. While cross-modality interaction has been considered a promising way to leverage fusion among multiple modalities, the number of existing multimodal fusion methods employing this mechanism is rather limited. This study investigates multimodal fusion in more detail through the proposal of Autofusion, an autoencoder-infused fusion technique that harnesses the cross-modality interaction among different modalities. The technique is evaluated on DementiaBank's Pitt corpus for detecting Alzheimer's disease. Autofusion achieves a promising performance of 79.89% accuracy, 83.85% recall, 81.72% precision, and 82.47% F1, and consistently outperforms all unimodal methods by an average of 5.24% across all metrics. Our method also consistently outperforms early fusion and late fusion; against the late-fusion hard-voting technique in particular, it wins by an average of 20% across all metrics. Further, empirical results show that the cross-modality interaction term enhances model performance by 2-3% across metrics. This research highlights the promising impact of cross-modality interaction in multimodal machine learning and calls for further research to unlock its full potential.
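    The abstract does not specify Autofusion's architecture, so the following PyTorch sketch is speculative: two modality encoders, an explicit cross-modality interaction term (here an element-wise product of the two embeddings), and an autoencoder over the fused vector whose reconstruction loss is added to the classification loss. All module names and layer sizes are assumptions.

```python
# Speculative sketch of an autoencoder-infused fusion ("Autofusion"-style)
# module; the element-wise-product interaction term and all sizes are
# assumptions, as the abstract does not specify the architecture.
import torch
import torch.nn as nn

class AutofusionSketch(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, latent: int = 32):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, latent), nn.ReLU())  # e.g. text
        self.enc_b = nn.Sequential(nn.Linear(dim_b, latent), nn.ReLU())  # e.g. audio
        # Autoencoder over the fused vector: the reconstruction objective
        # regularises the joint representation.
        fused_dim = 3 * latent  # [z_a, z_b, z_a * z_b]
        self.bottleneck = nn.Linear(fused_dim, latent)
        self.decoder = nn.Linear(latent, fused_dim)
        self.classifier = nn.Linear(latent, 2)  # AD vs. control

    def forward(self, x_a, x_b):
        z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
        # Cross-modality interaction: the element-wise product couples the
        # two embeddings, so fusion is more than a concatenation.
        fused = torch.cat([z_a, z_b, z_a * z_b], dim=1)
        code = self.bottleneck(fused)
        return self.classifier(code), self.decoder(code), fused

def autofusion_loss(logits, recon, fused, labels, alpha=0.5):
    # Classification loss plus autoencoder reconstruction loss.
    ce = nn.functional.cross_entropy(logits, labels)
    mse = nn.functional.mse_loss(recon, fused)
    return ce + alpha * mse
```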

    Towards AI-Assisted Disease Diagnosis: Learning Deep Feature Representations for Medical Image Analysis

    Artificial Intelligence (AI) has impacted our lives in many meaningful ways. For our research, we focus on improving disease diagnosis systems by analyzing medical images using AI, specifically deep learning technologies. Recent advances in deep learning are leading to enhanced performance for medical image analysis and computer-aided disease diagnosis. In this dissertation, we explore a major research area in medical image analysis: image classification, the process of assigning an image a label from a fixed set of categories. We focus on the problem of Alzheimer's Disease (AD) diagnosis from 3D structural Magnetic Resonance Imaging (sMRI) and Positron Emission Tomography (PET) brain scans. Alzheimer's Disease is a severe neurological disorder. We address challenges related to Alzheimer's Disease diagnosis and propose several models for improved diagnosis, analyzing 3D sMRI and PET brain scans to identify the current stage of the disease: Normal Control (CN), Mild Cognitive Impairment (MCI), or Alzheimer's Disease (AD). This dissertation demonstrates ways to improve the performance of a Convolutional Neural Network (CNN) for Alzheimer's Disease diagnosis. In addition, we present approaches to solving the class-imbalance problem and improving classification performance with limited training data for medical image analysis. To understand the decisions of the CNN, we present methods to visualize the behavior of a CNN model for disease diagnosis. As a case study, we analyzed brain PET scans of AD and CN patients to see how the CNN discriminates among data samples of different classes. Additionally, this dissertation proposes a novel approach to generating synthetic medical images using Generative Adversarial Networks (GANs). Working with a limited dataset and a small number of annotated samples makes it difficult to develop a robust automated disease diagnosis model. Our proposed model addresses this issue and generates brain MRI and PET images for the three stages of Alzheimer's Disease: Normal Control (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). Our approach can be generalized to create synthetic data for other medical image analysis problems and help develop better disease diagnosis models.
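    As an illustration of the GAN-based synthesis idea, the sketch below shows a minimal class-conditional GAN in PyTorch in which a single generator is conditioned on the disease stage (CN, MCI, or AD). It uses 2D slices and fully connected layers for brevity; the dissertation's actual architecture is not reproduced here, and all sizes are assumptions.

```python
# Minimal class-conditional GAN sketch (assumed architecture, 2D slices
# for brevity; not the dissertation's model). The generator is
# conditioned on the disease stage: 0=CN, 1=MCI, 2=AD.
import torch
import torch.nn as nn

N_STAGES, Z_DIM, IMG = 3, 100, 64  # stages, noise size, image side

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_STAGES, Z_DIM)  # stage label -> vector
        self.net = nn.Sequential(
            nn.Linear(2 * Z_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh(),   # pixel values in [-1, 1]
        )

    def forward(self, z, stage):
        # Concatenating noise with the label embedding lets one model
        # generate CN, MCI, and AD images on demand.
        h = torch.cat([z, self.embed(stage)], dim=1)
        return self.net(h).view(-1, 1, IMG, IMG)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_STAGES, IMG * IMG)
        self.net = nn.Sequential(
            nn.Linear(2 * IMG * IMG, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit, conditioned on the stage
        )

    def forward(self, img, stage):
        h = torch.cat([img.flatten(1), self.embed(stage)], dim=1)
        return self.net(h)

# Usage: g = Generator(); fake = g(torch.randn(8, Z_DIM), torch.randint(0, 3, (8,)))
```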

    MC-ViViT: Multi-branch Classifier-ViViT to Detect Mild Cognitive Impairment in Older Adults using Facial Videos

    Deep machine learning models, including Convolutional Neural Networks (CNNs), have been successful in detecting Mild Cognitive Impairment (MCI) using medical images, questionnaires, and videos. This paper proposes a novel Multi-branch Classifier-Video Vision Transformer (MC-ViViT) model to distinguish participants with MCI from those with normal cognition by analyzing facial features. The data come from I-CONECT, a behavioral intervention trial aimed at improving cognitive function through frequent video chats. MC-ViViT extracts spatiotemporal features of videos in one branch and augments representations with the MC module. The I-CONECT dataset is challenging because it is imbalanced, containing Hard-Easy and Positive-Negative samples, which impedes the performance of MC-ViViT. We propose a loss function for Hard-Easy and Positive-Negative samples (HP loss) that combines Focal loss and AD-CORRE loss to address the imbalance problem. Our experimental results on the I-CONECT dataset show the great potential of MC-ViViT in predicting MCI, with a high accuracy of 90.63% on some of the interview videos. (Comment: 12 pages, 5 tables, 5 figures, 17 equations)
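    Of the two components named for HP loss, focal loss has a standard published form (Lin et al., 2017); the sketch below implements it and combines it with a placeholder for the AD-CORRE term, since the paper's exact formulation and weighting are not given in the abstract.

```python
# Sketch of the focal-loss component of the HP loss described in the
# abstract; the AD-CORRE term and the combination weight `lam` are
# placeholders, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0):
    """Focal loss (Lin et al., 2017): down-weights easy, well-classified
    samples so training focuses on hard ones."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    return (-(1 - pt) ** gamma * log_pt).mean()

def hp_loss(logits, targets, correlation_term, lam: float = 0.5):
    # Hypothetical weighted combination of focal loss with a
    # correlation-based (AD-CORRE-style) term.
    return focal_loss(logits, targets) + lam * correlation_term
```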

    Alzheimer's Disease Diagnosis using Machine Learning: A Review

    Alzheimer's disease (AD) is a severe neurodegenerative disease that destroys brain cells and thus leads to progressive memory loss. It is a fatal brain disease that mostly affects the elderly. It drives the decline of the brain's cognitive and biological functions and progressively shrinks the brain, a process known as atrophy. For an accurate diagnosis of Alzheimer's disease, cutting-edge methods like machine learning are essential. Recently, machine learning has gained considerable attention and popularity in the medical industry. As the illness progresses, those with Alzheimer's have a far more difficult time doing even the most basic tasks, and in the worst case, their brain completely stops functioning. A person's likelihood of having early-stage Alzheimer's disease may be determined using machine learning methods. In this analysis, papers on Alzheimer's disease diagnosis based on deep learning and reinforcement learning techniques, published between 2008 and 2023 and found via Google Scholar, were studied. Sixty relevant papers obtained from the search were considered for this study. These papers were analysed based on the biomarkers of AD and the machine-learning techniques used. The analysis shows that deep learning methods have an immense ability to extract features and classify AD with good accuracy. Deep reinforcement learning (DRL) methods have not been used much in the field of image processing. The comparison of deep learning and reinforcement learning suggests that the scope of DRL in dementia detection needs to be explored. (Comment: 10 pages and 3 figures)

    Deep Interpretability Methods for Neuroimaging

    Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging (fMRI) data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing these data to low-dimensional features and focusing on the most predictive ones comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework that learns from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training. We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. Results demonstrate that the proposed framework enables learning fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively review the deep interpretability literature in the neuroimaging domain; our analysis reveals the ongoing trends in interpretability practices in neuroimaging studies and identifies gaps that should be addressed for effective human-machine collaboration in this domain. Finally, this dissertation proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. Because GGIG can identify the contours of discriminative regions in the input space, it may be useful in medical imaging tasks where fine-grained localization as an explanation is beneficial.
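    For reference, the sketch below implements plain integrated gradients, the baseline GGIG is compared against: it approximates the path integral of gradients from a baseline input to the actual input with a Riemann sum. This is a generic textbook version (Sundararajan et al., 2017), not the dissertation's GGIG.

```python
# Plain integrated gradients, the baseline that GGIG is reported to
# outperform; a generic sketch, not the dissertation's implementation.
import torch

def integrated_gradients(model, x, target, baseline=None, steps: int = 50):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    with a Riemann sum over `steps` points on the straight-line path."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # common choice: all-zero input
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolated point on the path from baseline to input.
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]  # target-class score
        score.backward()  # accumulate gradient w.r.t. this path point
        grads += point.grad
    return (x - baseline) * grads / steps  # attribution per input feature
```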

    Latent feature representation with stacked auto-encoder for AD/MCI diagnosis

    Recently, there has been great interest in computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Unlike previous methods that considered simple low-level features, such as gray matter tissue volumes from MRI and mean signal intensities from PET, in this paper we propose a deep learning-based latent feature representation with a stacked auto-encoder (SAE). We believe that complicated, non-linear latent patterns, such as relations among features, are inherent in the low-level features. Combining this latent information with the original features helps build a robust model for AD/MCI classification with high diagnostic accuracy. Furthermore, thanks to the unsupervised nature of pre-training in deep learning, we can use target-unrelated samples to initialize the parameters of the SAE and then find optimal parameters by fine-tuning with the target-related samples, further enhancing classification performance across four binary classification problems: AD vs. healthy normal control (HC), MCI vs. HC, AD vs. MCI, and MCI converter (MCI-C) vs. MCI non-converter (MCI-NC). In our experiments on the ADNI dataset, we validated the effectiveness of the proposed method, achieving accuracies of 98.8%, 90.7%, 83.7%, and 83.3% for AD/HC, MCI/HC, AD/MCI, and MCI-C/MCI-NC classification, respectively. We believe that deep learning can shed new light on neuroimaging data analysis, and our work demonstrates the applicability of this method to brain disease diagnosis.
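    The two-phase scheme described here (unsupervised pre-training on target-unrelated samples, then supervised fine-tuning on labeled data) can be sketched as follows in PyTorch; the concatenation of latent and original features follows the abstract's description, while the layer sizes and activation choices are assumptions.

```python
# Illustrative stacked auto-encoder (SAE) workflow: unsupervised
# pre-training on unlabeled (target-unrelated) samples, then supervised
# fine-tuning on labeled AD/MCI data. Layer sizes are assumptions.
import torch
import torch.nn as nn

class SAE(nn.Module):
    def __init__(self, in_dim: int, dims=(256, 64), n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, dims[0]), nn.Sigmoid(),
            nn.Linear(dims[0], dims[1]), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dims[1], dims[0]), nn.Sigmoid(),
            nn.Linear(dims[0], in_dim),
        )
        # Classifier sees latent features concatenated with the originals.
        self.classifier = nn.Linear(dims[1] + in_dim, n_classes)

    def pretrain_step(self, x):
        # Unsupervised phase: reconstruct x; no labels needed, so
        # target-unrelated samples can initialize the encoder weights.
        return nn.functional.mse_loss(self.decoder(self.encoder(x)), x)

    def finetune_step(self, x, y):
        # Supervised phase: combine latent and original low-level
        # features, as described in the abstract.
        feats = torch.cat([self.encoder(x), x], dim=1)
        return nn.functional.cross_entropy(self.classifier(feats), y)
```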