539 research outputs found
End-To-End Alzheimer's Disease Diagnosis and Biomarker Identification
As shown in computer vision, the power of deep learning lies in automatically
learning relevant and powerful features for any prediction task, which is made
possible through end-to-end architectures. However, deep learning approaches
applied for classifying medical images do not adhere to this architecture as
they rely on several pre- and post-processing steps. This shortcoming can be
explained by the relatively small number of available labeled subjects, the
high dimensionality of neuroimaging data, and difficulties in interpreting the
results of deep learning methods. In this paper, we propose a simple 3D
Convolutional Neural Network and exploit its model parameters to tailor the
end-to-end architecture for the diagnosis of Alzheimer's disease (AD). Our
model can diagnose AD with an accuracy of 94.1% on the popular ADNI dataset
using only MRI data, which outperforms the previous state-of-the-art. Based on
the learned model, we identify the disease biomarkers, the results of which
were in accordance with the literature. We further transfer the learned model
to diagnose mild cognitive impairment (MCI), the prodromal stage of AD, which
yields better results than other methods.
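As an illustration of the kind of end-to-end 3D CNN the abstract describes, a minimal classifier can be sketched in PyTorch; the layer sizes, depths, and input dimensions below are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Toy end-to-end 3D CNN: raw volume in, class logits out."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),  # single-channel MRI volume
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> fixed-size feature vector
        )
        self.classifier = nn.Linear(16, n_classes)  # e.g. AD vs. normal control

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = Simple3DCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))  # batch of 4 toy volumes
```

Because there are no hand-crafted pre- or post-processing steps, gradients flow from the diagnosis loss back to the first convolution, which is what makes saliency-style biomarker identification on the learned model possible.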
Machine Learning Methods for Image Analysis in Medical Applications, from Alzheimer's Disease, Brain Tumors, to Assisted Living
Healthcare has progressed greatly owing to technological advances, and machine learning plays an important role in processing and analyzing large amounts of medical data. This thesis investigates four healthcare-related problems (Alzheimer's disease detection, glioma classification, human fall detection, and obstacle avoidance in prosthetic vision), where the underlying methodologies are associated with machine learning and computer vision. In Alzheimer's disease (AD) diagnosis, apart from patients' symptoms, Magnetic Resonance Images (MRIs) also play an important role. Inspired by the success of deep learning, a new multi-stream multi-scale Convolutional Neural Network (CNN) architecture is proposed for AD detection from MRIs, where AD features are characterized at both the tissue level and the scale level for improved feature learning. Good classification performance is obtained for AD/NC (normal control) classification, with a test accuracy of 94.74%. In glioma subtype classification, biopsies are usually needed to determine the different molecular-based glioma subtypes. We investigate non-invasive glioma subtype prediction from MRIs using deep learning. A 2D multi-stream CNN architecture is used to learn glioma features from multi-modal MRIs, where the training dataset is enlarged with synthetic brain MRIs generated by pairwise Generative Adversarial Networks (GANs). A test accuracy of 88.82% is achieved for IDH mutation (a molecular-based subtype) prediction. A new deep semi-supervised learning method is also proposed to tackle the problem of missing molecular-related labels in training datasets and to improve glioma classification performance. For the other two applications, we address video-based human fall detection using co-saliency-enhanced Recurrent Convolutional Networks (RCNs), as well as obstacle avoidance in prosthetic vision by characterizing obstacle-related video features with a Spiking Neural Network (SNN).
These investigations can benefit future research, where artificial intelligence/deep learning may open new avenues for real medical applications.
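The multi-stream, multi-scale idea above can be sketched as parallel feature extractors fed with the input at different scales, with their features fused before classification. This is a minimal PyTorch sketch under assumed layer sizes, not the thesis architecture:

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Two parallel streams see the input at different scales;
    their pooled features are concatenated before the classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.fine = stream()    # original-resolution stream
        self.coarse = stream()  # downsampled (second-scale) stream
        self.head = nn.Linear(16, n_classes)  # 8 features per stream

    def forward(self, x):
        x_coarse = nn.functional.avg_pool2d(x, 2)  # build the coarser scale
        feats = torch.cat([self.fine(x), self.coarse(x_coarse)], dim=1)
        return self.head(feats)

model = TwoStreamFusion()
out = model(torch.randn(4, 1, 32, 32))  # batch of 4 toy slices
```

The fusion-by-concatenation step is the key design choice: each stream learns features appropriate to its scale, and the classifier weighs them jointly.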
Vertical Federated Alzheimer's Detection on Multimodal Data
In the era of rapidly advancing medical technologies, the segmentation of
medical data has become inevitable, necessitating the development of
privacy-preserving machine learning algorithms that can train on distributed data.
Consolidating sensitive medical data is not always an option particularly due
to the stringent privacy regulations imposed by the Health Insurance
Portability and Accountability Act (HIPAA). In this paper, we introduce a
HIPAA-compliant framework that can train on distributed data. We then propose a
multimodal vertical federated model for Alzheimer's Disease (AD) detection, a
serious neurodegenerative condition that can cause dementia, severely impairing
brain function and hindering simple tasks, especially without preventative
care. This vertical federated model offers a distributed architecture that
enables collaborative learning across diverse sources of medical data while
respecting privacy constraints imposed by HIPAA. It is also able to leverage
multiple modalities of data, enhancing the robustness and accuracy of AD
detection. Our proposed model not only contributes to the advancement of
federated learning techniques but also holds promise for overcoming the hurdles
posed by data segmentation in medical research. By using vertical federated
learning, this research strives to provide a framework that enables healthcare
institutions to harness the collective intelligence embedded in their
distributed datasets without compromising patient privacy.
Comment: 14 pages, 7 figures, 2 tables
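A vertical federated setup of the kind described, where institutions hold different modalities for the same patients and share only learned embeddings, can be sketched as follows. The party names, feature dimensions, and layer sizes are hypothetical, and a real deployment would add secure aggregation and HIPAA-compliant transport on top:

```python
import torch
import torch.nn as nn

class PartyEncoder(nn.Module):
    """Local model held by one institution; only its output embedding
    (never the raw patient features) leaves the institution."""
    def __init__(self, in_dim, emb_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class ServerHead(nn.Module):
    """Server-side model that fuses the parties' embeddings into a prediction."""
    def __init__(self, emb_dim=4, n_parties=2, n_classes=2):
        super().__init__()
        self.head = nn.Linear(emb_dim * n_parties, n_classes)

    def forward(self, embeddings):
        return self.head(torch.cat(embeddings, dim=1))

# Two hypothetical parties holding different modalities for the SAME patients.
hospital_a = PartyEncoder(in_dim=10)  # e.g. imaging-derived features
hospital_b = PartyEncoder(in_dim=6)   # e.g. clinical/tabular features
server = ServerHead()

xa, xb = torch.randn(8, 10), torch.randn(8, 6)  # 8 aligned patients, split by modality
logits = server([hospital_a(xa), hospital_b(xb)])
```

Training proceeds by backpropagating the server's loss through the concatenation into each party's encoder, so every institution updates its own weights locally while gradients, not data, cross the boundary.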
Domain Mapping and Deep Learning from Multiple MRI Clinical Datasets for Prediction of Molecular Subtypes in Low Grade Gliomas
Brain tumors such as low-grade gliomas (LGG) are classified molecularly, which requires the surgical collection of tissue samples. Pre-surgical, non-operative identification of LGG molecular type could improve patient counseling and treatment decisions. However, radiographic approaches to LGG molecular classification are currently lacking, as clinicians are unable to reliably predict LGG molecular type from magnetic resonance imaging (MRI) studies. Machine learning approaches may improve the prediction of LGG molecular classification from MRI; however, developing these techniques requires large annotated datasets. Merging clinical data from different hospitals to increase case numbers is needed, but the use of different scanners and settings can affect the results, and simply combining them into a large dataset often has a significant negative impact on performance. This calls for efficient domain adaptation methods. Despite some previous studies on domain adaptation, mapping MR images from different datasets to a common domain without affecting the subtle molecular-biomarker information has not yet been reported. In this paper, we propose an effective domain adaptation method based on the Cycle Generative Adversarial Network (CycleGAN). The dataset is further enlarged by augmenting more MRIs using another GAN approach. Furthermore, because brain tumor segmentation requires time and anatomical expertise to place an exact boundary around the tumor, we use a tight bounding box as a strategy instead. Finally, an efficient deep feature learning method, a multi-stream convolutional autoencoder (CAE) with feature fusion, is proposed for the prediction of molecular subtypes (1p/19q codeletion and IDH mutation). The experiments were conducted on a total of 161 patients with FLAIR and contrast-enhanced T1-weighted (T1ce) MRIs from two different institutions in the USA and France.
The proposed scheme achieves test accuracies of 74.81% on 1p/19q codeletion and 81.19% on IDH mutation, a marked improvement over the results obtained without domain mapping. The approach is also shown to perform comparably to several state-of-the-art methods.
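The core of CycleGAN-based domain mapping is the cycle-consistency constraint: an image mapped from scanner domain A to domain B and back should reconstruct the original. A toy sketch of that loss term, using linear stand-ins for the real convolutional generators:

```python
import torch
import torch.nn as nn

# Toy "generators" standing in for the CycleGAN mapping networks between the
# two scanners' image domains (A <-> B); real CycleGAN generators are deep
# convolutional networks, these are illustrative placeholders.
g_ab = nn.Linear(16, 16)  # maps domain-A images toward domain-B appearance
g_ba = nn.Linear(16, 16)  # maps back from B to A

x_a = torch.randn(4, 16)            # flattened toy "images" from domain A
cycle = g_ba(g_ab(x_a))             # A -> B -> A round trip
cycle_loss = torch.mean(torch.abs(cycle - x_a))  # L1 cycle-consistency term
```

Minimizing this term alongside the usual adversarial losses is what lets the mapping change scanner-specific appearance while preserving content, which is exactly the property needed to keep subtle molecular-biomarker information intact.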
Uncertainty Estimation using the Local Lipschitz for Deep Learning Image Reconstruction Models
The use of supervised deep neural network approaches has been investigated to
solve inverse problems in all domains, especially radiology where imaging
technologies are at the heart of diagnostics. However, in deployment, these
models are exposed to input distributions that are widely shifted from training
data, due in part to data biases or drifts. It becomes crucial to know whether
a given input lies outside the training data distribution before relying on the
reconstruction for diagnosis. The goal of this work is three-fold: (i)
demonstrate the use of the local Lipschitz value as an uncertainty estimation
threshold for determining suitable performance, (ii) provide a method for
identifying out-of-distribution (OOD) images on which the model may not have
generalized, and (iii) use the local Lipschitz values to guide proper data
augmentation by identifying false positives and decreasing epistemic
uncertainty. We provide results for both MRI reconstruction and CT sparse-view
to full-view reconstruction using the AUTOMAP and UNET architectures, since it
is pertinent in the medical domain that reconstructed images remain
diagnostically accurate.
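One simple way to estimate the local Lipschitz value around an input, in the spirit of the abstract, is a finite-difference estimate over small random perturbations. This is an illustrative Monte-Carlo estimator on a toy network, not the paper's exact procedure:

```python
import torch
import torch.nn as nn

def local_lipschitz(model, x, eps=1e-3, n_samples=16):
    """Estimate the local Lipschitz value around input x: the largest
    output-change / input-change ratio over small random perturbations."""
    with torch.no_grad():
        y = model(x)
        best = 0.0
        for _ in range(n_samples):
            delta = eps * torch.randn_like(x)
            ratio = (torch.norm(model(x + delta) - y) / torch.norm(delta)).item()
            best = max(best, ratio)
    return best

# Toy stand-in for a reconstruction model such as AUTOMAP or UNET.
net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.randn(1, 8)
L = local_lipschitz(net, x)
```

An input whose local Lipschitz value is much larger than those seen on training data is a candidate OOD case, flagging a reconstruction that should not be trusted for diagnosis without review.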