3,328 research outputs found

    Enhancing Alzheimer's Detection Using a Multi-Modal Approach Hybrid Features Extraction Technique from MRI Images

    The neurodegenerative illness Alzheimer's disease, which affects millions of people worldwide, poses significant obstacles to early detection and efficient treatment. The non-invasive technique of magnetic resonance imaging (MRI) has shown promise in identifying structural abnormalities in the brain linked to Alzheimer's disease. To address the complexity of Alzheimer's detection and enhance accuracy, this study proposes a novel hybrid feature extraction method that combines Convolutional Neural Networks (CNN), Local Binary Patterns (LBP), and Scale-Invariant Feature Transform (SIFT). After feature extraction, Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) algorithms were applied for optimization. In this research, a dataset comprising MRI brain images from healthy individuals and Alzheimer's patients was curated. Preprocessing techniques were applied to enhance image quality and remove noise. The hybrid feature extraction method was then employed to extract distinctive and complementary features from the MRI images.
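The hybrid idea above (concatenating handcrafted and learned descriptors into one feature vector) can be sketched as below. This is a minimal illustration, not the paper's implementation: the LBP is a basic 8-neighbour variant written directly in NumPy, and the CNN and SIFT vectors are random placeholders standing in for real descriptor outputs.

```python
import numpy as np

def lbp_features(img):
    """Basic 8-neighbour Local Binary Pattern histogram.

    A simplified stand-in for the LBP descriptor the abstract mentions;
    border pixels are skipped for clarity.
    """
    h, w = img.shape
    # offsets of the 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised 256-bin histogram

def hybrid_features(img, cnn_vec, sift_vec):
    """Concatenate LBP, CNN, and SIFT feature vectors into one descriptor."""
    return np.concatenate([lbp_features(img), cnn_vec, sift_vec])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# placeholder 128-d "CNN" and "SIFT" vectors for illustration only
feat = hybrid_features(img, rng.standard_normal(128), rng.standard_normal(128))
print(feat.shape)  # (512,)
```

The combined descriptor would then be pruned by an optimizer such as PSO or ABC, which search for the feature subset that maximises a classifier's validation score.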

    Multimodal and Multiscale Deep Neural Networks for the Early Diagnosis of Alzheimer's Disease using structural MR and FDG-PET images.

    Alzheimer's Disease (AD) is a progressive neurodegenerative disease for which biomarkers grounded in pathophysiology may provide objective measures for diagnosis and staging. Neuroimaging scans acquired from MRI and metabolism images obtained by FDG-PET provide in-vivo measurements of structure and function (glucose metabolism) in a living brain. It is hypothesized that combining multiple image modalities providing complementary information could help improve early diagnosis of AD. In this paper, we propose a novel deep-learning-based framework to discriminate individuals with AD utilizing a multimodal and multiscale deep neural network. Our method delivers 82.4% accuracy in identifying individuals with mild cognitive impairment (MCI) who will convert to AD three years prior to conversion (86.4% combined accuracy for conversion within 1-3 years), a 94.23% sensitivity in classifying individuals with a clinical diagnosis of probable AD, and an 86.3% specificity in classifying non-demented controls, improving upon results in the published literature.
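The fusion pattern this abstract describes, separate sub-networks per modality whose features are concatenated before a joint classifier, can be sketched with a toy NumPy forward pass. All dimensions and weights here are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def branch_forward(x, W, b):
    """One modality-specific sub-network (illustrative, single layer)."""
    return relu(x @ W + b)

rng = np.random.default_rng(1)
mri = rng.standard_normal((4, 100))  # 4 subjects, 100 MRI-derived features
pet = rng.standard_normal((4, 80))   # 4 subjects, 80 FDG-PET features

# hypothetical weights for each branch and the fusion classifier
W_mri, b_mri = rng.standard_normal((100, 32)), np.zeros(32)
W_pet, b_pet = rng.standard_normal((80, 32)), np.zeros(32)
W_out, b_out = rng.standard_normal((64, 2)), np.zeros(2)

# each modality is encoded separately, then fused by concatenation
h = np.concatenate([branch_forward(mri, W_mri, b_mri),
                    branch_forward(pet, W_pet, b_pet)], axis=1)
logits = h @ W_out + b_out                     # joint AD-vs-control scores
logits -= logits.max(axis=1, keepdims=True)    # numerically stable softmax
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.shape)  # (4, 2)
```

In the real model each branch would be a trained convolutional network over image volumes; the sketch only shows where the complementary modalities meet.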

    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate the workings of the human brain in processing data and creating patterns for decision making. This dissertation is focused on developing deep learning models for medical imaging analysis across different modalities and tasks, including detection, segmentation, and classification. Imaging modalities including digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) are studied in the dissertation for various medical applications. The first phase of the research is to develop a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. This model takes one type of medical image as input and synthesizes other modalities as additional feature sources; both the original image and the synthetic image are used for feature generation. This proposed architecture is validated in the application of breast cancer diagnosis and proved to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical imaging synthesis performance with an advanced deep learning architecture. A new architecture named deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net has the advantages of preserving pixel-level information and cross-modality feature transferring. The applicability of RIED-Net is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often has multiple inter-related tasks, namely detection, segmentation, and classification, the third phase of the research is to develop a multi-task deep learning model. Specifically, a feature transfer enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution feature-based classification task.
The application of FT-MTL-Net to breast cancer detection, segmentation, and classification using DM images is studied. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase is to develop a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting mild cognitive impairment (MCI) to AD conversion. It is validated in the application of predicting MCI patients' conversion to AD with 3D MRI images. Doctoral Dissertation, Industrial Engineering, 201
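The multi-task idea in the third phase, a shared feature extractor feeding separate segmentation and classification heads so that both tasks shape the shared weights, can be sketched as below. The layer shapes and weight scales are invented for illustration and do not reflect FT-MTL-Net itself.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(2)
x = rng.standard_normal((8, 64))  # 8 images, 64 input features (toy sizes)

# hypothetical shared encoder plus two task-specific heads
W_shared = rng.standard_normal((64, 16)) * 0.1
W_seg = rng.standard_normal((16, 64)) * 0.1  # per-pixel mask scores
W_cls = rng.standard_normal((16, 2)) * 0.1   # lesion / no-lesion scores

shared = relu(x @ W_shared)   # features reused by both tasks
seg_out = shared @ W_seg      # segmentation head
cls_out = shared @ W_cls      # classification head

# training would sum the per-task losses, so gradients from both heads
# flow back into W_shared and the tasks regularise each other
print(seg_out.shape, cls_out.shape)  # (8, 64) (8, 2)
```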

    Alzheimer Detection System Using Hybrid Deep Convolutional Neural Network

    Alzheimer’s disease, the sixth leading cause of death in the United States, is projected to rise to the third leading cause of death among the elderly, behind cancer and heart disease. Timely detection and prevention are therefore crucial. AD detection relies on multiple medical examinations, which produce extensive multivariate, heterogeneous data; this makes manual comparison, evaluation, and analysis hardly feasible. The present study proposes a new approach to detecting AD at the earliest stage using hybrid deep learning algorithms. Several feature extraction and selection techniques derive candidate features. The method involves InceptionV3 and DenseNet for both pre-processing and classification tasks, while MobileNet enables data pre-processing and object detection. Experimental results with 100 epochs and 15 hidden layers show that InceptionV3 achieves an accuracy of 98%, outperforming the other models evaluated. The comparative analysis with other CNN models endorses the proposed method, which achieved the highest performance across the board.
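One common way to combine several backbone networks like those named above is to average their per-class probabilities and take the argmax. The sketch below illustrates that combination rule only; the random arrays stand in for real model logits, and the class count is a hypothetical choice, not this paper's pipeline.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
n_images, n_classes = 5, 4  # e.g. 4 dementia stages (hypothetical)

# stand-ins for the logits of three backbone networks on the same images
logits = {name: rng.standard_normal((n_images, n_classes))
          for name in ("inception_v3", "densenet", "mobilenet")}

# hybrid decision: average the per-model class probabilities
probs = np.mean([softmax(l) for l in logits.values()], axis=0)
pred = probs.argmax(axis=1)
print(pred.shape)  # (5,)
```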

    MildInt: Deep Learning-Based Multimodal Longitudinal Data Integration Framework

    As large amounts of heterogeneous biomedical data become available, numerous methods for integrating such datasets have been developed to extract complementary knowledge from multiple source domains. Recently, deep learning approaches have shown promising results in a variety of research areas. However, applying a deep learning approach requires expertise in constructing a deep architecture that can take multimodal longitudinal data. Thus, in this paper, a deep learning-based Python package for data integration is developed. The Python package, the deep learning-based multimodal longitudinal data integration framework (MildInt), provides a preconstructed deep learning architecture for a classification task. MildInt contains two learning phases: learning a feature representation from each modality of data, and training a classifier for the final decision. Adopting a deep architecture in the first phase leads to more task-relevant feature representations than a linear model. In the second phase, a linear regression classifier is used for detecting and investigating biomarkers from the multimodal data. Thus, by combining the linear model and the deep learning model, higher accuracy and better interpretability can be achieved. We validated the performance of our package using simulated data and real data. For the real data, as a pilot study, we used clinical and multimodal neuroimaging datasets in Alzheimer's disease to predict disease progression. MildInt is capable of integrating multiple forms of numerical data, including time series and non-time series data, to extract complementary features from the multimodal dataset.
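The two-phase structure described above can be sketched end to end in a few lines. This is a toy illustration, not MildInt's code: phase 1 here is plain mean-pooling over visits standing in for the learned per-modality representations, and phase 2 trains a simple logistic-style linear classifier by gradient descent on random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(4)

def modality_embedding(series):
    """Phase 1 stand-in: summarise a (subjects, visits, dims) series.

    MildInt learns this representation with a deep model; mean-pooling
    over time is a deliberately simple placeholder.
    """
    return series.mean(axis=1)

# hypothetical longitudinal data: cognitive scores and an imaging modality
cog = rng.standard_normal((20, 5, 3))  # 20 subjects, 5 visits, 3 scores
img = rng.standard_normal((20, 5, 6))  # 20 subjects, 5 visits, 6 features
y = rng.integers(0, 2, size=20)        # toy progression labels

# fuse the per-modality representations by concatenation
X = np.concatenate([modality_embedding(cog), modality_embedding(img)], axis=1)

# Phase 2: a simple linear classifier on the fused features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(200):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()
print(w.shape)  # (9,)
```

Keeping the final classifier linear, as the abstract argues, means each fused feature's weight in `w` can be read directly as a biomarker's contribution to the decision.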