
    Early Diagnosis of Mild Cognitive Impairment with 2-Dimensional Convolutional Neural Network Classification of Magnetic Resonance Images

    We motivate and implement an Artificial Intelligence (AI) Computer-Aided Diagnosis (CAD) framework to assist clinicians in the early diagnosis of Mild Cognitive Impairment (MCI) and Alzheimer’s Disease (AD). Our framework is based on a Convolutional Neural Network (CNN) trained and tested on functional Magnetic Resonance Imaging (fMRI) datasets. We contribute to the literature on AI-CAD frameworks for AD by using a 2D CNN for early diagnosis of MCI. Contrary to current efforts, we do not attempt to provide an AI-CAD framework that will replace clinicians, but one that can work in synergy with them. Our framework is cheaper and faster, as it relies on small datasets and does not require high-performance computing infrastructures. Our work contributes to the literature on the digital transformation of healthcare, health Information Systems, and NeuroIS, and opens novel avenues for further research on the topic.

    Operationalizing fairness in medical AI adoption: Detection of early Alzheimer’s Disease with 2D CNN

    Objectives: To operationalize fairness in the adoption of medical artificial intelligence (AI) algorithms in terms of access to computational resources. The proposed approach is based on a two-dimensional (2D) Convolutional Neural Network (CNN), which provides faster, cheaper, and accurate-enough detection of early Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI), without the need for large training datasets or costly high-performance computing (HPC) infrastructures. Methods: The standardized ADNI datasets were used, with additional skull stripping using the BET2 approach. The 2D CNN architecture is based on LeNet-5; the LReLU activation function and a Sigmoid output function were used, and batch normalization was added after every convolutional layer to stabilize the learning process. The model was optimized by manually tuning all its hyperparameters. Results: The model was evaluated in terms of accuracy, recall, precision, and F1-score. It predicted MCI with an accuracy of .735, surpassing the random-guessing baseline of .521, and predicted AD with an accuracy of .837, surpassing the random-guessing baseline of .536. Discussion: The proposed approach can assist clinicians in the early diagnosis of AD and MCI with high-enough accuracy, based on relatively small datasets and without the need for HPC infrastructures. Such an approach can alleviate disparities and operationalize fairness in the adoption of medical algorithms. Conclusion: Medical AI algorithms should not be evaluated solely on accuracy but also with respect to how they might impact disparities and operationalize fairness in their adoption.
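The metrics reported above (accuracy, precision, recall, F1) all derive from the confusion-matrix counts, and a majority-class predictor is one common way to form the "random guessing" baseline. A minimal sketch of these computations (the labels below are illustrative, not the paper's actual predictions):

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

def majority_baseline(y_true):
    """Accuracy of always predicting the most frequent class:
    one common reference point for a 'random guessing' baseline."""
    pos = sum(y_true) / len(y_true)
    return max(pos, 1 - pos)
```

A model is only informative to the extent that its accuracy exceeds `majority_baseline` on the same test labels, which is why the abstract reports both numbers side by side.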

    A novel deep learning based hippocampus subfield segmentation method

    The automatic assessment of hippocampus volume is an important tool in the study of several neurodegenerative diseases, such as Alzheimer's disease. Specifically, the measurement of hippocampus subfield properties is of great interest, since it can reveal earlier pathological changes in the brain. However, segmentation of these subfields is very difficult due to their complex structure and the need for manually labeled high-resolution magnetic resonance images. In this work, we present a novel pipeline for automatic hippocampus subfield segmentation based on a deeply supervised convolutional neural network. Results of the proposed method are shown for two available hippocampus subfield delineation protocols. The method has been compared to other state-of-the-art methods, showing improved results in terms of accuracy and execution time.

    This research was supported by the Spanish DPI2017-87743-R grant from the Ministerio de Economia, Industria y Competitividad of Spain. This study has also been carried out with financial support from the French State, managed by the French National Research Agency (ANR) in the frame of the Investments for the Future Program IdEx Bordeaux (ANR-10-IDEX-03-02, HL-MRI Project) and the Cluster of Excellence CPU and TRAIL (HR-DTI ANR-10-LABX-57). The authors gratefully acknowledge the support of NVIDIA Corporation with their donation of the TITAN X GPU used in this research.

    Manjón Herrera, JV.; Romero, JE.; Coupe, P. (2022). A novel deep learning based hippocampus subfield segmentation method. Scientific Reports. 12(1):1-9. https://doi.org/10.1038/s41598-022-05287-8
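"Deeply supervised" segmentation networks attach auxiliary losses to intermediate decoder outputs so every scale receives a direct training signal, rather than only the final layer. A hedged sketch of that idea using a soft Dice loss (the paper's actual loss terms and weights may differ; shapes here are illustrative):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary mask.
    0 for a perfect match, approaching 1 for no overlap."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def deeply_supervised_loss(preds, target, weights):
    """Weighted sum of losses over outputs taken at several decoder depths.

    preds: list of probability maps, coarsest to finest (assumed already
    resized to the target's resolution here, for simplicity).
    """
    return sum(w * dice_loss(p, target) for p, w in zip(preds, weights))
```

During training, gradients from every weighted term flow back into the shared encoder, which is what gives the intermediate layers their direct supervision.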

    Trustworthy Medical Segmentation with Uncertainty Estimation

    Deep Learning (DL) holds great promise in reshaping healthcare systems, given its precision, efficiency, and objectivity. However, the brittleness of DL models to noisy and out-of-distribution inputs hampers their deployment in the clinic. Most systems produce point estimates without further information about model uncertainty or confidence. This paper introduces a new Bayesian deep learning framework for uncertainty quantification in segmentation neural networks, specifically encoder-decoder architectures. The proposed framework uses a first-order Taylor series approximation to propagate and learn the first two moments (mean and covariance) of the distribution of the model parameters given the training data, by maximizing the evidence lower bound. The output consists of two maps: the segmented image and the uncertainty map of the segmentation. The uncertainty in the segmentation decisions is captured by the covariance matrix of the predictive distribution. We evaluate the proposed framework on medical image segmentation data from Magnetic Resonance Imaging and Computed Tomography scans. Our experiments on multiple benchmark datasets demonstrate that the proposed framework is more robust to noise and adversarial attacks than state-of-the-art segmentation models. Moreover, the uncertainty map of the proposed framework associates low confidence (or, equivalently, high uncertainty) with patches in the test input images that are corrupted with noise, artifacts, or adversarial attacks. Thus, the model can self-assess its segmentation decisions when it makes an erroneous prediction or misses part of the segmentation structures, e.g., a tumor, by presenting higher values in the uncertainty map.
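The first-order Taylor propagation of moments can be illustrated on a single layer: a linear map transforms the mean and covariance exactly, and a nonlinearity is linearized at the mean so its Jacobian transforms the covariance. A minimal numerical sketch of that mechanism (not the paper's full variational training loop, which also learns the parameter distribution via the evidence lower bound):

```python
import numpy as np

def propagate_linear(mu, cov, W, b):
    """Moments of y = W x + b for x ~ N(mu, cov): exact for a linear map."""
    return W @ mu + b, W @ cov @ W.T

def propagate_relu(mu, cov):
    """First-order Taylor approximation through an elementwise ReLU.

    The Jacobian of ReLU evaluated at the mean is diag(1[mu > 0]), so the
    covariance is masked down to the units that are active at the mean.
    """
    j = (mu > 0).astype(float)            # diagonal of the Jacobian
    return np.maximum(mu, 0.0), cov * np.outer(j, j)
```

Chaining these through an encoder-decoder yields a predictive mean (the segmentation) and a predictive covariance whose diagonal gives the per-pixel uncertainty map, with no Monte Carlo sampling at inference time.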

    SUPER-Net: Trustworthy Medical Image Segmentation with Uncertainty Propagation in Encoder-Decoder Networks

    Deep Learning (DL) holds great promise in reshaping the healthcare industry owing to its precision, efficiency, and objectivity. However, the brittleness of DL models to noisy and out-of-distribution inputs hampers their deployment in the clinic. Most models produce point estimates without further information about model uncertainty or confidence. This paper introduces a new Bayesian DL framework for uncertainty quantification in segmentation neural networks: SUPER-Net (trustworthy medical image Segmentation with Uncertainty Propagation in Encoder-decodeR Networks). SUPER-Net analytically propagates, using Taylor series approximations, the first two moments (mean and covariance) of the posterior distribution of the model parameters across the nonlinear layers. In particular, SUPER-Net simultaneously learns the mean and covariance without expensive post-hoc Monte Carlo sampling or model ensembling. The output consists of two simultaneous maps: the segmented image and its pixelwise uncertainty map, which corresponds to the covariance matrix of the predictive distribution. We conduct an extensive evaluation of SUPER-Net on medical image segmentation of Magnetic Resonance Imaging and Computed Tomography scans under various noisy and adversarial conditions. Our experiments on multiple benchmark datasets demonstrate that SUPER-Net is more robust to noise and adversarial attacks than state-of-the-art segmentation models. Moreover, the uncertainty map of SUPER-Net associates low confidence (or, equivalently, high uncertainty) with patches in the test input images that are corrupted with noise, artifacts, or adversarial attacks. Perhaps more importantly, the model exhibits the ability to self-assess its segmentation decisions, notably when making erroneous predictions due to noise or adversarial examples.
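The self-assessment behaviour described above can be operationalized downstream by thresholding the pixelwise uncertainty map: pixels whose predictive variance falls in the upper tail are flagged for clinician review rather than trusted. A hedged sketch (the quantile cutoff and scalar confidence score below are illustrative conventions, not from the paper):

```python
import numpy as np

def flag_unreliable(uncertainty_map, quantile=0.95):
    """Boolean mask of pixels whose uncertainty lies in the top
    (1 - quantile) tail: candidates for clinician review."""
    cutoff = np.quantile(uncertainty_map, quantile)
    return uncertainty_map > cutoff

def image_confidence(uncertainty_map):
    """A scalar self-assessment score per image: lower mean
    uncertainty maps to higher confidence in (0, 1]."""
    return 1.0 / (1.0 + float(np.mean(uncertainty_map)))
```

On an input corrupted by noise or an adversarial perturbation, the corrupted patches inflate the variance map, so they dominate the flagged mask and drag the image-level confidence score down.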

    Transfer learning for Alzheimer’s disease through neuroimaging biomarkers: A systematic review

    Alzheimer’s disease (AD) is a remarkable challenge for healthcare in the 21st century. Since 2017, deep learning models with transfer learning approaches have been gaining recognition in AD detection and progression prediction using neuroimaging biomarkers. This paper presents a systematic review of the current state of early AD detection using deep learning models with transfer learning and neuroimaging biomarkers. Five databases were used, and the results before screening comprised 215 studies published between 2010 and 2020. After screening, 13 studies met the inclusion criteria. We noted that the maximum accuracy achieved to date for AD classification is 98.20%, using the combination of 3D convolutional networks and local transfer learning, and that for the prognostic prediction of AD it is 87.78%, using pre-trained 3D convolutional network-based architectures. The results show that transfer learning helps researchers develop more accurate systems for the early diagnosis of AD. However, some points need to be considered in future research, such as improving the accuracy of the prognostic prediction of AD, exploring additional biomarkers such as tau-PET and amyloid-PET to understand highly discriminative feature representations that separate similar brain patterns, and managing the size of the datasets due to their limited availability.

    Funding: Ministerio de Industria, Energía y Turismo (AAL-20125036).
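The transfer-learning recipe surveyed above typically freezes a network pre-trained on a large source task and retrains only a new classification head on the small target dataset. A minimal sketch of that split, with a frozen random feature extractor standing in for the pre-trained backbone (all names, shapes, and hyperparameters here are illustrative assumptions, not from any reviewed study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: in practice, the convolutional
# layers of a network trained on a large source dataset.
W_frozen = rng.normal(size=(16, 8))

def features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return np.tanh(x @ W_frozen)

def train_head(X, y, epochs=200, lr=0.5):
    """Train only the new classification head (logistic regression)
    on features from the frozen backbone."""
    F = features(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid prediction
        grad = p - y                            # d(log-loss)/d(logit)
        w -= lr * F.T @ grad / len(y)           # update head weights only
        b -= lr * grad.mean()
    return w, b
```

Because only the small head is optimized, this kind of setup needs far less target data and compute than training a full 3D CNN from scratch, which is the practical appeal the review highlights.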