8 research outputs found

    Machine Learning Classification of Alzheimer's Disease Stages Using Cerebrospinal Fluid Biomarkers Alone

    Early diagnosis of Alzheimer's disease is a challenge because existing methodologies do not identify patients in the preclinical stage, which can last up to a decade before the onset of clinical symptoms. Several studies demonstrate the potential of the cerebrospinal fluid biomarkers amyloid beta 1-42, T-tau, and P-tau for early diagnosis of Alzheimer's disease stages. In this work, we used machine learning models to classify different stages of Alzheimer's disease based on cerebrospinal fluid biomarker levels alone. Electronic health records of patients from the National Alzheimer's Coordinating Center database were analyzed, and the patients were subdivided based on mini-mental state examination scores and clinical dementia ratings. Statistical and correlation analyses were performed to identify significant differences between the Alzheimer's stages. Machine learning classifiers, including K-Nearest Neighbors, Ensemble Boosted Tree, Ensemble Bagged Tree, Support Vector Machine, Logistic Regression, and Naive Bayes, were then employed to classify the Alzheimer's disease stages. The results demonstrate that Ensemble Boosted Tree (84.4%) and Logistic Regression (73.4%) provide the highest accuracy for binary classification, while Ensemble Bagged Tree (75.4%) provides better accuracy for multiclass classification. The findings are expected to help clinicians make informed decisions regarding early diagnosis of Alzheimer's disease from cerebrospinal fluid biomarkers alone, monitor disease progression, and implement appropriate intervention measures.
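
    As a rough illustration of the pipeline described above, the sketch below compares the same family of classifiers on three CSF biomarker features using scikit-learn. The file name, column names, and stage labels are hypothetical placeholders; the NACC data is available only under a data-use agreement, so this is not the authors' code or configuration.

```python
# Minimal sketch of the classifier comparison described above, using scikit-learn.
# Column names (abeta42, t_tau, p_tau, stage) and the CSV file are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("csf_biomarkers.csv")           # hypothetical extract: one row per patient
X = df[["abeta42", "t_tau", "p_tau"]]            # CSF biomarker levels only
y = df["stage"]                                  # e.g. control / MCI / AD, derived from MMSE + CDR

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Boosted trees": GradientBoostingClassifier(),
    "Bagged trees": BaggingClassifier(DecisionTreeClassifier()),
    "SVM": SVC(kernel="rbf"),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f}")
```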

    Alzheimer's Diagnosis System Based on Magnetic Resonance Imaging Using the VGG16 Algorithm

    Early diagnosis of Alzheimer's disease is essential to provide timely treatment to patients. To this end, a system for diagnosing Alzheimer's disease from magnetic resonance imaging has been developed using the convolutional neural network architecture VGG16. Magnetic resonance images of patients with and without Alzheimer's disease were collected and processed. These images were used to train the algorithm, which learned to identify patterns and associate them with the disease. Tests were then performed with a set of unseen images to evaluate the diagnostic ability of the system. In the analysis of magnetic resonance images, the VGG16 model correctly recognized signs of the disease with an accuracy of over 82%. These results validate the effectiveness of the artificial intelligence-based approach for diagnosing Alzheimer's disease.
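
    The abstract describes transfer learning with VGG16 on MRI images. The sketch below shows one common way to set this up in Keras, assuming one image folder per class; the directory layout, image size, and hyperparameters are assumptions rather than the authors' settings.

```python
# Minimal sketch of a VGG16-based MRI classifier along the lines described above.
# Directory layout ("mri/train", "mri/val" with one subfolder per class), image
# size, and hyperparameters are assumptions, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, vgg16

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mri/val", image_size=IMG_SIZE, batch_size=32)

base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False                              # reuse ImageNet features, train only the head

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = vgg16.preprocess_input(inputs)                  # VGG16-specific preprocessing
x = base(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary: Alzheimer vs. control
model = models.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```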

    A Novel Hybrid Ordinal Learning Model with Health Care Application

    Ordinal learning (OL) is a class of machine learning models with broad utility in health care applications, such as diagnosing different grades of a disease (e.g., mild, moderate, severe) and predicting the speed of disease progression (e.g., very fast, fast, moderate, slow). This paper tackles the situation in which precisely labeled samples are limited in the training set due to cost or availability constraints, whereas samples with imprecise labels may be abundant. We focus on imprecise labels that are intervals, i.e., one can know that a sample belongs to an interval of labels but cannot know which unique label it has. This situation is common in health care datasets due to limitations of the diagnostic instrument, sparse clinical visits, and/or patient dropout. Limited research has been done on OL models with imprecise/interval labels. We propose a new Hybrid Ordinal Learner (HOL) that integrates samples with both precise and interval labels to train a robust OL model. We also develop a tractable and efficient optimization algorithm to solve the HOL formulation. We compare HOL with several recently developed OL methods on four benchmark datasets; the results demonstrate the superior performance of HOL. Finally, we apply HOL to a real-world dataset for predicting the speed of progression to Alzheimer's Disease (AD) for individuals with Mild Cognitive Impairment (MCI), based on a combination of multi-modality neuroimaging and demographic/clinical data. HOL achieves high accuracy in this prediction and outperforms existing methods. The capability to accurately predict the speed of progression to AD for each individual with MCI has the potential to help facilitate more individually optimized interventional strategies.
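
    The HOL formulation itself is not reproduced here, but the sketch below illustrates the underlying idea of learning from interval labels in a simplified way: the ordinal problem is decomposed into binary "label > k" threshold tasks, and an interval-labeled sample is used in a threshold task only when its interval already decides that comparison. All data is synthetic and the decomposition is a generic baseline, not the paper's method.

```python
# Simplified illustration (not the paper's HOL formulation) of using interval labels:
# decompose the ordinal problem into binary "label > k" tasks and include an
# interval-labeled sample in task k only when its interval [lo, hi] resolves the
# comparison. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K = 4                                             # ordinal levels 0..3 (e.g. progression speed)
X = rng.normal(size=(200, 5))
true = np.clip((X[:, 0] * 1.5 + 2).round(), 0, 3)
lo, hi = true.copy(), true.copy()
imprecise = rng.random(200) < 0.5                 # half the samples carry only an interval label
lo[imprecise] = np.maximum(true[imprecise] - 1, 0)
hi[imprecise] = np.minimum(true[imprecise] + 1, 3)

# One binary classifier per threshold k: does the label exceed k?
clfs = []
for k in range(K - 1):
    usable = (hi <= k) | (lo > k)                 # interval fully decides "label > k"
    y_k = (lo[usable] > k).astype(int)
    clfs.append(LogisticRegression(max_iter=1000).fit(X[usable], y_k))

# Predicted level = number of thresholds the sample is predicted to exceed.
probs = np.column_stack([c.predict_proba(X)[:, 1] for c in clfs])
pred = (probs > 0.5).sum(axis=1)
print("exact-label accuracy:", (pred == true).mean())
```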

    Self-calibrated brain network estimation and joint non-convex multi-task learning for identification of early Alzheimer's disease

    Detection of the early stages of Alzheimer's disease (AD), i.e., mild cognitive impairment (MCI), is important to maximize the chances of delaying or preventing progression to AD. Brain connectivity networks inferred from medical imaging data have been commonly used to distinguish MCI patients from normal controls (NC). However, existing methods still suffer from limited performance, and classification remains based mainly on single-modality data. This paper proposes a new model to automatically diagnose MCI (early MCI (EMCI) and late MCI (LMCI)) and its earlier stage, significant memory concern (SMC), by combining low-rank self-calibrated functional brain networks and structural brain networks for joint multi-task learning. Specifically, we first develop a new functional brain network estimation method. We introduce data quality indicators for self-calibration, which can improve data quality while completing brain network estimation, and perform correlation analysis combined with a low-rank structure. Second, functional and structural connectivity patterns are integrated into our multi-task learning model to select discriminative and informative features for fine-grained MCI analysis. Different modalities are best suited to distinct classification tasks, and the similarities and differences among multiple tasks are best exploited through joint learning to determine the most discriminative features. The learning process is completed with a non-convex regularizer, which effectively reduces the penalty bias of the trace norm and better approximates the original rank minimization problem. Finally, the most relevant disease features are classified using a support vector machine (SVM) for MCI identification. Experimental results show that our method achieves promising performance with high classification accuracy and can effectively discriminate between different sub-stages of MCI.
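
    As a much-simplified stand-in for the final step described above (the self-calibrated network estimation and non-convex multi-task learning are not reproduced), the sketch below concatenates functional and structural connectivity features, selects a discriminative subset, and classifies with a linear SVM on synthetic data.

```python
# Much-simplified sketch of the final classification step: combine functional and
# structural connectivity features, select a discriminative subset, and classify
# with an SVM. Data here is synthetic; this is not the paper's pipeline.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects = 120
func_feats = rng.normal(size=(n_subjects, 300))    # e.g. vectorized functional connectivity
struct_feats = rng.normal(size=(n_subjects, 300))  # e.g. vectorized structural connectivity
y = rng.integers(0, 2, size=n_subjects)            # 0 = EMCI, 1 = LMCI (toy labels)

X = np.hstack([func_feats, struct_feats])          # simple multimodal concatenation

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),                  # keep the 50 most discriminative features
    SVC(kernel="linear"),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```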

    RGB-D Salient Object Detection: A Survey

    Salient object detection (SOD), which simulates the human visual perception system to locate the most attractive object(s) in a scene, has been widely applied to various computer vision tasks. With the advent of depth sensors, depth maps with rich spatial information that can help boost SOD performance can now be captured easily. Although various RGB-D based SOD models with promising performance have been proposed over the past several years, an in-depth understanding of these models and the challenges in this topic is still lacking. In this paper, we provide a comprehensive survey of RGB-D based SOD models from various perspectives and review the related benchmark datasets in detail. Further, since light fields can also provide depth maps, we review SOD models and popular benchmark datasets from that domain as well. Moreover, to investigate the capability of existing models, we carry out a comprehensive evaluation, as well as an attribute-based evaluation, of several representative RGB-D based SOD models. Finally, we discuss several challenges and open directions of RGB-D based SOD for future research. All collected models, benchmark datasets, source code links, datasets constructed for attribute-based evaluation, and evaluation code will be made publicly available at https://github.com/taozh2017/RGBDSODsurvey.
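
    For readers unfamiliar with how such surveys score models, the sketch below computes two standard SOD evaluation metrics, mean absolute error and F-measure at an adaptive threshold, on a synthetic saliency map and ground-truth mask; it is a generic illustration, not the evaluation code released with the survey.

```python
# Minimal sketch of two standard SOD evaluation metrics: mean absolute error (MAE)
# and F-measure at an adaptive threshold. Inputs are synthetic arrays standing in
# for a predicted saliency map and a binary ground-truth mask.
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a [0,1] saliency map and a binary mask."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3):
    """F-measure with the commonly used adaptive threshold (2 x mean saliency)."""
    thresh = min(2 * pred.mean(), 1.0)
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (gt.sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)

rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:44, 20:44] = True                                    # toy ground-truth object
pred = np.clip(gt + rng.normal(0, 0.2, gt.shape), 0, 1)    # noisy "prediction"

print("MAE:", mae(pred, gt))
print("F-measure:", f_measure(pred, gt))
```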