
    A systematic review of deep learning methods applied to ocular images

    Artificial intelligence is having an important effect on different areas of medicine, and ophthalmology has been no exception. In particular, deep learning methods have been applied successfully to the detection of clinical signs and the classification of ocular diseases, which represents great potential to increase the number of people correctly and promptly diagnosed. In ophthalmology, deep learning methods have primarily been applied to eye fundus images and optical coherence tomography. On the one hand, these methods have achieved outstanding performance in the detection of ocular diseases such as diabetic retinopathy, glaucoma, diabetic macular degeneration, and age-related macular degeneration. On the other hand, several worldwide challenges have shared large eye-imaging datasets with segmentations of parts of the eye, clinical signs, and ocular diagnoses performed by experts. In addition, these methods are breaking the stigma of black-box models by delivering interpretable clinical information. This review provides an overview of the state-of-the-art deep learning methods used on ophthalmic images, the available databases, and potential challenges for ocular diagnosis.
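    As a concrete illustration of the kind of pipeline this review covers, below is a minimal, hypothetical sketch of a transfer-learning classifier for eye fundus images. The data directory, backbone choice, and training schedule are illustrative assumptions, not details taken from the review.

```python
# Minimal sketch: transfer-learning classifier for eye fundus images.
# Hypothetical data layout: fundus/<class_name>/*.jpg; all names are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # standard ImageNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("fundus", transform=transform)  # hypothetical path
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                      # short schedule, for illustration only
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

    Published systems typically add domain-specific preprocessing (field-of-view cropping, contrast normalization) and rigorous validation on held-out clinical data.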

    Deep learning analysis of eye fundus images to support medical diagnosis

    Machine learning techniques have been successfully applied to support medical decision making in cancer, heart disease, and degenerative brain disease. In particular, deep learning methods have been used for early detection of abnormalities in the eye that could improve the diagnosis of different ocular diseases, especially in developing countries, where there are major limitations on access to specialized medical care. However, the early detection of clinical signs such as blood vessel and optic disc alterations, exudates, hemorrhages, drusen, and microaneurysms presents three main challenges: ocular images can be affected by noise artifacts, the features of the clinical signs depend on the specific acquisition source, and combining local signs with a disease-grading label is not an easy task. This research approaches the problem of combining local signs and global labels from different acquisition sources of medical information as a valuable tool to support medical decision making in ocular diseases. Different models were developed for different eye diseases. Four models were developed using eye fundus images. For DME, a two-stage model was designed in which a shallow model predicts an exudate binary mask; the binary mask is then stacked with the raw fundus image into a 4-channel array that serves as input to a deep convolutional neural network for diabetic macular edema diagnosis. For glaucoma, three deep learning models were developed. The first is a three-stage model comprising an initial stage that automatically segments two binary masks for the optic disc and physiological cup, an automatic morphometric feature extraction stage operating on those segmentations, and a final classification stage that supports the glaucoma diagnosis with intermediate medical information. The other two are late-data-fusion methods that fuse morphometric features from Cartesian and polar segmentations of the optic disc and physiological cup with features extracted from the raw eye fundus images. On the other hand, two models were defined using optical coherence tomography. The first is a customized convolutional neural network, termed OCT-NET, that extracts features from OCT volumes to classify DME, DR-DME, and AMD conditions; in addition, this model generates images highlighting local information about the clinical signs and estimates the number of slices inside a volume with local abnormalities. The second is a 3D deep learning model that takes OCT volumes as input to estimate the retinal thickness map, which is useful for grading AMD. The methods were systematically evaluated using ten freely available public datasets, compared and validated against other state-of-the-art algorithms, and the results were also qualitatively evaluated by ophthalmology experts from Fundación Oftalmológica Nacional. In addition, the proposed methods were tested as a diagnosis support tool for diabetic macular edema, glaucoma, diabetic retinopathy, and age-related macular degeneration using two different ocular imaging representations. We therefore consider that this research could be a significant step toward building telemedicine tools that support medical personnel in detecting ocular diseases using eye fundus images and optical coherence tomography.
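    The 4-channel stacking idea in the DME model above can be sketched as follows; this is a hedged illustration under assumed layer sizes and names, not the thesis's actual architectures.

```python
# Sketch of the 4-channel stacking idea: a shallow model predicts an exudate
# mask, which is concatenated with the RGB fundus image for a deep classifier.
# Architectures and sizes are illustrative assumptions, not the original models.
import torch
import torch.nn as nn

class ShallowExudateSegmenter(nn.Module):
    """Tiny fully convolutional net producing a 1-channel exudate probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class DMEClassifier(nn.Module):
    """Deep CNN taking the 4-channel (RGB + mask) array as input."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

fundus = torch.rand(1, 3, 224, 224)            # placeholder RGB fundus image
mask = ShallowExudateSegmenter()(fundus)       # stage 1: exudate probability mask
stacked = torch.cat([fundus, mask], dim=1)     # stage 2 input: 4-channel array
logits = DMEClassifier()(stacked)              # DME diagnosis logits
```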

    Methods to Improve the Prediction Accuracy and Performance of Ensemble Models

    The application of ensemble predictive models has been an important research area in medical diagnostics, engineering diagnostics, and other smart-device applications. Most current predictive models are complex and unreliable, despite numerous past efforts by the research community. The prediction accuracy of such models has not always been realised, owing to factors such as complexity and class imbalance. There is therefore a need to improve the predictive accuracy of current ensemble models and to enhance their reliability and their application as non-invasive predictive tools. The research presented in this thesis adopts a pragmatic, phased approach: new ensemble models are proposed and developed using multiple methods and validated through rigorous testing and implementation across four phases. The first phase comprises empirical investigations of standalone and ensemble algorithms, carried out to ascertain how classifier complexity and simplicity affect performance. The second phase comprises an improved ensemble model that integrates the Extended Kalman Filter (EKF), Radial Basis Function Network (RBFN), and AdaBoost algorithms. The third phase comprises an extended model based on early-stopping concepts, the AdaBoost algorithm, and the statistical performance of the training samples, designed to minimize overfitting. The fourth phase comprises an enhanced analytical multivariate logistic regression model developed to reduce complexity and improve the prediction accuracy of logistic regression. To facilitate practical application of the proposed models, an ensemble non-invasive analytical tool is developed that bridges the gap between theory and practice in predicting breast cancer survivability. The empirical findings suggest that: (1) increasing the complexity and topology of algorithms does not necessarily lead to better performance; (2) boosting by resampling performs slightly better than boosting by reweighting; (3) the proposed ensemble EKF-RBFN-AdaBoost model achieves better prediction accuracy than several established ensemble models; (4) the proposed early-stopped model converges faster and controls overfitting better than comparable models; (5) the proposed multivariate logistic regression concept reduces model complexity; and (6) the proposed non-invasive analytical tool performs comparatively better than many benchmark analytical tools used in predicting breast cancer and diabetic ailments. The contributions to ensemble practice are: (1) the integration of the EKF, RBFN, and AdaBoost algorithms into a single ensemble model; (2) the development and validation of an ensemble model based on early-stopping concepts, AdaBoost, and statistical properties of the training samples; (3) the development and validation of a predictive logistic regression model for breast cancer; and (4) the development and validation of a non-invasive breast cancer analytical tool based on the predictive models proposed in this thesis. To validate their prediction accuracy, the proposed models were applied to breast cancer survivability and diabetes diagnostic tasks; in comparison with other established models, the simulation results showed improved predictive accuracy. The thesis outlines the benefits of the proposed models and proposes directions for future work that could further extend and improve them.
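    As a hedged sketch of the early-stopping idea described in the third phase, the following monitors validation error across AdaBoost rounds and keeps the best iteration; it is a generic scikit-learn illustration, not the thesis's EKF-RBFN-AdaBoost implementation.

```python
# Sketch: early stopping for AdaBoost by monitoring staged validation error.
# Generic illustration only; not the thesis's proposed model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# staged_predict yields predictions after each boosting round, so we can find
# the round where validation error bottoms out and stop there.
val_errors = [np.mean(pred != y_val) for pred in model.staged_predict(X_val)]
best_round = int(np.argmin(val_errors)) + 1
print(f"best number of boosting rounds: {best_round}")

# Refit with the selected number of rounds (the "early stopped" model).
stopped = AdaBoostClassifier(n_estimators=best_round, random_state=0).fit(X_tr, y_tr)
```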

    Prediction Of Heart Failure Decompensations Using Artificial Intelligence - Machine Learning Techniques

    Sections 4.4.1, 4.4.2, and 4.4.3 of Chapter 4 are subject to confidentiality at the author's request. Heart failure (HF) is a major public health concern. Its total impact is amplified by its high incidence and prevalence and its unfavourable medium-term prognosis, and it leads to enormous healthcare resource consumption. Moreover, efforts to develop a deterministic understanding of rehospitalization have proved difficult, as no specific patient or hospital factors have been shown to consistently predict 30-day readmission after hospitalization for HF. Taking all these facts into account, we set out to improve the care of patients with HF. Until now, we had been using telemonitoring with a codification system that generated alarms depending on the received values; however, these simple rules generated a large number of false alerts and were therefore not trustworthy. The aims of this work are to: (i) assess the benefits of remote patient telemonitoring (RPT); (ii) improve the results obtained with RPT using machine learning techniques, detecting which telemonitored parameters best predict HF decompensations and building predictive models that reduce false alerts and detect early decompensations that would otherwise lead to hospital admissions; and (iii) determine the influence of environmental factors on HF decompensations. The conclusions of this study are: 1. Benefits of RPT: telemonitoring has not shown a statistically significant reduction in the number of HF-related hospital admissions. Nevertheless, we observed a statistically significant reduction in mortality in the intervention group, with a considerable percentage of deaths from non-cardiovascular causes. Moreover, patients regarded the RPT programme as a tool that can help them control their chronic disease and improve their relationship with health professionals. 2. Improving RPT results with machine learning: significant weight increases, desaturation below 90%, and perceived clinical worsening, including development of oedema, worsening functional class, and orthopnoea, are good predictors of HF decompensation. In addition, machine learning techniques improved on the alert system currently implemented in our hospital: the resulting system notably reduces the number of false alerts, although at some cost in sensitivity. The best results were achieved by a Bernoulli Naive Bayes model applied to the combination of telemonitoring alerts and questionnaire alerts (weight, ankle oedema, and well-being, plus the yellow alerts for systolic blood pressure, diastolic blood pressure, O2 saturation, and heart rate). 3. Influence of environmental factors: air temperature is the most significant environmental factor in our study (negative correlation), although other attributes such as precipitation are also relevant. This work also shows a consistent association between increasing airborne SO2 and NOx levels and HF hospitalizations.
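    The best-performing configuration above (Bernoulli Naive Bayes over the combined binary alerts) can be illustrated with a hedged sketch; the feature columns mirror the alerts listed in the abstract, but the data and names here are synthetic placeholders, not the study's records.

```python
# Sketch: Bernoulli Naive Bayes over binary alert indicators, mirroring the
# alert combination described above. Synthetic placeholder data only.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Columns: weight alert, ankle oedema alert, well-being alert, and "yellow"
# alerts for systolic BP, diastolic BP, O2 saturation, and heart rate.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 7))       # synthetic binary alert vectors
y = rng.integers(0, 2, size=200)            # 1 = decompensation episode (synthetic)

model = BernoulliNB().fit(X, y)
risk = model.predict_proba(X[:5])[:, 1]     # estimated decompensation probability
print(risk)
```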

    Biomedical Applications of Mid-Infrared Spectroscopic Imaging and Multivariate Data Analysis: Contribution to the Understanding of Diabetes Pathogenesis

    Diabetic retinopathy (DR) is a microvascular complication of diabetes and a leading cause of adult vision loss. Although a great deal of progress has been made in ophthalmological examinations and clinical approaches to detecting the signs of retinopathy in patients with diabetes, outstanding questions remain regarding the molecular and biochemical changes involved. To discover the biochemical mechanisms underlying the development and progression of diabetes-related changes in the retina, a more comprehensive understanding of the bio-molecular processes in individual retinal cells subjected to hyperglycemia is required. Animal models provide a suitable resource for the temporal detection of the underlying pathophysiological and biochemical changes associated with DR, which is not fully attainable in human studies. In the present study, I aimed to determine the nature of diabetes-induced, highly localized biochemical changes in retinal tissue from Ins2Akita/+ (Akita/+; a model of Type I diabetes) male mice with different durations of diabetes. Employing label-free, spatially resolved Fourier transform infrared (FT-IR) imaging coupled with chemometric tools enabled me to identify reproducible, duration-dependent biomarkers of the diabetic retinal tissue from mice with 6 or 12 weeks, and 6 or 10 months, of diabetes. I report, for the first time, the origin of molecular changes in the biochemistry of individual retinal layers with different durations of diabetes. A robust classification between distinct retinal layers, namely the photoreceptor layer (PRL), outer plexiform layer (OPL), inner nuclear layer (INL), and inner plexiform layer (IPL), and their associated duration-dependent spectral biomarkers was delineated. Spatially resolved, super-resolution chemical images revealed oxidative stress-induced structural and morphological alterations within the nuclei of the photoreceptors. Comparison among the PRL, OPL, INL, and IPL suggested that the photoreceptor layer is the most susceptible to oxidative stress in short-duration diabetes. Moreover, for the first time, we present the duration-dependent molecular alterations of the PRL, OPL, INL, and IPL from Akita/+ mice with the progression of diabetes. These findings are potentially important and may be of particular benefit in understanding the molecular and biological activity of retinal cells during oxidative stress in diabetes. Our integrative paradigm provides a new conceptual framework and a significant rationale for a better understanding of the molecular and cellular mechanisms underlying the development and progression of DR. This approach may yield alternative and potentially complementary methods for the assessment of diabetic changes. It is expected that the conclusions drawn from this work will bridge the gap in our knowledge regarding the biochemical mechanisms of DR and address some critical needs in the biomedical community.
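    For readers unfamiliar with the chemometric step, a generic hedged sketch of spectral classification (PCA for dimensionality reduction followed by linear discriminant analysis, evaluated by cross-validation) is shown below; the spectra and layer labels are synthetic placeholders, and this is not the author's actual pipeline.

```python
# Generic chemometrics sketch: PCA + LDA classification of FT-IR spectra by
# retinal layer (PRL/OPL/INL/IPL). Synthetic placeholder data; illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
spectra = rng.normal(size=(120, 900))   # 120 spectra x 900 wavenumber points
layers = rng.integers(0, 4, size=120)   # 0=PRL, 1=OPL, 2=INL, 3=IPL

pipeline = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(pipeline, spectra, layers, cv=5)
print(scores.mean())                    # mean cross-validated accuracy
```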

    Non-communicable Diseases, Big Data and Artificial Intelligence

    This reprint includes 15 articles in the field of non-communicable diseases, big data, and artificial intelligence, providing an overview of the most recent advances in AI and their application potential in 3P medicine.

    Contributions of machine learning techniques to cardiology: prediction of restenosis after coronary stent implantation

    Background: Few current topics rival the possibility that today's technology may develop the same capabilities as human beings, even in medicine. This capacity of machines or computer systems to simulate human intelligence processes is what we now call artificial intelligence. One of the fields of artificial intelligence with the greatest application in medicine today is prediction, recommendation, and diagnosis, where machine learning techniques are applied. There is also growing interest in precision medicine, where machine learning techniques can offer individualized care to each patient. Percutaneous coronary intervention (PCI) with stenting has become standard practice in the revascularization of coronary vessels with significant obstructive atherosclerotic disease. PCI is also the gold-standard treatment for patients with acute myocardial infarction, reducing rates of death and recurrent ischaemia compared with medical treatment. The long-term success of the procedure is limited by in-stent restenosis, a pathological process that causes recurrent arterial narrowing at the PCI site. Identifying which patients will develop restenosis is an important clinical challenge, since restenosis may present as a new acute myocardial infarction or force a new revascularization of the affected vessel, and recurrent restenosis represents a therapeutic challenge. Objectives: After reviewing artificial intelligence techniques applied to medicine, and in greater depth machine learning techniques applied to cardiology, the main objective of this doctoral thesis was to develop a machine learning model to predict the occurrence of restenosis in patients with acute myocardial infarction undergoing PCI with stent implantation. Secondary objectives were to compare the machine learning model with the classical restenosis risk scores used to date, and to develop software that brings this contribution into daily clinical practice in a simple way. To build an easily applicable model, we made our predictions without any variables beyond those obtained in routine practice. Material: The dataset, obtained from the GRACIA-3 trial, consisted of 263 patients with demographic, clinical, and angiographic characteristics; 23 of them presented restenosis 12 months after stent implantation. All development was carried out in Python using cloud computing, specifically AWS (Amazon Web Services). Methods: We used a methodology suited to small, imbalanced datasets, notably a nested cross-validation scheme and the use of precision-recall (PR) curves, in addition to ROC curves, to interpret the models. The algorithms most common in the literature were trained, and the best-performing one was selected. Results: The best-performing model was built with an extremely randomized trees classifier, which significantly outperformed (area under the ROC curve 0.77) the three classical clinical scores: PRESTO-1 (0.58), PRESTO-2 (0.58), and TLR (0.62). The precision-recall curves offered a more accurate picture of the performance of the extremely randomized trees model, showing an efficient algorithm (0.96) for non-restenosis, with high precision and high recall. At the threshold considered optimal, out of 1,000 patients undergoing stent implantation, our machine learning model would correctly predict 181 (18%) more cases than the best classical risk score (TLR). The most important variables, ranked by their contribution to the predictions, were diabetes, coronary disease in two or more vessels, post-PCI TIMI flow, abnormal platelets, post-PCI thrombus, and abnormal cholesterol. Finally, a calculator was developed to bring the model into clinical practice. The calculator estimates each patient's individual risk and places the patient in a risk zone, helping the physician decide on appropriate follow-up. Conclusions: Applied immediately after stent implantation, a machine learning model differentiates patients who will or will not develop restenosis better than the current classical discriminators.
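    The evaluation scheme the abstract describes (an extremely randomized trees classifier under nested cross-validation, scored with both ROC and precision-recall summaries on a small, imbalanced outcome) can be sketched as follows; the data here are synthetic stand-ins, not the GRACIA-3 cohort.

```python
# Sketch: extremely randomized trees evaluated with nested cross-validation,
# reporting ROC AUC and average precision (a PR-curve summary) for an
# imbalanced outcome. Synthetic data; not the GRACIA-3 dataset or thesis code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_validate

X, y = make_classification(n_samples=263, n_features=20, weights=[0.91],
                           random_state=0)   # ~9% positives, mimicking 23/263

inner = StratifiedKFold(5, shuffle=True, random_state=0)
outer = StratifiedKFold(5, shuffle=True, random_state=1)

# The inner loop tunes hyperparameters; the outer loop gives an unbiased
# performance estimate, which matters on small datasets.
search = GridSearchCV(
    ExtraTreesClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [3, None]},
    scoring="average_precision", cv=inner,
)
results = cross_validate(search, X, y, cv=outer,
                         scoring=["roc_auc", "average_precision"])
print(results["test_roc_auc"].mean(), results["test_average_precision"].mean())
```

    On imbalanced outcomes such as restenosis, average precision is usually the more informative summary, since ROC AUC can look optimistic when negatives dominate.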

    Recent publications from the Alzheimer's Disease Neuroimaging Initiative: Reviewing progress toward improved AD clinical trials

    INTRODUCTION: The Alzheimer's Disease Neuroimaging Initiative (ADNI) has continued development and standardization of methodologies for biomarkers and has provided an increased depth and breadth of data available to qualified researchers. This review summarizes the over 400 publications using ADNI data during 2014 and 2015. METHODS: We used standard searches to find publications using ADNI data. RESULTS: (1) Structural and functional changes, including subtle changes to hippocampal shape and texture, atrophy in areas outside of hippocampus, and disruption to functional networks, are detectable in presymptomatic subjects before hippocampal atrophy; (2) In subjects with abnormal β-amyloid deposition (Aβ+), biomarkers become abnormal in the order predicted by the amyloid cascade hypothesis; (3) Cognitive decline is more closely linked to tau than Aβ deposition; (4) Cerebrovascular risk factors may interact with Aβ to increase white-matter (WM) abnormalities, which may accelerate Alzheimer's disease (AD) progression in conjunction with tau abnormalities; (5) Different patterns of atrophy are associated with impairment of memory and executive function and may underlie psychiatric symptoms; (6) Structural, functional, and metabolic network connectivities are disrupted as AD progresses. Models of prion-like spreading of Aβ pathology along WM tracts predict known patterns of cortical Aβ deposition and declines in glucose metabolism; (7) New AD risk and protective gene loci have been identified using biologically informed approaches; (8) Cognitively normal and mild cognitive impairment (MCI) subjects are heterogeneous and include groups typified not only by "classic" AD pathology but also by normal biomarkers, accelerated decline, and suspected non-Alzheimer's pathology; (9) Selection of subjects at risk of imminent decline on the basis of one or more pathologies improves the power of clinical trials; (10) Sensitivity of cognitive outcome measures to early changes in cognition has been improved, and surrogate outcome measures using longitudinal structural magnetic resonance imaging may further reduce clinical trial cost and duration; (11) Advances in machine learning techniques such as neural networks have improved diagnostic and prognostic accuracy, especially in challenges involving MCI subjects; and (12) Network connectivity measures and genetic variants show promise in multimodal classification, and some classifiers using single modalities are rivaling multimodal classifiers. DISCUSSION: Taken together, these studies fundamentally deepen our understanding of AD progression and its underlying genetic basis, which in turn informs and improves clinical trial design.