412 research outputs found

    A Knowledge-based Integrative Modeling Approach for In-Silico Identification of Mechanistic Targets in Neurodegeneration with Focus on Alzheimer's Disease

    Get PDF
    Dementia is a progressive decline in cognitive function due to damage or disease in the brain beyond what might be expected from normal aging. Based on neuropathological and clinical criteria, dementia includes a spectrum of diseases, namely Alzheimer's dementia, Parkinson's dementia, Lewy body disease, Alzheimer's dementia with Parkinson's, Pick's disease, semantic dementia, and large and small vessel disease. These disorders are thought to result from a combination of genetic and environmental risk factors. Despite the knowledge accumulated about the pathophysiological and clinical characteristics of the disease, no coherent and integrative picture of the molecular mechanisms underlying neurodegeneration in Alzheimer's disease is available. Existing drugs offer patients only symptomatic relief and lack any effective disease-modifying effect. The present research proposes a knowledge-based rationale for integrative modeling of the disease mechanism to identify potential candidate targets and biomarkers in Alzheimer's disease. Integrative disease modeling is an emerging knowledge-based paradigm in translational research that exploits the power of computational methods to collect, store, integrate, model and interpret accumulated disease information across biological scales, from molecules to phenotypes. It prepares the ground for transitioning from a 'descriptive' to a 'mechanistic' representation of disease processes. The proposed approach was used to introduce an integrative framework that combines, on the one hand, knowledge extracted from the literature using semantically supported text-mining technologies and, on the other hand, primary experimental data such as gene/protein expression or imaging readouts. The aim of this hybrid integrative modeling approach was not only to provide a consolidated systems view of the disease mechanism as a whole but also to increase the specificity and sensitivity of the mechanistic model by providing disease-specific context. The approach was successfully used to correlate clinical manifestations of the disease with their corresponding molecular events and led to the identification and modeling of three important mechanistic components underlying Alzheimer's dementia, namely the CNS, immune and endocrine components. These models were validated using a novel in-silico validation method, biomarker-guided pathway analysis, and a pathway-based target identification approach was introduced, which resulted in the identification of the MAPK signaling pathway as a potential candidate target at the crossroads of the triad of components underlying the disease mechanism in Alzheimer's dementia.
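    The abstract does not specify the pathway analysis algorithmically; a common building block for pathway-based target identification is over-representation testing of a candidate gene set against a pathway gene set. The sketch below illustrates that idea with a hypergeometric test; the gene sets and background size are hypothetical placeholders, not the thesis's actual data.

    # Minimal sketch of pathway over-representation testing (hypothetical gene sets).
    from scipy.stats import hypergeom

    def pathway_enrichment(candidate_genes, pathway_genes, background_size):
        """P-value that the candidate set overlaps the pathway more than chance."""
        overlap = len(candidate_genes & pathway_genes)
        # hypergeom.sf(k - 1, M, n, N) gives P(overlap >= k)
        return hypergeom.sf(overlap - 1, background_size,
                            len(pathway_genes), len(candidate_genes))

    # Hypothetical example: disease-associated genes vs. a MAPK-signaling gene set
    candidates = {"MAPK1", "MAPK3", "APP", "PSEN1", "IL6"}
    mapk_pathway = {"MAPK1", "MAPK3", "MAP2K1", "RAF1", "EGFR"}
    print(pathway_enrichment(candidates, mapk_pathway, background_size=20000))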

    Multimodal and multiscale brain networks: understanding aging, Alzheimer's disease, and other neurodegenerative disorders

    Get PDF
    The human brain can be modeled as a complex network, often referred to as the connectome, in which structural and functional connections govern its organization. Several neuroimaging studies have focused on understanding the architecture of healthy brain networks and have shed light on how these networks evolve with age and in the presence of neurodegenerative disorders. Many studies have explored brain networks in Alzheimer's disease (AD), the most common type of dementia, using various neuroimaging modalities independently. However, most of these studies ignored the complex and multifactorial nature of AD. The aim of this thesis was to investigate and analyze the brain's multimodal and multiscale network organization in aging and in AD using different multilayer brain network analyses and different types of data. Additionally, this research extended its scope to other dementias, such as the Lewy body dementias, allowing a comparison of these disorders with AD and normal aging; these comparisons were made possible through the application of protein co-expression networks. In Study I, we investigated sex differences in healthy individuals using multimodal brain networks. To do this, we used resting-state functional magnetic resonance imaging (rs-fMRI) and diffusion-weighted imaging (DWI) data from the Human Connectome Project (HCP) to perform multilayer and deep learning analyses. These analyses identified differences between men's and women's underlying brain network organization, showing that deep-learning analysis with multilayer network metrics (area under the curve, AUC, of 0.81) outperformed classification using single-layer network measures (AUC of 0.72 for functional networks and 0.70 for anatomical networks). Furthermore, we integrated the multilayer brain network methodology and neural network models into a software package that is easy to use for researchers from different backgrounds and easily extensible by researchers with different levels of programming experience. We then used the multilayer brain network methodology to study the interaction between sex and age on functional network topology in a large group of people from the UK Biobank (Study II). By incorporating multilayer brain network analyses, we analyzed both positive and negative connections derived from functional correlations and obtained important insights into how cognitive abilities, physical health, and even genetic factors differ between men and women as they age. Age and sex were strongly associated with multiplex and multilayer measures such as the multiplex participation coefficient, multilayer clustering, and multilayer global efficiency, accounting for up to 89.1%, 79.9%, and 79.5% of the age-related variance, respectively. These results indicate that incorporating separate layers for positive and negative connections within a complex network framework reveals sensitive insights into age- and sex-related variations that are not detected by traditional metrics. Furthermore, our functional metrics exhibited associations with genes that have previously been linked to aging-related processes. In Study III, we assessed whether multilayer connectome analyses could offer new perspectives on the relationship between amyloid pathology and gray matter atrophy across the AD continuum.
Subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) were divided into four groups based on cerebrospinal fluid (CSF) amyloid-β (Aβ) biomarker levels and clinical diagnosis. We compared the groups using weighted and binary multilayer measures that assess the strength of the connections, the modularity, and the multiplex segregation and integration of the brain connectomes. Across Aβ-positive (Aβ+) groups, we found widespread increases in overlapping connectivity strength and decreases in the number of identical connections across the two layers. Moreover, the brain modules were reorganized in the mild cognitive impairment (MCI) Aβ+ group, and an imbalance in the number of couplings between the two layers was found in patients with MCI Aβ+ and AD Aβ+. Using a subsample from the same database, ADNI, we analyzed rs-fMRI data from individuals at preclinical and clinical stages of AD (Study IV). By dividing the time series into different time windows, we built temporal multilayer networks and studied their modular organization across time. This temporal multilayer network approach captured the dynamic changes across different AD stages, achieving areas under the curve of 0.90, 0.92 and 0.99 in distinguishing controls from preclinical, prodromal, and clinical AD stages, respectively, over and above common risk factors. Our results not only improved the discrimination between disease stages but, importantly, also showed that dynamic multilayer functional measures are associated with memory and global cognition, in addition to amyloid and tau load derived from positron emission tomography. These results highlight the potential of dynamic multilayer functional connectivity measures as functional biomarkers of AD progression. In Study V, we used in-depth quantitative proteomics to compare post-mortem brains across three key regions (prefrontal, cingulate, and parietal cortex) directly related to the disease mechanisms of AD, Parkinson's disease with dementia (PDD), and dementia with Lewy bodies (DLB) in prospectively followed patients and older adults without dementia. We used covariance-weighted networks to find modules of protein sets to further understand altered pathways in these dementias and their implications for prognostic and diagnostic purposes. In conclusion, this thesis explored the complex world of brain networks and offered insight into how age, sex, and AD influence these networks. We have improved our understanding of how the brain is organized across different imaging modalities and time scales, and developed software tools to make this methodology available to more researchers. Additionally, we assessed the connections among various proteins in different areas of the brain in relation to health, Alzheimer's disease, and Lewy body dementias. This work contributes to the collective effort of unraveling the mysteries of human brain organization and offers a foundation for future research to understand brain networks in health and disease.
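    As one concrete illustration of the multilayer measures named above, the sketch below computes the multiplex participation coefficient for a two-layer network (for example, a functional layer and a structural layer) using its standard definition; the adjacency matrices are random placeholders, not HCP, UK Biobank or ADNI data.

    # Minimal sketch: multiplex participation coefficient for a two-layer connectome.
    import numpy as np

    def multiplex_participation(layers):
        """layers: list of (n x n) weighted adjacency matrices, one per layer."""
        M = len(layers)
        strength = np.array([A.sum(axis=1) for A in layers])   # per-layer node strength, shape (M, n)
        overlap = strength.sum(axis=0)                          # overlapping strength o_i
        ratio = np.divide(strength, overlap, out=np.zeros_like(strength),
                          where=overlap > 0)
        return (M / (M - 1)) * (1.0 - (ratio ** 2).sum(axis=0))

    rng = np.random.default_rng(0)
    n = 90                                                      # e.g., 90 atlas regions (assumption)
    func = np.abs(rng.normal(size=(n, n)))                      # functional layer (placeholder)
    struc = rng.random((n, n))                                  # structural layer (placeholder)
    func, struc = (func + func.T) / 2, (struc + struc.T) / 2
    np.fill_diagonal(func, 0)
    np.fill_diagonal(struc, 0)
    print(multiplex_participation([func, struc])[:5])           # ~1 = strength evenly spread across layers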

    Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data

    Full text link
    Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analysis of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data, we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy, and its hallmark will be 'team science'.
    http://deepblue.lib.umich.edu/bitstream/2027.42/134522/1/13742_2016_Article_117.pd

    Advancing Precision Medicine: Unveiling Disease Trajectories, Decoding Biomarkers, and Tailoring Individual Treatments

    Get PDF
    Chronic diseases are not only prevalent but also exert a considerable strain on the healthcare system, individuals, and communities. Nearly half of all Americans suffer from at least one chronic disease, and this number is still growing. The development of machine learning has brought new directions to chronic disease analysis. Many data scientists have devoted themselves to understanding how a disease progresses over time, which can lead to better patient management, identification of disease stages, and targeted interventions. However, because chronic diseases progress slowly, symptoms are barely noticed until the disease is advanced, which makes early detection challenging. Meanwhile, chronic diseases often have diverse underlying causes and can manifest differently among patients. Besides external factors, the development of chronic disease is also influenced by internal signals: differences at the DNA sequence level have been shown to confer a persistent predisposition to chronic diseases. Given these challenges, data must be analyzed at various scales, ranging from single nucleotide polymorphisms (SNPs) to individuals and populations, to better understand disease mechanisms and provide precision medicine. Therefore, this research aimed to develop an automated pipeline that spans building predictive models and estimating individual treatment effects from structured general electronic health record (EHR) data to identifying genetic variations (e.g., SNPs) associated with disease, in order to unravel the genetic underpinnings of chronic diseases. First, we used structured EHRs to uncover chronic disease progression patterns and assess the dynamic contribution of clinical features. In this step, we employed causal inference methods (constraint-based and functional causal models) for feature selection and utilized Markov chains, attention-based long short-term memory (LSTM) networks, and Gaussian processes (GPs). SHapley Additive exPlanations (SHAP) and local interpretable model-agnostic explanations (LIME) further extended the work to identify important clinical features. Next, we developed a novel counterfactual-based method to predict individual treatment effects (ITE) from observational data. To learn a 'balanced' representation in which the treated and control distributions look similar, we disentangled the doctor's preference from the covariates and rebuilt the representations of the treated and control groups. We used integral probability metrics to measure distances between distributions: the expected ITE estimation error of a representation was bounded by the sum of the standard generalization error of that representation and the distance between the induced distributions. Finally, we performed genome-wide association studies (GWAS) based on the stage information extracted from our unsupervised disease progression model to identify biomarkers and explore the genetic correlation between the disease and its phenotypes.
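    The abstract does not name the specific integral probability metric used to compare treated and control representations; the kernel maximum mean discrepancy (MMD) is one common choice, and the sketch below computes it for two hypothetical representation matrices (random placeholders, not learned representations).

    # Minimal sketch: squared kernel MMD between treated and control representations.
    import numpy as np

    def rbf_mmd2(X, Y, sigma=1.0):
        """Biased estimate of squared MMD between samples X and Y (RBF kernel)."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    rng = np.random.default_rng(0)
    phi_treated = rng.normal(0.3, 1.0, size=(100, 16))   # hypothetical learned representations
    phi_control = rng.normal(0.0, 1.0, size=(120, 16))
    print(rbf_mmd2(phi_treated, phi_control))             # added to the factual loss as a balance penalty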

    Investigation of Multi-dimensional Tensor Multi-task Learning for Modeling Alzheimer's Disease Progression

    Get PDF
    Machine learning (ML) techniques for predicting Alzheimer's disease (AD) progression can significantly assist clinicians and researchers in constructing effective AD prevention and treatment strategies. The main constraints on the performance of current ML approaches are prediction accuracy and stability problems in medical small-dataset scenarios, monotonic data formats (loss of multi-dimensional knowledge of the data and of correlation knowledge between biomarkers), and limited biomarker interpretability. This thesis investigates how multi-dimensional information and knowledge from biomarker data can be integrated with multi-task learning approaches to predict AD progression. Firstly, a novel similarity-based quantification approach is proposed with two components: multi-dimensional knowledge vector construction and amalgamated magnitude-direction quantification of brain structural variation, which considers both the magnitude and directional correlations of structural variation between brain biomarkers and encodes the quantified data as a third-order tensor to address the problem of monotonic data form. Secondly, multi-task learning regression algorithms with the ability to integrate multi-dimensional tensor data and to mine MRI data for spatio-temporal structural variation information and knowledge were designed and constructed to improve the accuracy, stability and interpretability of AD progression prediction in medical small-dataset scenarios. The algorithm consists of three components: supervised symmetric tensor decomposition for extracting biomarker latent factors, tensor multi-task learning regression, and algorithmic regularisation terms. The proposed algorithm extracts a set of first-order latent factors from the raw data, each represented by its first biomarker, second biomarker and patient sample dimensions, to elucidate potential factors affecting the variability of the data in an interpretable manner. These latent factors are utilised as predictor variables for training the prediction model, which regards the prediction for each patient as a task, with each task sharing the set of biomarker latent factors obtained from the tensor decomposition. Knowledge sharing between tasks improves the generalisation ability of the model and addresses the problem of sparse medical data. The experimental results demonstrate that the proposed approach achieves superior accuracy and stability in predicting various cognitive scores of AD progression compared to single-task learning, benchmarks and state-of-the-art multi-task regression methods. The proposed approach identifies brain structural variations in patients, and the important brain biomarker correlations revealed by the experiments can be utilised as potential indicators for early identification of AD.
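    As a rough illustration of the general recipe (not the thesis's supervised symmetric decomposition), the sketch below decomposes a hypothetical third-order biomarker × biomarker × patient tensor with plain CP/PARAFAC via tensorly and feeds the patient-mode factors into a ridge regression for one cognitive-score task; all data are random placeholders.

    # Minimal sketch: CP decomposition of a third-order tensor + regression on patient factors.
    import numpy as np
    from tensorly.decomposition import parafac
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_biomarkers, n_patients, rank = 20, 50, 5
    X = rng.random((n_biomarkers, n_biomarkers, n_patients))   # quantified structural variation (placeholder)

    weights, factors = parafac(X, rank=rank)   # factor matrices for the biomarker, biomarker and patient modes
    patient_factors = factors[2]               # (n_patients, rank) shared latent features

    y = rng.random(n_patients)                 # one cognitive score per patient (placeholder)
    model = Ridge(alpha=1.0).fit(patient_factors, y)
    print(model.score(patient_factors, y))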

    Optimizing Alzheimer's disease prediction using the nomadic people algorithm

    Get PDF
    The problem with using microarray technology to detect diseases is that not every measured gene is analytically necessary. The presence of non-essential gene data adds a computing load to the detection method. Therefore, the purpose of this study is to reduce the size of the high-dimensional data by determining the most critical genes involved in Alzheimer's disease progression. The study also aims to classify patients using the subset of genes implicated in Alzheimer's disease. This paper uses feature selection techniques, namely information gain (IG) and a novel metaheuristic optimization technique based on a swarm algorithm derived from the behavior of nomadic peoples (NPO). The suggested method mimics the way these peoples move and search for new food sources. The method is based on a multi-swarm approach: there are several clans, each seeking the best foraging opportunities. Prediction is carried out, after selecting the informative genes, using a support vector machine (SVM), which is frequently used in a variety of prediction tasks. Prediction accuracy was used to evaluate the suggested system's performance. The results indicate that the NPO algorithm with the SVM model achieves high accuracy based on the gene subset obtained from the IG and NPO methods.
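    A minimal sketch of the filter-then-classify stage described above: mutual information stands in for the information-gain score, and a plain top-k filter stands in for the NPO metaheuristic (which the paper introduces and is not reproduced here); the expression matrix and labels are random placeholders.

    # Minimal sketch: information-gain-style gene filtering followed by an SVM classifier.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((80, 2000))                 # 80 samples x 2000 genes (hypothetical expression data)
    y = rng.integers(0, 2, size=80)            # AD vs. control labels (placeholder)

    clf = make_pipeline(SelectKBest(mutual_info_classif, k=50), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy, the evaluation metric used above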

    BRAIN AGE AS A MEASURE OF BRAIN RESERVE IN NEUROPSYCHIATRIC DISORDERS

    Get PDF
    Aging represents a highly heterogeneous process with highly variable clinical outcomes. Differential expression of risk and resilience factors may explain this variability. Gaining a better understanding of resilience in aging is critical, as it will allow for improved individualized outcome prediction and provide insight for targeted interventions that may improve the process of aging. Currently, the prevailing models of neurocognitive resilience are cognitive reserve and brain reserve. The theory of cognitive reserve suggests that those with greater cognitive reserve may better cope with loss of brain integrity through the presence of more adaptable and efficient neural systems. Most studies use education level to assess cognitive reserve; however, many proxy measures are subjective and susceptible to hindsight bias. The concept of brain reserve overlaps with that of cognitive reserve but focuses instead on the biological characteristics that allow the brain to be resilient to the effects of aging and pathological insults. It is generally thought that with sufficient brain substrate (e.g., larger grey matter volumes, greater synaptic density, more elaborate network complexity), the brain is more capable of preserving normal functioning and maintaining homeostasis despite the presence of neurodegeneration or trauma. Overall, the main goals of this dissertation are to demonstrate the impact of cognitive and brain reserve on neuropsychological outcomes and brain activation patterns (Aim 1, Chapters 2 and 3), to utilize machine learning brain age prediction as a novel proxy of brain reserve (Aim 2, Chapter 4), and to utilize brain age prediction in several neuropsychiatric disorders to predict outcomes or gain a better understanding of the disease process (Aim 3, Chapters 5, 6, 7).
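    The abstract does not detail the brain age prediction used in Aims 2 and 3; a common formulation is to regress chronological age on imaging features and take the brain-age gap (predicted minus chronological age) as the reserve-related quantity. The sketch below follows that formulation with random placeholder features, not the dissertation's actual model or data.

    # Minimal sketch: brain age prediction and brain-age gap from imaging features.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    features = rng.random((200, 100))           # e.g., regional grey-matter volumes (hypothetical)
    age = 50 + 25 * rng.random(200)             # chronological age in years (placeholder)

    predicted_age = cross_val_predict(Ridge(alpha=10.0), features, age, cv=5)
    brain_age_gap = predicted_age - age         # positive gap = "older-looking" brain, lower reserve
    print(brain_age_gap[:5])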

    Early diagnosis of Alzheimer's disease: the role of biomarkers including advanced EEG signal analysis. Report from the IFCN-sponsored panel of experts

    Get PDF
    Alzheimer's disease (AD) is the most common neurodegenerative disease among the elderly, characterized by a progressive decline in cognitive function that significantly affects quality of life. Both the prevalence of AD and its emotional and financial burdens on patients, their families, and society are predicted to grow significantly in the near future, due to a prolongation of the lifespan. Several lines of evidence suggest that modifications of risk-enhancing lifestyles and initiation of pharmacological and non-pharmacological treatments in the early stage of disease, although not able to modify its course, help to maintain personal autonomy in daily activities and significantly reduce the total costs of disease management. Moreover, many clinical trials with potentially disease-modifying drugs are devoted to the prodromal stages of AD. Thus, the identification of markers of conversion from the prodromal form to clinical AD may be crucial for developing strategies for early intervention. The currently available markers, including volumetric magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) analysis, are expensive, poorly available in community health facilities, and relatively invasive. Taking into account its low cost, widespread availability and non-invasiveness, electroencephalography (EEG) would represent a candidate for tracking the prodromal phases of cognitive decline in routine clinical settings, possibly in combination with other markers. In this scenario, the present paper provides an overview of epidemiology, genetic risk factors, and neuropsychological, fluid and neuroimaging biomarkers in AD, and describes the potential role of EEG in AD investigation, trying in particular to point out whether advanced analysis of EEG rhythms exploring brain function has sufficient specificity, sensitivity and accuracy for the early diagnosis of AD.
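    The report does not specify here which measures fall under "advanced analysis of EEG rhythms"; relative power in the canonical frequency bands, computed from a Welch power spectrum, is one of the simplest rhythm-based measures, and the sketch below computes it for a synthetic signal rather than patient EEG.

    # Minimal sketch: relative EEG band power from a Welch power spectral density.
    import numpy as np
    from scipy.signal import welch

    fs = 250                                             # sampling rate in Hz (assumption)
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # 10 Hz alpha rhythm + noise

    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    broadband = (freqs >= 1) & (freqs < 30)
    total = np.trapz(psd[broadband], freqs[broadband])
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        print(name, np.trapz(psd[mask], freqs[mask]) / total)   # relative band power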

    Deep Interpretability Methods for Neuroimaging

    Get PDF
    Brain dynamics are highly complex and yet hold the key to understanding brain function and dysfunction. The dynamics captured by resting-state functional magnetic resonance imaging data are noisy, high-dimensional, and not readily interpretable. The typical approach of reducing these data to low-dimensional features and focusing on the most predictive ones comes with strong assumptions and can miss essential aspects of the underlying dynamics. In contrast, introspection of discriminatively trained deep learning models may uncover disorder-relevant elements of the signal at the level of individual time points and spatial locations. Nevertheless, the difficulty of reliable training on high-dimensional but small-sample datasets and the unclear relevance of the resulting predictive markers prevent the widespread use of deep learning in functional neuroimaging. In this dissertation, we address these challenges by proposing a deep learning framework to learn from high-dimensional dynamical data while maintaining stable, ecologically valid interpretations. The developed model is pre-trainable and alleviates the need to collect an enormous number of neuroimaging samples to achieve optimal training. We also provide a quantitative validation module, Retain and Retrain (RAR), that can objectively verify the higher predictability of the dynamics learned by the model. The results demonstrate that the proposed framework enables learning fMRI dynamics directly from small data and capturing compact, stable interpretations of features predictive of function and dysfunction. We also comprehensively reviewed the deep interpretability literature in the neuroimaging domain. Our analysis reveals the ongoing trend of interpretability practices in neuroimaging studies and identifies the gaps that should be addressed for effective human-machine collaboration in this domain. This dissertation also proposes a post hoc interpretability method, Geometrically Guided Integrated Gradients (GGIG), that leverages geometric properties of the functional space as learned by a deep learning model. With extensive experiments and quantitative validation on the MNIST and ImageNet datasets, we demonstrate that GGIG outperforms integrated gradients (IG), a popular interpretability method in the literature. As GGIG is able to identify the contours of the discriminative regions in the input space, it may be useful in various medical imaging tasks where fine-grained localization as an explanation is beneficial.
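    For reference, the sketch below implements plain integrated gradients (the baseline that GGIG is compared against), not GGIG itself; the model is a tiny stand-in, not the dissertation's pre-trainable fMRI model.

    # Minimal sketch: integrated gradients attributions for a small PyTorch model.
    import torch
    import torch.nn as nn

    def integrated_gradients(model, x, baseline=None, steps=50):
        """Attribute the model's scalar output to each input feature."""
        if baseline is None:
            baseline = torch.zeros_like(x)
        # Interpolate between baseline and input, accumulate gradients along the path.
        alphas = torch.linspace(0, 1, steps).view(-1, *([1] * x.dim()))
        path = baseline + alphas * (x - baseline)          # shape (steps, *x.shape)
        path.requires_grad_(True)
        out = model(path).sum()
        grads = torch.autograd.grad(out, path)[0]
        return (x - baseline) * grads.mean(dim=0)          # Riemann approximation of the path integral

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    x = torch.randn(8)
    print(integrated_gradients(model, x))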