471 research outputs found

    Learning more with less data using domain-guided machine learning: the case for health data analytics

    The United States is facing a shortage of neurologists with severe consequences: a) average wait times to see neurologists are increasing, b) patients with chronic neurological disorders are unable to receive diagnosis and care in a timely fashion, and c) neurologist burnout is increasing, leading to physical and emotional exhaustion. Present-day neurological care relies heavily on time-consuming visual review of patient data (e.g., neuroimaging and electroencephalography (EEG)) by expert neurologists who are already in short supply. As such, the healthcare system needs creative solutions that can increase the availability of neurologists for patient care. To meet this need, this dissertation develops a machine-learning (ML)-based decision support framework for expert neurologists that focuses the experts' attention on actionable information extracted from heterogeneous patient data and reduces the need for expert visual review. Specifically, this dissertation introduces a novel ML framework known as domain-guided machine learning (DGML) and demonstrates its usefulness by improving the clinical treatment of two major neurological diseases, epilepsy and Alzheimer's disease. The applications of this framework are illustrated through several studies conducted in collaboration with the Mayo Clinic, Rochester, Minnesota. Chapters 3, 4, and 5 describe the application of DGML to model transient abnormal discharges in the brain activity of epilepsy patients; these studies used intracranial EEG data from epilepsy patients to delineate seizure-generating brain regions without observing actual seizures. Chapters 6, 7, 8, and 9 describe the application of DGML to model subtle but permanent changes in brain function and anatomy, thereby enabling the early detection of chronic epilepsy and Alzheimer's disease; these studies used scalp EEG data from epilepsy patients and two population-level multimodal imaging datasets collected from elderly individuals.
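The abstract does not specify how the EEG is represented for the ML models; as a generic, hedged illustration of one common starting point for ML on EEG, the sketch below computes spectral band-power features from a synthetic signal (the function name, band limits, and sampling rate are illustrative assumptions, not details from the dissertation):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean periodogram power of `signal` within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 256
t = np.arange(fs * 4) / fs            # 4 s of synthetic signal
eeg = np.sin(2 * np.pi * 10 * t)      # a pure 10 Hz (alpha-band) rhythm

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(eeg, fs, b) for name, b in bands.items()}
# For this synthetic signal the alpha band carries essentially all the power.
```

A feature vector like `features` (one entry per band, per channel) is a typical input to downstream classifiers, though the dissertation's actual features may differ.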

    Clinical applications of magnetic resonance imaging based functional and structural connectivity

    Advances in computational neuroimaging techniques have expanded the armamentarium of imaging tools available for applications in clinical neuroscience. Non-invasive, in vivo brain MRI structural and functional network mapping has been used to identify therapeutic targets, define eloquent brain regions to preserve, and gain insight into pathological processes and treatments as well as prognostic biomarkers. These tools have real potential to inform patient-specific treatment strategies. Nevertheless, a realistic appraisal of clinical utility is needed, one that balances the growing excitement and interest in the field with the important limitations of these techniques. The quality of the raw data, the minutiae of the processing methodology, and the statistical models applied can all affect the results and their interpretation. A lack of standardization in data acquisition and processing has also led to issues with reproducibility. This limitation has had a direct impact on the reliability of these tools and, ultimately, on confidence in their clinical use. Advances in MRI technology and computational power, together with automation and standardization of processing methods, including machine learning approaches, may help address some of these issues and make these tools more reliable in clinical use. In this review, we highlight current clinical uses of MRI connectomics in the diagnosis and treatment of neurological disorders, balancing emerging applications and technologies with the limitations of connectivity analytic approaches to present an encompassing and appropriate perspective.
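One concrete form of the functional network mapping mentioned above is a correlation-based connectivity matrix. The minimal sketch below uses synthetic data in place of real, preprocessed regional fMRI time series; it illustrates the general technique, not any specific pipeline from the review:

```python
import numpy as np

# Functional connectivity is commonly estimated as the pairwise Pearson
# correlation between regional time series. Synthetic data stand in for
# preprocessed BOLD signals here.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 5, 200
ts = rng.standard_normal((n_regions, n_timepoints))
ts[1] = 0.8 * ts[0] + 0.2 * ts[1]    # make regions 0 and 1 co-fluctuate

fc = np.corrcoef(ts)                  # n_regions x n_regions connectivity matrix
# fc is symmetric with a unit diagonal; thresholding it yields a network
# whose edges can feed graph-theoretic or machine-learning analyses.
```

The reproducibility issues discussed in the review arise because every preprocessing choice upstream of this correlation (motion correction, filtering, parcellation) changes `ts`, and therefore `fc`.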

    Quantitation in MRI: application to ageing and epilepsy

    Multi-atlas propagation and label fusion techniques have recently been developed for segmenting the human brain into multiple anatomical regions. In this thesis, I investigate adaptations of these current state-of-the-art methods. The aim is to study ageing on the one hand and, on the other, temporal lobe epilepsy (TLE) as an example of a neurological disease. Global effects are a confounding factor in such anatomical analyses. Intracranial volume (ICV) is often preferred for normalizing them, as it reflects estimated maximum brain size and is hence independent of the global brain volume loss seen in ageing and disease. I describe systematic differences in ICV measures obtained at 1.5T versus 3T, and present an automated method of measuring intracranial volume, Reverse MNI Brain Masking (RBM), based on tissue probability maps in MNI standard space. I show that it is comparable to manual measurements and robust against field strength differences. Correct and robust segmentation of target brains that show gross abnormalities, such as ventriculomegaly, is important for the study of ageing and disease. We achieved this by incorporating tissue classification information into the image registration process. The best results in elderly subjects, patients with TLE, and healthy controls were achieved with a new approach, multi-atlas propagation with enhanced registration (MAPER). I then applied MAPER to the problem of automatically distinguishing patients with TLE with (TLE-HA) and without (TLE-N) hippocampal atrophy on MRI from controls, and of determining the side of seizure onset. MAPER-derived structural volumes were used in a classification step consisting of selecting a set of discriminatory structures and applying a support vector machine to the structural volumes as well as to morphological similarity information, such as volume differences obtained with spectral analysis. Accuracies were 91-100%, indicating that the method might be clinically useful. Finally, I used the methods developed in the previous chapters to investigate brain regional volume changes across the human lifespan in over 500 healthy subjects between 20 and 90 years of age, using data from three different scanners (two 1.5T, one 3T) from the IXI database. We were able to confirm several known changes, supporting the validity of the method. In addition, we describe the first multi-region, whole-brain database of normal ageing.
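MAPER's contribution is to the registration step itself and is not reproduced here; as a hedged sketch of the simpler building block it rests on, multi-atlas label fusion by majority vote can be written as:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse per-atlas segmentations: atlas_labels is an (n_atlases, n_voxels)
    integer array of candidate labels after registration to the target; the
    fused segmentation takes the most frequent label at each voxel."""
    fused = np.empty(atlas_labels.shape[1], dtype=atlas_labels.dtype)
    for v in range(atlas_labels.shape[1]):
        values, counts = np.unique(atlas_labels[:, v], return_counts=True)
        fused[v] = values[np.argmax(counts)]
    return fused

# Three toy "atlases" voting over four voxels
labels = np.array([[1, 2, 0, 3],
                   [1, 2, 1, 3],
                   [1, 0, 1, 2]])
print(majority_vote(labels))  # → [1 2 1 3]
```

In practice the atlases are first warped to the target brain; the thesis's point is that the quality of that warp (enhanced in MAPER with tissue-class information) dominates the quality of the fused result.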

    Incorporating radiomics into clinical trials: expert consensus on considerations for data-driven compared to biologically-driven quantitative biomarkers

    Existing Quantitative Imaging Biomarkers (QIBs) are associated with known biological tissue characteristics and follow a well-understood path of technical, biological and clinical validation before incorporation into clinical trials. In radiomics, novel data-driven processes extract numerous visually imperceptible statistical features from imaging data with no a priori assumptions about their correlation with biological processes. The selection of relevant features (the radiomic signature) and their incorporation into clinical trials therefore require additional considerations to ensure meaningful imaging endpoints. Moreover, the number of radiomic features tested means that power calculations would yield sample sizes impossible to achieve within clinical trials. This article examines how the process of standardising and validating data-driven imaging biomarkers differs from that for biomarkers based on biological associations. Radiomic signatures are best developed initially on datasets that represent diversity of acquisition protocols as well as diversity of disease and of normal findings, rather than within clinical trials with standardised and optimised protocols, as the latter would risk the selected radiomic features being linked to the imaging process rather than to the pathology. Normalisation through discretisation and feature harmonisation are essential pre-processing steps. Biological correlation may be performed after the technical and clinical validity of a radiomic signature is established, but is not mandatory. Feature selection may be part of discovery within a radiomics-specific trial or represent exploratory endpoints within an established trial; a previously validated radiomic signature may even be used as a primary or secondary endpoint, particularly if associations are demonstrated with the specific biological processes and pathways being targeted within clinical trials.
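As an illustration of the discretisation step named above, a fixed-bin-width scheme maps continuous intensities to integer bins before texture features are computed; the bin width of 25 below is an arbitrary choice for the example, not a value from the article:

```python
import numpy as np

def discretise(image, bin_width=25.0):
    """Fixed-bin-width intensity discretisation: map intensities to integer
    bin indices starting at 1, so the same intensity always lands in the
    same absolute bin regardless of the ROI's own range."""
    offset = int(np.floor(image.min() / bin_width))
    return np.floor(image / bin_width).astype(int) - offset + 1

roi = np.array([12.0, 30.0, 55.0, 80.0, 99.0])
print(discretise(roi))  # → [1 2 3 4 4]
```

Fixing the bin width (rather than the bin count) keeps the intensity resolution comparable across patients and scanners, which is exactly the harmonisation concern the consensus raises.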

    From Research to Diagnostic Application of Raman Spectroscopy in Neurosciences: Past and Perspectives

    In recent years, Raman spectroscopy has been applied increasingly often to address research questions in neuroscience. As a non-destructive technique based on the inelastic scattering of photons, it can be used for a wide spectrum of applications, including neurooncological tumor diagnostics and the analysis of misfolded protein aggregates involved in neurodegenerative diseases. Progress in the technical development of the method allows increasingly detailed analysis of biological samples and may therefore open new fields of application. The goal of our review is to provide an introduction to Raman scattering, its practical usage, and commonly associated pitfalls. Furthermore, the intraoperative assessment of tumor recurrence using Raman-based histology images, as well as the search for non-invasive methods of diagnosis in neurodegenerative diseases, is discussed. Some of the applications mentioned here may serve as a basis, and possibly set the course, for future use of the technique in clinical practice. Covering a broad range of content, this overview can serve not only as a quick and accessible reference tool but also provide more in-depth information on specific subtopics of interest.
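As background to the inelastic scattering the review builds on: Raman spectra are conventionally plotted against the Raman shift, the wavenumber difference between excitation and scattered light. A small numeric sketch (the wavelengths are chosen only for illustration):

```python
def raman_shift_cm1(lambda_exc_nm, lambda_scat_nm):
    """Raman shift in cm^-1 from excitation and (Stokes) scattered
    wavelengths given in nm: shift = 1/lambda_exc - 1/lambda_scat,
    converted from nm^-1 to cm^-1 by the factor 1e7."""
    return (1.0 / lambda_exc_nm - 1.0 / lambda_scat_nm) * 1e7

# 785 nm excitation (common for biological tissue, to limit fluorescence)
# scattered to 891 nm corresponds to a shift of roughly 1515 cm^-1.
shift = raman_shift_cm1(785.0, 891.0)
```

Because the shift, not the absolute wavelength, encodes the molecular vibration, spectra acquired with different lasers remain comparable on the same wavenumber axis.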

    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Medical big data is not only enormous in size but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), and Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging and vital field that addresses this challenge, aiming to process and analyze complex, diverse and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence of the data (such as the genetic essence underlying medical imaging and clinical symptoms). Multimodal data fusion thus benefits a wide range of quantitative medical applications, including personalized patient care, optimized treatment planning, and preventive public health. Though there has been extensive research on computational approaches for multimodal fusion, three major challenges remain in quantitative medical applications:
    • Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, which hinder the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension-reduction algorithms are required to alleviate the "curse of dimensionality" and to satisfy the criteria for discovering interpretable, relevant, non-redundant and generalizable multimodal biomarkers.
    • Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion guided by label supervision, methods that explicitly explore inter-modal relationships in medical applications are lacking. Unsupervised multimodal learning can mine inter-modal relationships, reduce reliance on labor-intensive labeled data, and explore potentially undiscovered biomarkers; however, mining discriminative information without label supervision remains a challenge. Furthermore, interpreting complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge, one that hinders the exploration of multimodal interactions in disease mechanisms.
    • Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions using either feature engineering or deep learning has been investigated in recent years, both approaches neglect inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet missing from current feature engineering and deep learning methods. Incorporating domain knowledge with the knowledge distilled from multi-focus regions is a further challenge in knowledge-level fusion.
    To address these three challenges, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, our major contributions are:
    • To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature-selection criteria of representativeness, robustness, discriminability, and non-redundancy are addressed by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and nomograms are employed to further enhance feature interpretability in machine learning models.
    • To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module deciphers the complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.
    • To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, the DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is then tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into an Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
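The deep, interpretable variants proposed in the thesis are not reproduced here; as a hedged sketch of the classical linear CCA they build on, the canonical correlations between two data blocks equal the singular values of the product of orthonormal bases of the centred blocks:

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """First canonical correlation between data matrices X (n x p) and
    Y (n x q): centre each block, take orthonormal bases via QR, then the
    singular values of Qx^T Qy are the canonical correlations."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(np.clip(s[0], 0.0, 1.0))

# Synthetic "imaging" and "non-imaging" features sharing one latent factor,
# so the first canonical correlation should be close to 1.
rng = np.random.default_rng(42)
n = 200
latent = rng.standard_normal(n)
X = np.column_stack([latent + 0.1 * rng.standard_normal(n) for _ in range(4)])
Y = np.column_stack([latent + 0.1 * rng.standard_normal(n) for _ in range(3)])
r = first_canonical_correlation(X, Y)
```

The thesis's contribution is to replace these fixed linear projections with learned deep representations and to add loss terms and an interpretation module; this sketch only shows the consensus-seeking core that those extensions share.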