
    Interpretable Domain-Aware Learning for Neuroimage Classification

    In this thesis, we propose three interpretable domain-aware machine learning approaches to analyse large-scale neuroimaging data from multiple domains, e.g. multiple centres and/or demographic groups. We focus on two questions: how to learn general patterns across domains, and how to learn domain-specific patterns. Our first approach develops a feature-classifier adaptation framework for semi-supervised domain adaptation on brain decoding tasks. Building on this empirical study, we derive a dependence-based generalisation bound to guide the design of domain-aware learning algorithms. This theoretical result leads to the next two approaches. The covariate-independence regularisation approach learns domain-generic patterns: incorporating the hinge and least squares losses yields two covariate-independence regularised classifiers, whose superiority is validated by experimental results on brain decoding tasks for unsupervised multi-source domain adaptation. The covariate-dependent learning approach learns domain-specific patterns; for example, by employing the logistic loss it can learn gender-specific patterns of brain lateralisation. Interpretability is often essential for neuroimaging tasks, so all three domain-aware learning approaches are primarily designed to produce linear, interpretable models. These approaches offer feasible ways to learn interpretable general or specific patterns from multi-domain neuroimaging data, from which neuroscientists can gain insights. With source code released on GitHub, this work will accelerate data-driven neuroimaging studies and advance multi-source domain adaptation research.
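    The abstract does not spell out the form of the covariate-independence regulariser, so the following is only a minimal sketch of the general idea: a least-squares linear classifier whose decision scores are penalised for statistical dependence on a one-hot domain covariate. Here that dependence is measured with a linear-kernel HSIC term (an assumption for illustration, not necessarily the thesis's exact formulation), which keeps the objective quadratic in the weights and solvable in closed form. All names (`fit_ci_lsq`, `lam`, `mu`) are hypothetical.

    ```python
    import numpy as np

    def fit_ci_lsq(X, y, D, lam=1.0, mu=1.0):
        """Least-squares classifier with a covariate-independence penalty.

        Illustrative sketch only: dependence between the decision scores
        X @ w and the one-hot domain covariates D is penalised with a
        linear-kernel HSIC term, which is quadratic in w, so the
        regularised objective has a closed-form minimiser.
        """
        n, p = X.shape
        H = np.eye(n) - np.ones((n, n)) / n      # centring matrix
        Xc, Dc = H @ X, H @ D                    # centred data and covariates
        # linear-kernel HSIC(Xw, D) = ||Dc.T @ Xc @ w||^2 / (n - 1)^2
        M = Xc.T @ Dc @ Dc.T @ Xc / (n - 1) ** 2
        # argmin_w ||Xw - y||^2 + lam * ||w||^2 + mu * w.T @ M @ w
        return np.linalg.solve(X.T @ X + lam * np.eye(p) + mu * M, X.T @ y)

    # Toy check: feature 0 carries the label, feature 1 leaks the domain.
    rng = np.random.default_rng(0)
    n = 200
    d = np.repeat([0, 1], n // 2)                       # two domains
    y = np.where(rng.random(n) < 0.5, -1.0, 1.0)
    X = np.column_stack([y + rng.normal(0, 1, n),                # label signal
                         (2 * d - 1) + rng.normal(0, 0.2, n)])  # domain signal
    D = np.eye(2)[d]                                    # one-hot domain covariates
    w_plain = fit_ci_lsq(X, y, D, mu=0.0)
    w_indep = fit_ci_lsq(X, y, D, mu=1e4)
    ```

    Because the penalty is quadratic, increasing `mu` provably drives the HSIC between the scores and the domain covariate down, pushing weight off the domain-leaking feature while preserving interpretability of the linear model.
    
    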