Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has
been actively studied in recent years. Compared with single-source unsupervised
domain adaptation (SUDA), domain shift in MUDA exists not only between the
source and target domains but also among multiple source domains. Most existing
MUDA algorithms focus on extracting domain-invariant representations among all
domains, whereas the task-specific decision boundaries among classes are largely
neglected. In this paper, we propose an end-to-end trainable network that
exploits domain Consistency Regularization for unsupervised Multi-source domain
Adaptive classification (CRMA). CRMA aligns not only the distributions of each
pair of source and target domains but also those of all domains. For each pair
of source and target domains, we employ an intra-domain consistency to
regularize a pair of domain-specific classifiers to achieve intra-domain
alignment. In addition, we design an inter-domain consistency that targets
joint inter-domain alignment among all domains. To address different
similarities between multiple source domains and the target domain, we design
an authorization strategy that assigns different authorities to domain-specific
classifiers adaptively for optimal pseudo label prediction and self-training.
Extensive experiments show that CRMA tackles unsupervised domain adaptation
effectively under a multi-source setup and achieves superior adaptation
consistently across multiple MUDA datasets.
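The intra-domain consistency described in the abstract can be sketched minimally. The function names and the use of an L1 discrepancy between classifier outputs are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def intra_domain_consistency(logits_c1, logits_c2):
    """L1 discrepancy between the predictions of a pair of
    domain-specific classifiers on the same target batch.
    Minimizing this pushes the two decision boundaries to agree,
    regularizing them away from ambiguous target samples."""
    p1, p2 = softmax(logits_c1), softmax(logits_c2)
    return float(np.mean(np.abs(p1 - p2)))
```

For identical logits the discrepancy is zero; during training the loss would be minimized jointly with the supervised source losses.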
Aligning Domain-Specific Distribution and Classifier for Cross-Domain Classification from Multiple Sources
While Unsupervised Domain Adaptation (UDA) algorithms, i.e., settings in which labeled data are available only in the source domains, have been actively studied in recent years, most algorithms and theoretical results focus on Single-source Unsupervised Domain Adaptation (SUDA). In practical scenarios, however, labeled data are typically collected from multiple diverse sources, which may differ not only from the target domain but also from each other. Thus, adaptation from multiple sources should not be modeled in the same way. Recent deep-learning-based Multi-source Unsupervised Domain Adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often very hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering domain-specific decision boundaries between classes. To solve these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distributions of each pair of source and target domains in multiple domain-specific feature spaces, but also aligns the outputs of the classifiers by utilizing the domain-specific decision boundaries. Extensive experiments demonstrate that our method achieves remarkable results on popular benchmark datasets for image classification.
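The two alignment stages can be illustrated with a minimal sketch, assuming a linear-kernel MMD for the per-source distribution alignment and a mean pairwise L1 distance for the classifier-output alignment; both choices are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def linear_mmd(xs, xt):
    """Squared distance between feature means: a linear-kernel MMD
    estimate, computed separately per source domain to align that
    source's feature space with the target's (stage one)."""
    return float(np.sum((xs.mean(axis=0) - xt.mean(axis=0)) ** 2))

def classifier_discrepancy(probs_list):
    """Mean pairwise L1 distance between the target-domain outputs
    of the per-source classifiers, aligning their decision
    boundaries near the target data (stage two)."""
    n = len(probs_list)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.mean(np.abs(probs_list[i] - probs_list[j]))
            pairs += 1
    return total / pairs
```

In training, each source domain would contribute its own MMD term, while the discrepancy term couples the classifiers' predictions on unlabeled target samples.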
Enhancing the Discovery of Neural Representations: Integrating Task-Relevant Dimensionality Reduction and Domain Adaptation
In human neuroscience, machine learning models can be used to discover lower-dimensional neural representations relevant to behavior. However, these models often require large datasets and can overfit given the small sample sizes typical in neuroimaging. To address this, we developed the Task-Relevant Autoencoder via Classifier Enhancement (TRACE) to extract behaviorally relevant representations. When tested against standard autoencoders and principal component analysis, TRACE showed up to 12% higher classification accuracy and a 56% improvement in discovering task-relevant representations using fMRI data from the ventral temporal cortex (VTC) of 59 subjects, highlighting its potential for analyzing behavioral data.
Applications of machine learning models also extend to predictive modeling and pattern discovery in modern biology. However, these models often fail to generalize across datasets because of statistical differences between them. The same issue arises in neuroscience, where data are collected across laboratories using different experimental setups. Domain adaptation can align statistical distributions across datasets, enabling model transfer and mitigating overfitting. In the second chapter, we discuss domain adaptation in the context of small-scale, heterogeneous biological data, outlining its benefits, challenges, and key methodologies, and advocate for integrating domain adaptation techniques, with further customized developments, into computational biology.
Building on these insights, we used domain adaptation to study brain-region interactions during visual processing. We examine the ventral temporal cortex (VTC) and prefrontal cortex (PFC) using Domain Adaptive Task-Relevant Autoencoding via Classifier Enhancement (DATRACE) to explore shared neural representations. DATRACE leverages domain adaptation techniques within an encoder-decoder architecture to predict voxel activities from a shared latent space, ensuring relevance for object recognition tasks.
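The core TRACE idea, an autoencoder whose latent space is pulled toward task-relevant dimensions by a classifier head, can be captured as a combined objective. The function name, the mean-squared reconstruction term, and the weighting parameter `alpha` are hypothetical, offered only as a sketch of the stated design:

```python
import numpy as np

def trace_objective(x, x_recon, logits, labels, alpha=1.0):
    """Sketch of a TRACE-style combined loss: reconstruction error
    keeps the latent space faithful to the fMRI data, while
    cross-entropy from a classifier head on the latent code pulls
    the representation toward behaviorally relevant dimensions.
    `alpha` (hypothetical name) weights the classification term."""
    recon = np.mean((x - x_recon) ** 2)
    # Stable log-softmax for the cross-entropy term.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -np.mean(logp[np.arange(len(labels)), labels])
    return float(recon + alpha * ce)
```

A standard autoencoder corresponds to `alpha = 0`; increasing `alpha` trades reconstruction fidelity for task relevance.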
Preliminary results indicate that the shared representations capture similar object categories in both VTC and PFC. We computed the representational dissimilarity matrix (RDM) of the shared representation between VTC and PFC and contrasted it with the RDM obtained from the low-dimensional representation of VTC. Our results suggest that the information shared with PFC is very similar to that encoded in VTC. Additionally, feature perturbation analysis suggests the need for further studies to reveal the semantic interpretations of the shared dimensions in these brain regions. This integrated approach underscores the potential of advanced machine learning techniques in both neuroscience and biology.
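The RDM comparison above follows a standard representational similarity analysis recipe, which can be sketched as follows (correlation distance for the RDM and a Spearman comparison of upper triangles are conventional choices; the abstract does not state which variants were used):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of every pair of conditions.
    `patterns` has shape (n_conditions, n_voxels)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles of two RDMs,
    a standard way to ask whether two regions (here, e.g., VTC and
    PFC) carry a similar representational geometry. Rank transform
    assumes no ties, which holds for continuous dissimilarities."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])
```

Only the upper triangle is compared because an RDM is symmetric with a zero diagonal, so the remaining entries carry no extra information.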