Multi-source domain adaptation (MSDA) aims to predict labels for target-domain
data, in the setting where data from multiple source domains are
labelled and data from the target domain are unlabelled. Most methods for this
task focus on learning invariant representations across domains. However, their
success relies heavily on the assumption that the label distribution remains
consistent across domains, which may not hold in many real-world problems.
In this paper, we propose a new and more flexible assumption, termed
\textit{latent covariate shift}, where a latent content variable $z_c$
and a latent style variable $z_s$ are introduced in the generative
process, with the marginal distribution of $z_c$ changing across
domains and the conditional distribution of the label given $z_c$
remaining invariant across domains. We show that although (completely)
identifying the proposed latent causal model is challenging, the latent content
variable can be identified up to scaling by exploiting its dependence on labels
from source domains, together with the identifiability conditions of nonlinear
ICA. This motivates us to propose a novel method for MSDA, which learns the
invariant label distribution conditional on the latent content variable,
instead of learning invariant representations. Empirical evaluation on
simulated and real-world data demonstrates the effectiveness of the proposed method.
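In symbols, writing $u$ for the domain index ($u$ is notation introduced here
for illustration, not fixed by the text above), latent covariate shift posits
\[
  p(z_c \mid u) \neq p(z_c \mid u') \ \text{for some domains } u \neq u',
  \qquad
  p(y \mid z_c, u) = p(y \mid z_c) \ \text{for all } u,
\]
mirroring classic covariate shift, where $p(x)$ changes across domains while
$p(y \mid x)$ stays invariant, but stated at the level of the latent content
variable rather than the observation.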