Federated Learning (FL) facilitates distributed model development by
aggregating multiple confidential data sources. The information transfer among
clients can be compromised by distributional differences, i.e., by non-i.i.d.
data. A particularly challenging scenario is the federated model adaptation to
a target client without access to annotated data. We propose Federated
Adversarial Cross Training (FACT), which uses the implicit domain differences
between source clients to identify domain shifts in the target domain. In each
round of FL, FACT cross-initializes a pair of source clients to generate
domain-specialized representations, which are then used as a direct adversary
to learn a domain-invariant data representation. We empirically show that FACT
outperforms state-of-the-art federated, non-federated and source-free domain
adaptation models on three popular multi-source-single-target benchmarks, and
state-of-the-art Unsupervised Domain Adaptation (UDA) models on
single-source-single-target experiments. We further study FACT's behavior with
respect to communication restrictions and the number of participating clients.
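
The round structure sketched in the abstract can be illustrated with a toy
simulation. The snippet below is a minimal sketch under strong simplifying
assumptions, not the authors' implementation: linear least-squares models stand
in for neural networks, "cross initialization" is reduced to both source
clients starting from the shared global weights, and the adversarial step is
approximated by a single gradient step that shrinks the output discrepancy of
the two domain-specialized models on unlabeled target data. All names
(`local_train`, `w_global`, `disc_grad`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, steps=20):
    # Stand-in for local client training: plain gradient descent
    # on a least-squares objective.
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

# Two source clients with shifted input distributions (non-i.i.d. data).
X1 = rng.normal(0.0, 1.0, (64, 3))
X2 = rng.normal(1.0, 1.0, (64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y1, y2 = X1 @ true_w, X2 @ true_w

w_global = np.zeros(3)
for _ in range(5):  # federated rounds
    # Each source client specializes the shared model on its own domain.
    w1 = local_train(w_global, X1, y1)
    w2 = local_train(w_global, X2, y2)
    # The pair of domain-specialized models acts as a direct adversary:
    # their output discrepancy on unlabeled target data signals domain
    # shift, and the aggregate is nudged to reduce it.
    X_t = rng.normal(0.5, 1.0, (32, 3))  # unlabeled target samples
    disc_grad = X_t.T @ (X_t @ (w1 - w2)) / len(X_t)
    w_global = 0.5 * (w1 + w2) - 0.1 * disc_grad
```

In this linear toy setting both specialized models converge toward the shared
ground truth, so the discrepancy term vanishes; in the actual method the
adversarial signal instead shapes a shared, domain-invariant feature
representation.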