Machine learning typically relies on the assumptions that training and testing
data follow the same distribution and that all data is stored centrally. In
real-world scenarios, however, these distributions may differ significantly,
and data is often scattered across different devices, organizations, or edge
nodes. It is therefore imperative to develop models that generalize effectively
to unseen distributions when the training data is spread across different
domains. In response to this challenge, there has
been a surge of interest in federated domain generalization (FDG) in recent
years. FDG combines the strengths of federated learning (FL) and domain
generalization (DG) techniques to enable multiple source domains to
collaboratively learn a model capable of directly generalizing to unseen
domains while preserving data privacy. However, generalizing a federated model
under domain shift is a technically challenging problem that has so far
received scant attention from the research community. This paper presents the
first survey of recent advances in this area. First, we trace the development
from traditional machine learning through domain adaptation and domain
generalization to FDG, and provide the corresponding formal definition (a
standard formulation is sketched below). Then, we categorize recent
methodologies into four classes:
federated domain alignment, data manipulation, learning strategies, and
aggregation optimization, and present representative algorithms for each
category in detail. Next, we introduce commonly used datasets, applications,
evaluations,
and benchmarks. Finally, we conclude this survey by outlining potential
research directions for the future.
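
As context for the formal definition mentioned above, the following is a
minimal sketch of a commonly used FDG formulation; the notation ($K$, $P_k$,
$f_\theta$, $\ell$, $P_T$) is ours, and individual surveyed works may state the
problem differently. Suppose $K$ clients each hold $n_k$ samples drawn from a
source domain distribution $P_k$, with $n = \sum_{k=1}^{K} n_k$. Federated
training learns a global model $f_\theta$ by minimizing the weighted risk over
the source domains without exchanging raw data,
$$
\min_{\theta} \; \sum_{k=1}^{K} \frac{n_k}{n} \,
\mathbb{E}_{(x,y) \sim P_k} \big[ \ell(f_\theta(x), y) \big],
$$
while the goal of FDG is a low risk
$\mathbb{E}_{(x,y) \sim P_T} \big[ \ell(f_\theta(x), y) \big]$
on an unseen target domain $P_T \notin \{P_1, \dots, P_K\}$, whose data is
never observed during training.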