Heterogeneous Domain Generalization via Domain Mixup
One of the main drawbacks of deep Convolutional Neural Networks (DCNNs) is
that they lack generalization capability. In this work, we focus on the problem
of heterogeneous domain generalization, which aims to improve generalization
across different tasks: how to learn a DCNN model from multiple source domains
such that the trained feature extractor generalizes to recognizing novel
categories in a novel target domain. To
solve this problem, we propose a novel heterogeneous domain generalization
method by mixing up samples across multiple source domains with two different
sampling strategies. Our experimental results on the Visual Decathlon
benchmark demonstrate the effectiveness of the proposed method. The code is
released at \url{https://github.com/wyf0912/MIXALL}.
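The abstract does not spell out the two sampling strategies, but the core mixup operation it builds on is standard: interpolate a pair of samples with a Beta-distributed weight, here pairing samples drawn from two different source domains. A minimal sketch (the `alpha` value and one-hot label mixing are assumptions, not details from the paper):

```python
import numpy as np

def domain_mixup(x_a, y_a, x_b, y_b, alpha=0.2, rng=None):
    """Mix one sample from domain A with one from domain B.

    lam ~ Beta(alpha, alpha) as in standard mixup; the cross-domain
    pairing is what makes it a (simplified) domain-mixup step.
    Labels y_a, y_b are assumed to be one-hot vectors.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight in [0, 1]
    x = lam * x_a + (1.0 - lam) * x_b     # convex combination of inputs
    y = lam * y_a + (1.0 - lam) * y_b     # matching soft label
    return x, y
```

In a multi-domain training loop, one would sample the pair either within a batch or across per-domain batches, which is roughly where the paper's two sampling strategies would differ.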
Federated Domain Generalization: A Survey
Machine learning typically relies on the assumption that training and testing
distributions are identical and that data is centrally stored for training and
testing. However, in real-world scenarios, distributions may differ
significantly and data is often distributed across different devices,
organizations, or edge nodes. Consequently, it is imperative to develop models
that can effectively generalize to unseen distributions where data is
distributed across different domains. In response to this challenge, there has
been a surge of interest in federated domain generalization (FDG) in recent
years. FDG combines the strengths of federated learning (FL) and domain
generalization (DG) techniques to enable multiple source domains to
collaboratively learn a model capable of directly generalizing to unseen
domains while preserving data privacy. However, generalizing the federated
model under domain shifts is a technically challenging problem that has so far
received scant attention. This paper presents the
first survey of recent advances in this area. First, we discuss the
development from traditional machine learning to domain adaptation and
domain generalization, leading to FDG, and provide the corresponding
formal definition. Then, we categorize recent methodologies into four classes:
federated domain alignment, data manipulation, learning strategies, and
aggregation optimization, and present suitable algorithms in detail for each
category. Next, we introduce commonly used datasets, applications, evaluations,
and benchmarks. Finally, we conclude this survey by providing some potential
research topics for the future.
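The survey's "aggregation optimization" category refines the step that standard federated learning already performs: each source domain trains locally and only model weights are shared, preserving data privacy. A minimal sketch of that baseline step, weighted averaging in the style of FedAvg (the function layout is illustrative, not from the survey):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-domain model weights by data-size-weighted averaging.

    client_weights: list over clients, each a list of per-layer arrays.
    client_sizes:   number of training samples each client (domain) holds.
    Returns one averaged weight list; raw data never leaves the clients.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(num_layers)
    ]
```

FDG methods in this category typically replace the fixed size-based weights with weights chosen to improve generalization to unseen domains.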