Domain generalization (DG) aims to develop robust models that achieve strong
generalizability while preserving excellent discriminability.
Nonetheless, mainstream DG techniques improve feature generalizability by
learning domain-invariant representations, inadvertently overlooking feature
discriminability. On the one hand, attaining generalizability and
discriminability simultaneously is a complex challenge, as the two objectives
can be inherently contradictory. This conflict becomes particularly pronounced
when domain-invariant features exhibit reduced discriminability owing to the
inclusion of unstable factors, \emph{i.e.,} spurious correlations.
On the other hand, prevailing domain-invariant methods mostly perform
category-level alignment, which is susceptible to discarding indispensable
features with substantial generalizability and to narrowing intra-class
variations.
To overcome these obstacles, we rethink DG from a new perspective that endows
features with strong discriminability and robust generalizability
simultaneously, and present a novel framework, namely, Discriminative
Microscopic Distribution Alignment (DMDA). DMDA incorporates two core
components: Selective Channel Pruning~(SCP) and Micro-level Distribution
Alignment (MDA).
Concretely, SCP curtails redundancy within the network by pruning channels,
prioritizing stable attributes conducive to accurate classification; this
alleviates the adverse effect of spurious domain invariance and amplifies
feature discriminability, as sketched below.
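A minimal sketch of how such channel pruning could work, assuming a PyTorch
model and using BatchNorm scaling factors as a channel-importance proxy (the
abstract does not specify SCP's actual criterion; the function name and
\texttt{keep\_ratio} parameter are hypothetical):
\begin{verbatim}
import torch
import torch.nn as nn

def selective_channel_pruning(model: nn.Module, keep_ratio: float = 0.7):
    # Hypothetical sketch: rank channels by |gamma| of each BatchNorm
    # layer (an assumed importance proxy, not the paper's criterion) and
    # zero out the least important ones so only stable channels remain.
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            importance = module.weight.detach().abs()         # per-channel score
            k = max(1, int(keep_ratio * importance.numel()))  # channels kept
            threshold = importance.topk(k).values.min()
            mask = (importance >= threshold).float()
            # Mask scale and shift so pruned channels output zero.
            module.weight.data.mul_(mask)
            module.bias.data.mul_(mask)
    return model
\end{verbatim}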
In addition, MDA enforces micro-level alignment within each class, going
beyond mere category-level alignment; this strategy preserves sufficient
generalizable features and accommodates within-class variations, as
illustrated below.
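One hedged reading of such an objective, aligning full within-class feature
distributions across source domains rather than only class centroids (the
divergence $D$, feature extractor $f$, and domain indices $d, d'$ are assumed
notation, not the paper's definition), is
\[
\mathcal{L}_{\mathrm{MDA}} = \sum_{c=1}^{C} \sum_{d \neq d'}
D\big( P_{d}(f(x) \mid y = c),\; P_{d'}(f(x) \mid y = c) \big),
\]
where $D$ (\emph{e.g.,} maximum mean discrepancy) is evaluated on individual
samples of class $c$ from domains $d$ and $d'$, so that micro-level structure
beyond class means is matched.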
Extensive experiments on four benchmark datasets corroborate the efficacy of
our method.