The most successful multi-domain text classification (MDTC) approaches adopt
the shared-private paradigm, in which domain-specific features complement the
learning of domain-invariant features, and use adversarial training to align
the marginal feature distributions across domains.
Nevertheless, these methods face two main challenges: (1) neglecting
class-aware information during adversarial alignment risks misaligning
features of different classes across domains; (2) the limited labeled data
available across domains is insufficient to give the model adequate
discriminative capacity.
To tackle these issues, we propose Regularized Conditional Alignment (RCA),
which aligns the joint distributions over domains and classes, thereby
matching features of the same class across domains and strengthening the
discriminability of the learned features.
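One common way to realize such class-aware alignment, sketched here under assumed notation (the paper's exact construction may differ), is CDAN-style conditioning: the discriminator receives features joined with class information, using ground-truth labels for labeled samples and classifier predictions $\hat{y} = C(F_s(x))$ as soft pseudo-labels for unlabeled ones:
\[
\min_{F_s}\,\max_{D}\ \mathcal{L}_{\mathrm{cond}} \;=\; \sum_{i=1}^{M}\,\mathbb{E}_{x\sim\mathbb{P}_i}\big[\log D\!\big(F_s(x)\otimes\hat{y}\big)_i\big],
\]
where $\otimes$ denotes the outer product between the feature vector and the class-probability vector, so that features are pulled together only when their class information also agrees.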
Moreover, we employ entropy minimization and virtual adversarial training to
reduce the uncertainty of predictions on unlabeled data and to improve the
model's robustness.
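For concreteness, these two regularizers typically take the following standard forms (again a sketch with assumed notation): with $p(x) = C(F_s(x))$ the predicted class distribution over $K$ classes and $\mathbb{P}_u$ the unlabeled data,
\[
\mathcal{L}_{\mathrm{ent}} \;=\; \mathbb{E}_{x\sim\mathbb{P}_u}\Big[-\sum_{k=1}^{K} p_k(x)\log p_k(x)\Big],
\qquad
\mathcal{L}_{\mathrm{vat}} \;=\; \mathbb{E}_{x\sim\mathbb{P}_u}\Big[\max_{\|r\|_2\le\epsilon}\mathrm{KL}\big(p(x)\,\big\|\,p(x+r)\big)\Big],
\]
where $\epsilon$ bounds the adversarial perturbation $r$; minimizing $\mathcal{L}_{\mathrm{ent}}$ sharpens predictions on unlabeled data, and minimizing $\mathcal{L}_{\mathrm{vat}}$ makes them locally smooth.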
Empirical results on two benchmark datasets demonstrate that our RCA approach
outperforms
state-of-the-art MDTC techniques.