2,704 research outputs found
Co-regularized Alignment for Unsupervised Domain Adaptation
Deep neural networks, trained with large amounts of labeled data, can fail to
generalize well when tested on examples from a \emph{target domain} whose
distribution differs from that of the training data, referred to as the
\emph{source domain}. It can be expensive or even infeasible to obtain the
required amount of labeled data in all possible domains. Unsupervised domain adaptation
sets out to address this problem, aiming to learn a good predictive model for
the target domain using labeled examples from the source domain but only
unlabeled examples from the target domain. Domain alignment approaches this
problem by matching the source and target feature distributions, and has been
used as a key component in many state-of-the-art domain adaptation methods.
However, matching the marginal feature distributions does not guarantee that
the corresponding class conditional distributions will be aligned across the
two domains. We propose co-regularized domain alignment for unsupervised domain
adaptation, which constructs multiple diverse feature spaces and aligns the
source and target distributions in each of them individually, while encouraging
the alignments to agree with each other on the class predictions for the
unlabeled target examples. The proposed method is generic and can be used to
improve any domain adaptation method that uses domain alignment. We instantiate
it in the context of a recent state-of-the-art method and observe that it
provides significant performance improvements on several domain adaptation
benchmarks.
Comment: NIPS 2018 accepted version
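A minimal sketch of the co-regularization idea described in this abstract, assuming PyTorch: two branches each learn their own feature space, each branch matches source and target feature statistics (simple mean matching stands in here for whatever alignment objective the base method uses), and an agreement penalty couples the branches' predictions on unlabeled target examples. The architectures, loss weights, and toy data are illustrative assumptions, not the paper's actual instantiation.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    """One of the diverse feature spaces: a feature extractor plus a classifier."""
    def __init__(self, in_dim=256, feat_dim=64, num_classes=10):
        super().__init__()
        self.extract = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classify = nn.Linear(feat_dim, num_classes)

def co_regularized_loss(branches, x_src, y_src, x_tgt,
                        lam_align=0.1, lam_agree=0.1):
    total, tgt_probs = 0.0, []
    for b in branches:
        f_src, f_tgt = b.extract(x_src), b.extract(x_tgt)
        # supervised loss on labeled source examples
        total = total + F.cross_entropy(b.classify(f_src), y_src)
        # align marginal feature distributions (mean matching as a simple stand-in
        # for the alignment term of the underlying domain adaptation method)
        total = total + lam_align * (f_src.mean(0) - f_tgt.mean(0)).pow(2).sum()
        tgt_probs.append(F.softmax(b.classify(f_tgt), dim=1))
    # co-regularizer: branches should agree on class predictions for target examples
    # (a diversity term between the branches' feature spaces is omitted here)
    total = total + lam_agree * F.mse_loss(tgt_probs[0], tgt_probs[1])
    return total

# toy usage with random tensors standing in for source/target mini-batches
branches = [Branch(), Branch()]
x_src, y_src = torch.randn(8, 256), torch.randint(0, 10, (8,))
x_tgt = torch.randn(8, 256)
co_regularized_loss(branches, x_src, y_src, x_tgt).backward()
```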
EEG-Based Emotion Recognition Using Regularized Graph Neural Networks
Electroencephalography (EEG) measures the neuronal activities in different
brain regions via electrodes. Many existing studies on EEG-based emotion
recognition do not fully exploit the topology of EEG channels. In this paper,
we propose a regularized graph neural network (RGNN) for EEG-based emotion
recognition. RGNN considers the biological topology among different brain
regions to capture both local and global relations among different EEG
channels. Specifically, we model the inter-channel relations in EEG signals via
an adjacency matrix in a graph neural network, where the connectivity and
sparseness of the adjacency matrix are inspired by neuroscience theories of
human brain organization. In addition, we propose two regularizers, namely
node-wise domain adversarial training (NodeDAT) and emotion-aware distribution
learning (EmotionDL), to better handle cross-subject EEG variations and noisy
labels, respectively. Extensive experiments on two public datasets, SEED and
SEED-IV, demonstrate the superior performance of our model over
state-of-the-art models in most experimental settings. Moreover, ablation
studies show that the proposed adjacency matrix and two regularizers contribute
consistent and significant gains to the performance of our RGNN model. Finally,
investigations of the neuronal activities reveal important brain regions and
inter-channel relations for EEG-based emotion recognition.
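As a rough illustration of the channel-graph idea, the following PyTorch sketch propagates per-channel EEG features over a learnable adjacency matrix before classification. The channel count, the five per-channel features, the three emotion classes, and the freely learned dense adjacency are assumptions made for the sketch; RGNN itself constrains the adjacency using brain topology and adds the NodeDAT and EmotionDL regularizers, which are not shown.

```python
# Illustrative sketch only; not the RGNN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelGraphNet(nn.Module):
    def __init__(self, num_channels=62, in_feats=5, hidden=32, num_classes=3):
        super().__init__()
        # adjacency matrix encoding inter-channel relations; RGNN derives its
        # connectivity and sparseness from neuroscience, here it is freely learned
        self.adj = nn.Parameter(torch.eye(num_channels))
        self.lin1 = nn.Linear(in_feats, hidden)
        self.lin2 = nn.Linear(hidden, num_classes)

    def forward(self, x):                      # x: (batch, channels, in_feats)
        a = F.softmax(self.adj, dim=1)         # row-normalized adjacency
        h = F.relu(a @ self.lin1(x))           # propagate features along graph edges
        return self.lin2(h.mean(dim=1))        # pool over channels, then classify

# toy usage: 4 samples, 62 channels, 5 features per channel
model = ChannelGraphNet()
logits = model(torch.randn(4, 62, 5))
print(logits.shape)                            # torch.Size([4, 3])
```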
Cross Language Text Classification via Subspace Co-Regularized Multi-View Learning
In many multilingual text classification problems, the documents in different
languages often share the same set of categories. To reduce the labeling cost
of training a classification model for each individual language, it is
important to transfer the label knowledge gained from one language to another
language by conducting cross language classification. In this paper we develop
a novel subspace co-regularized multi-view learning method for cross language
text classification. This method is built on parallel corpora produced by
machine translation. It jointly minimizes the training error of each classifier
in each language while penalizing the distance between the subspace
representations of parallel documents. Our empirical study on a large set of
cross language text classification tasks shows that the proposed method
consistently outperforms a number of inductive methods, domain adaptation
methods, and multi-view learning methods.
Comment: Appears in Proceedings of the 29th International Conference on Machine Learning (ICML 2012)
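A minimal sketch of the subspace co-regularization objective as described in the abstract, assuming PyTorch: each language view has its own subspace projection and classifier, the supervised loss is taken in both views (labels shared across views via the machine-translated parallel corpus), and a penalty pulls the subspace representations of parallel documents together. The dimensions, the squared-distance penalty, and the toy data are illustrative assumptions, not the method's exact formulation.

```python
# Illustrative sketch only; not the authors' formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageView(nn.Module):
    """One language view: a subspace projection plus a classifier on top of it."""
    def __init__(self, vocab_dim=1000, subspace_dim=50, num_classes=4):
        super().__init__()
        self.project = nn.Linear(vocab_dim, subspace_dim)
        self.classify = nn.Linear(subspace_dim, num_classes)

def subspace_coreg_loss(view_a, view_b, x_a, y_a, x_b, y_b,
                        x_par_a, x_par_b, lam=0.1):
    # training error of each language's classifier on its (translated) labeled data
    loss = F.cross_entropy(view_a.classify(view_a.project(x_a)), y_a) \
         + F.cross_entropy(view_b.classify(view_b.project(x_b)), y_b)
    # co-regularizer: parallel documents should have nearby subspace representations
    loss = loss + lam * F.mse_loss(view_a.project(x_par_a), view_b.project(x_par_b))
    return loss

# toy usage: bag-of-words vectors; the translated copies share the original labels
view_a, view_b = LanguageView(), LanguageView()
x_a, y_a = torch.randn(8, 1000), torch.randint(0, 4, (8,))
x_b, y_b = torch.randn(8, 1000), y_a
x_par_a, x_par_b = torch.randn(16, 1000), torch.randn(16, 1000)
subspace_coreg_loss(view_a, view_b, x_a, y_a, x_b, y_b, x_par_a, x_par_b).backward()
```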