Generative-Discriminative Complementary Learning
The majority of state-of-the-art deep learning methods are discriminative
approaches, which model the conditional distribution of labels given input
features. The success of such approaches depends heavily on high-quality
labeled instances, which are not easy to obtain, especially as the number of
candidate classes increases. In this paper, we study the complementary learning
problem. Unlike ordinary labels, complementary labels are easy to obtain
because an annotator only needs to provide a yes/no answer to a randomly chosen
candidate class for each instance. We propose a generative-discriminative
complementary learning method that estimates the ordinary labels by modeling
both the conditional (discriminative) and instance (generative) distributions.
Our method, which we call the Complementary Conditional GAN (CCGAN), improves the
accuracy of predicting ordinary labels and can generate high-quality instances
despite the weak supervision. In addition to extensive empirical studies,
we also show theoretically that our model can recover the true conditional
distribution from the complementarily-labeled data.
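Under the common assumption that a complementary label is drawn uniformly from the K-1 non-true classes, the likelihood of observing complementary label c is (1 - p(c|x))/(K-1). A minimal numpy sketch of the corresponding negative log-likelihood (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def complementary_nll(logits, comp_labels, num_classes):
    """Negative log-likelihood of complementary labels under the uniform
    assumption: p(c is complementary | x) = (1 - p(c | x)) / (K - 1)."""
    p = softmax(logits)
    p_comp = (1.0 - p[np.arange(len(p)), comp_labels]) / (num_classes - 1)
    return -np.log(np.clip(p_comp, 1e-12, None)).mean()
```

Minimizing this loss pushes probability mass away from each instance's complementary class, which is what lets an ordinary-label classifier be recovered from complementary supervision alone.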
Unsupervised, Efficient and Semantic Expertise Retrieval
We introduce an unsupervised discriminative model for the task of retrieving
experts in online document collections. We exclusively employ textual evidence
and avoid explicit feature engineering by learning distributed word
representations in an unsupervised way. We compare our model to
state-of-the-art unsupervised statistical vector space and probabilistic
generative approaches. Our proposed log-linear model achieves the retrieval
performance levels of state-of-the-art document-centric methods with the low
inference cost of so-called profile-centric approaches. It yields a
statistically significant improved ranking over vector space and generative
models in most cases, matching the performance of supervised methods on various
benchmarks. That is, by using solely text we can do as well as methods that
work with external evidence and/or relevance feedback. A contrastive analysis
of rankings produced by discriminative and generative approaches shows that
they have complementary strengths due to the ability of the unsupervised
discriminative model to perform semantic matching.Comment: WWW2016, Proceedings of the 25th International Conference on World
Wide Web. 201
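One way to read the log-linear model described above: each candidate expert is scored by the log-likelihood of the query words under a softmax over learned word and expert representations. A minimal numpy sketch, with all names and shapes assumed for illustration (the paper's actual parameterisation may differ):

```python
import numpy as np

def expert_scores(query_ids, word_emb, expert_emb):
    """Log-linear relevance: p(w | e) proportional to exp(word_emb[w] . expert_emb[e]),
    normalised over the vocabulary, summed in log space over the query words."""
    logits = word_emb @ expert_emb.T                       # (V, E) unnormalised
    log_p = logits - np.logaddexp.reduce(logits, axis=0)   # log-softmax over vocab
    return log_p[query_ids].sum(axis=0)                    # (E,) per-expert score
```

Because scoring is a single matrix product over precomputed expert representations, inference stays cheap in the profile-centric style, while the learned embeddings supply the semantic matching the abstract credits the discriminative model with.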
Discriminative Tandem Features for HMM-based EEG Classification
We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features for the conventional HMM system. Two sets of tandem features are derived from the linear discriminant analysis (LDA) projection output and the multilayer perceptron (MLP) class-posterior probabilities, before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, with significant relative improvements of 6.2% and 11.2% for the LDA and MLP features respectively. We also explore the portability of these features across different subjects.
Index Terms: artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
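The tandem construction above can be sketched as follows, with a hand-rolled Fisher LDA direction and a tiny gradient-trained MLP standing in for the paper's discriminatively trained classifiers (all function names and hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def lda_direction(X, y):
    """Fisher discriminant direction for a two-class problem."""
    mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)       # within-class scatter
    return np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)

def mlp_posteriors(X, y, hidden=8, steps=200, lr=0.1, seed=0):
    """One-hidden-layer MLP trained by gradient descent on cross-entropy;
    returns class-posterior estimates [p(y=0|x), p(y=1|x)]."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    w2 = rng.normal(scale=0.5, size=hidden)
    for _ in range(steps):
        h = np.tanh(X @ W1)
        p = 1.0 / (1.0 + np.exp(-(h @ w2)))
        g = (p - y) / len(y)                              # dLoss/dlogit
        w2 -= lr * (h.T @ g)
        W1 -= lr * (X.T @ (np.outer(g, w2) * (1 - h**2)))
    p1 = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ w2)))
    return np.column_stack([1 - p1, p1])

def tandem_features(X, y):
    """AR features ++ LDA projection ++ MLP posteriors, per the tandem setup."""
    proj = (X @ lda_direction(X, y))[:, None]
    return np.hstack([X, proj, mlp_posteriors(X, y)])
```

For d-dimensional AR features on a two-class task, the tandem vector has d + 1 + 2 dimensions (one LDA projection, two posteriors); these frame-level vectors would then be fed to the HMM in place of the raw AR features.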