Semi-Supervised Learning by Augmented Distribution Alignment
In this work, we propose a simple yet effective semi-supervised learning
approach called Augmented Distribution Alignment. We reveal that an essential
sampling bias exists in semi-supervised learning due to the limited number of
labeled samples, which often leads to a considerable empirical distribution
mismatch between labeled data and unlabeled data. To this end, we propose to
align the empirical distributions of labeled and unlabeled data to alleviate
the bias. On one hand, we adopt an adversarial training strategy to minimize
the distribution distance between labeled and unlabeled data as inspired by
domain adaptation works. On the other hand, to deal with the small sample size
issue of labeled data, we also propose a simple interpolation strategy to
generate pseudo training samples. Those two strategies can be easily
incorporated into existing deep neural networks. We demonstrate the
effectiveness of our proposed approach on the benchmark SVHN and CIFAR10
datasets. Our code is available at \url{https://github.com/qinenergy/adanet}.
Comment: To appear in ICCV 2019
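The interpolation strategy for generating pseudo training samples, as described in the abstract, suggests a mixup-style blending of labeled and unlabeled batches. The sketch below illustrates one plausible form of it, assuming a Beta-distributed mixing coefficient and soft pseudo-labels predicted for the unlabeled batch; the function name and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch: mixup-style interpolation between a labeled batch and
# an unlabeled batch (with model-predicted soft pseudo-labels) to produce
# pseudo training samples. Not the authors' code.
import torch

def interpolate_pseudo_samples(x_labeled, y_onehot, x_unlabeled, y_pseudo, alpha=1.0):
    """Convex-combine labeled and unlabeled batches into pseudo samples.

    x_labeled:   (B, ...) labeled inputs
    y_onehot:    (B, C)   one-hot labels
    x_unlabeled: (B, ...) unlabeled inputs
    y_pseudo:    (B, C)   soft pseudo-labels predicted for the unlabeled batch
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient
    x_mix = lam * x_labeled + (1.0 - lam) * x_unlabeled
    y_mix = lam * y_onehot + (1.0 - lam) * y_pseudo
    return x_mix, y_mix

if __name__ == "__main__":
    # Toy usage on random tensors shaped like CIFAR10 batches
    x_l = torch.randn(8, 3, 32, 32)
    y_l = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
    x_u = torch.randn(8, 3, 32, 32)
    y_u = torch.softmax(torch.randn(8, 10), dim=1)  # stand-in for model predictions
    x_mix, y_mix = interpolate_pseudo_samples(x_l, y_l, x_u, y_u)
    print(x_mix.shape, y_mix.shape)
```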
FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation
We present a Few-Shot Relation Classification Dataset (FewRel), consisting of
70,000 sentences on 100 relations derived from Wikipedia and annotated by
crowdworkers. The relation of each sentence is first recognized by distant
supervision methods, and then filtered by crowdworkers. We adapt the most
recent state-of-the-art few-shot learning methods for relation classification
and conduct a thorough evaluation of these methods. Empirical results show that
even the most competitive few-shot learning models struggle on this task,
especially as compared with humans. We also show that a range of different
reasoning skills are needed to solve our task. These results indicate that
few-shot relation classification remains an open problem and still requires
further research. Our detailed analysis points to multiple directions for future
research. All details and resources about the dataset and baselines are
released on http://zhuhao.me/fewrel.
Comment: EMNLP 2018. The first four authors contributed equally. The order is
determined by dice rolling. Visit our website http://zhuhao.me/fewrel
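Few-shot relation classification is typically evaluated with N-way K-shot episodes: each episode samples N relations, K support sentences per relation, and held-out query sentences to classify. The sketch below shows one plausible episode sampler under that assumption; the function and data layout are illustrative and do not reproduce the FewRel baseline code.

```python
# Hypothetical sketch: N-way K-shot episode sampling for few-shot relation
# classification evaluation. Names and data layout are illustrative only.
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=1):
    """dataset: dict mapping relation name -> list of annotated sentences."""
    relations = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    for label, rel in enumerate(relations):
        sentences = random.sample(dataset[rel], k_shot + n_query)
        support += [(s, label) for s in sentences[:k_shot]]
        query += [(s, label) for s in sentences[k_shot:]]
    return support, query

if __name__ == "__main__":
    # Toy data: 5 relations with 10 sentences each
    toy = defaultdict(list)
    for rel in ["founder_of", "capital_of", "member_of", "spouse", "author"]:
        toy[rel] = [f"{rel} example sentence {i}" for i in range(10)]
    support, query = sample_episode(toy, n_way=5, k_shot=1, n_query=1)
    print(len(support), len(query))  # 5 support and 5 query instances
```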