Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning
Most successful information extraction systems operate with access to a large
collection of documents. In this work, we explore the task of acquiring and
incorporating external evidence to improve extraction accuracy in domains where
the amount of training data is scarce. This process entails issuing search
queries, extraction from new sources and reconciliation of extracted values,
which are repeated until sufficient evidence is collected. We approach the
problem using a reinforcement learning framework where our model learns to
select optimal actions based on contextual information. We employ a deep
Q-network, trained to optimize a reward function that reflects extraction
accuracy while penalizing extra effort. Our experiments on two databases -- of
shooting incidents, and food adulteration cases -- demonstrate that our system
significantly outperforms traditional extractors and a competitive
meta-classifier baseline.
Comment: Appearing in EMNLP 2016 (12 pages incl. supplementary material).
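The core idea of the abstract above, a Q-learning agent whose reward reflects extraction accuracy while penalizing extra search effort, can be sketched minimally. The function names, the step penalty value, and the epsilon-greedy policy shown here are illustrative assumptions, not the authors' implementation:

```python
import random

# Hypothetical reward shaping: reward the gain in extraction accuracy,
# subtract a cost for each extra search/extraction step (value assumed).
STEP_PENALTY = 0.1

def reward(accuracy_after: float, accuracy_before: float, steps_taken: int) -> float:
    """Accuracy improvement minus a per-step effort penalty."""
    return (accuracy_after - accuracy_before) - STEP_PENALTY * steps_taken

def epsilon_greedy(q_values: dict, epsilon: float = 0.1) -> str:
    """Select the action with the highest estimated Q-value,
    exploring a random action with probability epsilon."""
    if random.random() < epsilon:
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)
```

In the paper the Q-values come from a deep Q-network over contextual features; here a plain dictionary stands in so the action-selection and reward logic can be read in isolation.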
FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation
We present a Few-Shot Relation Classification Dataset (FewRel), consisting of
70,000 sentences on 100 relations derived from Wikipedia and annotated by
crowdworkers. The relation of each sentence is first recognized by distant
supervision methods, and then filtered by crowdworkers. We adapt the most
recent state-of-the-art few-shot learning methods for relation classification
and conduct a thorough evaluation of these methods. Empirical results show that
even the most competitive few-shot learning models struggle on this task,
especially when compared with human performance. We also show that a range of different
reasoning skills are needed to solve our task. These results indicate that
few-shot relation classification remains an open problem and still requires
further research. Our detailed analysis points out multiple directions for
future research. All details and resources about the dataset and baselines are
released at http://zhuhao.me/fewrel.
Comment: EMNLP 2018. The first four authors contributed equally; the order was
determined by dice rolling. Visit our website http://zhuhao.me/fewre
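The evaluation setting described above is the standard N-way K-shot episode: sample N relations, then K labeled support sentences and a few query sentences per relation. A minimal sampler is sketched below; the dataset layout (a mapping from relation name to its list of sentences) and all identifiers are assumptions for illustration, not FewRel's released code:

```python
import random

def sample_episode(data, n_way=5, k_shot=1, n_query=1, rng=None):
    """Sample one N-way K-shot episode from a {relation: [sentences]} dict:
    pick n_way relations, then k_shot support and n_query query sentences
    per relation, with no overlap between support and query."""
    rng = rng or random.Random()
    relations = rng.sample(sorted(data), n_way)
    support, query = {}, {}
    for rel in relations:
        sentences = rng.sample(data[rel], k_shot + n_query)
        support[rel] = sentences[:k_shot]
        query[rel] = sentences[k_shot:]
    return support, query
```

A few-shot model is then scored by classifying each query sentence against the support set, averaged over many sampled episodes.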