Bootstrapping Lexical Choice via Multiple-Sequence Alignment
An important component of any generation system is the mapping dictionary, a
lexicon of elementary semantic expressions and corresponding natural language
realizations. Typically, labor-intensive knowledge-based methods are used to
construct the dictionary. We instead propose to acquire it automatically via a
novel multiple-pass algorithm employing multiple-sequence alignment, a
technique commonly used in bioinformatics. Crucially, our method leverages
latent information contained in multi-parallel corpora -- datasets that supply
several verbalizations of the corresponding semantics rather than just one.
We used our techniques to generate natural language versions of
computer-generated mathematical proofs, with good results on both a
per-component and overall-output basis. For example, in evaluations involving a
dozen human judges, our system produced output whose readability and
faithfulness to the semantic input rivaled that of a traditional generation
system.
Comment: 8 pages; to appear in the proceedings of EMNLP 2002.
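The core primitive here is a sequence-alignment dynamic program. Below is a minimal, purely illustrative Python sketch of pairwise Needleman-Wunsch alignment between two verbalizations of the same semantics; the paper's multiple-sequence alignment generalizes this to several verbalizations at once, and the scoring constants and tokenization are assumptions, not the paper's.

```python
# Minimal sketch (not the paper's implementation): pairwise alignment of two
# verbalizations via Needleman-Wunsch, the DP primitive that
# multiple-sequence alignment generalizes. Scores are illustrative.
def align(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # match/substitute
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    # Traceback to recover the aligned token pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        sub = match if a[i - 1] == b[j - 1] else mismatch
        if score[i][j] == score[i - 1][j - 1] + sub:
            pairs.append((a[i - 1], b[j - 1])); i -= 1; j -= 1
        elif score[i][j] == score[i - 1][j] + gap:
            pairs.append((a[i - 1], "-")); i -= 1
        else:
            pairs.append(("-", b[j - 1])); j -= 1
    while i > 0:
        pairs.append((a[i - 1], "-")); i -= 1
    while j > 0:
        pairs.append(("-", b[j - 1])); j -= 1
    return score[n][m], pairs[::-1]

# Aligning two verbalizations of the same semantics exposes the slots
# where lexical realizations vary.
print(align("the sum of x and y".split(), "x plus y".split()))
```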
Language Understanding for Text-based Games Using Deep Reinforcement Learning
In this paper, we consider the task of learning control policies for
text-based games. In these games, all interactions in the virtual world are
through text and the underlying state is not observed. The resulting language
barrier makes such environments challenging for automatic game players. We
employ a deep reinforcement learning framework to jointly learn state
representations and action policies using game rewards as feedback. This
framework enables us to map text descriptions into vector representations that
capture the semantics of the game states. We evaluate our approach on two game
worlds, comparing against baselines using bag-of-words and bag-of-bigrams for
state representations. Our algorithm outperforms the baselines on both worlds,
demonstrating the importance of learning expressive representations.
Comment: 11 pages; Appearing at EMNLP 2015.
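One plausible minimal instantiation of this framework, sketched in PyTorch: a recurrent encoder maps the state text to a vector, and linear heads produce Q-values over action and object words. All dimensions, the two-head command structure, and the mean-pooling choice are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of the idea (not the authors' code): an LSTM embeds the
# state text, and two linear heads score action words and object words
# as Q-values. Sizes and the two-head action space are assumptions.
import torch
import torch.nn as nn

class TextDQN(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_actions, n_objects):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.q_action = nn.Linear(hidden_dim, n_actions)  # Q(s, action word)
        self.q_object = nn.Linear(hidden_dim, n_objects)  # Q(s, object word)

    def forward(self, tokens):                # tokens: (batch, seq_len) ids
        h, _ = self.lstm(self.embed(tokens))  # (batch, seq_len, hidden)
        state = h.mean(dim=1)                 # mean-pool into a state vector
        return self.q_action(state), self.q_object(state)

# A command such as "eat apple" is the argmax action word paired with
# the argmax object word.
net = TextDQN(vocab_size=1000, embed_dim=32, hidden_dim=64,
              n_actions=6, n_objects=20)
qa, qo = net(torch.randint(0, 1000, (1, 12)))
print(qa.argmax().item(), qo.argmax().item())
```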
Grounding Language for Transfer in Deep Reinforcement Learning
In this paper, we explore the utilization of natural language to drive
transfer for reinforcement learning (RL). Despite the widespread application
of deep RL techniques, learning generalized policy representations that work
across domains remains a challenging problem. We demonstrate that textual
descriptions of environments provide a compact intermediate channel to
facilitate effective policy transfer. Specifically, by learning to ground the
meaning of text to the dynamics of the environment such as transitions and
rewards, an autonomous agent can effectively bootstrap policy learning on a new
domain given its description. We employ a model-based RL approach consisting of
a differentiable planning module, a model-free component and a factorized state
representation to effectively use entity descriptions. Our model outperforms
prior work on both transfer and multi-task scenarios in a variety of different
environments. For instance, we achieve up to 14% and 11.5% absolute improvement
over prior models in terms of average and initial rewards,
respectively.
Comment: JAIR 2018.
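One way to picture the factorized state representation: each entity's representation combines a learned entity vector with an encoding of its natural-language description, so a new domain's entities can be grounded through their descriptions alone. The sketch below is illustrative only; the encoder, dimensions, and bag-of-words pooling are assumptions, not the paper's model.

```python
# Hedged sketch of a factorized state representation (illustrative, not
# the paper's exact model): combine a learned entity embedding with an
# encoding of that entity's text description, so unseen entities in a
# new domain can be grounded through their descriptions.
import torch
import torch.nn as nn

class FactorizedState(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, dim)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, entity_vec, desc_tokens):
        # desc_tokens: (num_words,) ids of the entity's text description
        desc = self.word_embed(desc_tokens).mean(dim=0)  # bag-of-words enc.
        return torch.tanh(self.mix(torch.cat([entity_vec, desc])))

enc = FactorizedState(vocab_size=500, dim=16)
entity = torch.randn(16)            # entity-specific embedding
desc = torch.randint(0, 500, (7,))  # e.g. "a spider that chases the agent"
print(enc(entity, desc).shape)      # torch.Size([16])
```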
Molding CNNs for text: non-linear, non-consecutive convolutions
The success of deep learning often derives from well-chosen operational
building blocks. In this work, we revise the temporal convolution operation in
CNNs to better adapt it to text processing. Instead of concatenating word
representations, we appeal to tensor algebra and use low-rank n-gram tensors to
directly exploit interactions between words already at the convolution stage.
Moreover, we extend the n-gram convolution to non-consecutive words to
recognize patterns with intervening words. Through a combination of low-rank
tensors and pattern weighting, we can efficiently evaluate the resulting
convolution operation via dynamic programming. We test the resulting
architecture on standard sentiment classification and news categorization
tasks. Our model achieves state-of-the-art performance both in terms of
accuracy and training speed. For instance, we obtain 51.2% accuracy on the
fine-grained sentiment classification task.
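The dynamic program can be sketched concretely: running accumulators for 1-, 2-, and 3-gram patterns are updated once per word, and a decay factor down-weights patterns with intervening words. This numpy sketch is a simplified reading of the recurrence; the initialization, dimensions, and nonlinearity are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged numpy sketch of the dynamic program (dimensions and init are
# illustrative): non-consecutive trigram features are accumulated with a
# decay factor lam, so each skipped word multiplies a pattern's weight
# by lam instead of discarding the pattern.
import numpy as np

def nonconsecutive_conv(X, W1, W2, W3, lam=0.5):
    # X: (seq_len, d_in) word vectors; W*: (d_in, d_out) low-rank factors
    d_out = W1.shape[1]
    c1 = np.zeros(d_out)  # weighted sum over 1-gram prefixes
    c2 = np.zeros(d_out)  # ... over (possibly gapped) 2-gram prefixes
    c3 = np.zeros(d_out)  # ... over (possibly gapped) 3-gram prefixes
    H = []
    for x in X:
        # Update longest patterns first so each step reads the previous
        # values of the shorter accumulators.
        c3 = lam * c3 + c2 * (x @ W3)  # extend 2-grams to 3-grams
        c2 = lam * c2 + c1 * (x @ W2)  # extend 1-grams to 2-grams
        c1 = lam * c1 + x @ W1         # start a new 1-gram here
        H.append(np.tanh(c3))
    return np.stack(H)                 # (seq_len, d_out) feature maps

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))
H = nonconsecutive_conv(X, *(rng.normal(size=(8, 4)) for _ in range(3)))
print(H.shape)  # (10, 4)
```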
Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning
Most successful information extraction systems operate with access to a large
collection of documents. In this work, we explore the task of acquiring and
incorporating external evidence to improve extraction accuracy in domains where
the amount of training data is scarce. This process entails issuing search
queries, extraction from new sources and reconciliation of extracted values,
which are repeated until sufficient evidence is collected. We approach the
problem using a reinforcement learning framework where our model learns to
select optimal actions based on contextual information. We employ a deep
Q-network, trained to optimize a reward function that reflects extraction
accuracy while penalizing extra effort. Our experiments on two databases -- of
shooting incidents, and food adulteration cases -- demonstrate that our system
significantly outperforms traditional extractors and a competitive
meta-classifier baseline.
Comment: Appearing in EMNLP 2016 (12 pages incl. supplementary material).
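A self-contained, purely illustrative sketch of the episode loop described above: each step issues another query, reconciles the newly extracted values with the current ones, and earns the change in extraction accuracy minus a per-step cost that penalizes extra effort. Every class, value, and constant here is a hypothetical stand-in, not the authors' implementation.

```python
# Hedged, self-contained sketch of the decision loop; every name and
# threshold below is an illustrative stand-in, not the authors' API.
import random

class StubEnv:
    """Pretends to search, extract, and score values for one incident."""
    def search_and_extract(self):
        return {"num_victims": random.choice([2, 3, 3, 3])}
    def accuracy(self, values):
        return 1.0 if values.get("num_victims") == 3 else 0.0

def run_episode(env, step_cost=0.05, max_steps=5):
    values = env.search_and_extract()  # base extractor's first guess
    acc, total_reward = env.accuracy(values), 0.0
    for _ in range(max_steps):
        # Reconciliation policy stub: trust the newly extracted value.
        values.update(env.search_and_extract())
        new_acc = env.accuracy(values)
        total_reward += (new_acc - acc) - step_cost  # accuracy gain - effort
        acc = new_acc
    return values, total_reward

random.seed(0)
print(run_episode(StubEnv()))
```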