Active Discriminative Text Representation Learning
We propose a new active learning (AL) method for text classification with
convolutional neural networks (CNNs). In AL, one selects the instances to be
manually labeled with the aim of maximizing model performance with minimal
effort. Neural models capitalize on word embeddings as representations
(features), tuning these to the task at hand. We argue that AL strategies for
multi-layered neural models should focus on selecting instances that most
affect the embedding space (i.e., induce discriminative word representations).
This is in contrast to traditional AL approaches (e.g., entropy-based
uncertainty sampling), which specify higher level objectives. We propose a
simple approach for sentence classification that selects instances containing
words whose embeddings are likely to be updated with the greatest magnitude,
thereby rapidly learning discriminative, task-specific embeddings. We extend
this approach to document classification by jointly considering: (1) the
expected changes to the constituent word representations; and (2) the model's
current overall uncertainty regarding the instance. The relative emphasis
placed on these criteria is governed by a stochastic process that favors
selecting instances likely to improve representations at the outset of
learning, and then shifts toward general uncertainty sampling as AL progresses.
Empirical results show that our method outperforms baseline AL approaches on
both sentence and document classification tasks. We also show that, as
expected, the method quickly learns discriminative word embeddings. To the best
of our knowledge, this is the first work on AL addressing neural models for
text classification.
Comment: Accepted at AAAI 201
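The selection criterion sketched in the abstract can be illustrated with a toy binary classifier. This is a simplified sketch under assumed components, not the paper's CNN-based implementation: a logistic model over averaged word embeddings, an expected-gradient-length proxy for how much each word's embedding would be updated, and an exponential schedule for switching toward uncertainty sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d = 50, 8                      # hypothetical vocabulary size / embedding dim
E = rng.normal(size=(V, d))       # toy word embeddings
w = rng.normal(size=d)            # weights of a logistic classifier over mean embeddings

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def egl_score(word_ids):
    """Expected-gradient-length proxy: expected L2 norm of the update to each
    constituent word embedding, summed over the sentence (binary case)."""
    p = sigmoid(w @ E[word_ids].mean(axis=0))
    n = len(word_ids)
    # dL/de_i = (p - y) * w / n, and E_{y~Bernoulli(p)} |p - y| = 2 p (1 - p)
    per_word = 2.0 * p * (1.0 - p) * np.linalg.norm(w) / n
    return per_word * n

def entropy_score(word_ids):
    """Standard uncertainty-sampling score: prediction entropy."""
    p = sigmoid(w @ E[word_ids].mean(axis=0))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select(pool, t, tau=5.0):
    """Pick one instance: early AL rounds favour embedding-update magnitude,
    later rounds favour plain uncertainty, via a stochastic switch."""
    use_egl = rng.random() < np.exp(-t / tau)
    score = egl_score if use_egl else entropy_score
    return max(range(len(pool)), key=lambda i: score(pool[i]))

# Unlabeled pool of toy sentences (arrays of word ids)
pool = [rng.integers(0, V, size=int(rng.integers(3, 10))) for _ in range(20)]
picked = select(pool, t=0)
```

In this toy setting the expected-gradient-length score collapses to `2p(1-p)·||w||`, so it peaks for instances the model is unsure about while still being tied to the magnitude of the embedding update; the real paper computes the analogous quantity through a CNN.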
Text Classification With Deep Neural Networks
This thesis explores different extensions of Deep Neural Networks for learning underlying natural language representations and how to apply them to Natural Language Processing tasks. Novel methods of learning lower- and higher-level features of natural language are given, in which dense word and phrase representations are derived from unlabelled corpora. Word representations are learned by training Deep Neural Networks to predict context from each sentence, while phrase representations are learned unsupervised with a Convolutional Restricted Boltzmann Machine. It is shown that word representations learned from architectures which preserve the sequential structure of text input capture word similarity and relatedness better than bag-of-words approaches. Additionally, phrase representations learned with the Convolutional Restricted Boltzmann Machine, when combined with bag-of-words features, improve results on text classification tasks over bag-of-words features alone. Besides learning word and phrase representations, to the best of my knowledge, the work in this thesis is the first to explore Deep Neural Networks for the Adverse Drug Reaction detection task, where my architectures, when used with pre-trained word representations, significantly outperform the state-of-the-art models. In addition, outputs from my proposed attentional architecture can be used to highlight important word spans without explicit training labels. For future work, I propose applying the learned representations, together with the discussed Deep Neural Networks, to other NLP tasks such as Dialog Systems, Machine Translation, or Natural Language Inference.
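The idea of learning word representations by predicting context from each sentence can be sketched as a minimal skip-gram-style trainer in NumPy. This is a toy illustration, not the thesis architecture: the corpus, dimensions, window size, and the full-softmax update are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus of word-id sentences; vocabulary and sizes are illustrative only.
corpus = [[0, 1, 2, 3], [2, 3, 4, 0]]
V, d, lr = 5, 4, 0.1

W_in = rng.normal(scale=0.1, size=(V, d))   # word representations being learned
W_out = rng.normal(scale=0.1, size=(V, d))  # output ("context") vectors

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def pairs():
    """(center, context) word-id pairs within a +/-1 word window."""
    for sent in corpus:
        for i, c in enumerate(sent):
            for j in range(max(0, i - 1), min(len(sent), i + 2)):
                if j != i:
                    yield c, sent[j]

def avg_nll():
    """Average negative log-likelihood of context words given center words."""
    losses = [-np.log(softmax(W_out @ W_in[c])[ctx] + 1e-12) for c, ctx in pairs()]
    return float(np.mean(losses))

def train_step(center, context):
    """One SGD step: make the true context word more probable under a
    full softmax over the vocabulary, given the center word's vector."""
    global W_in, W_out
    h = W_in[center]
    p = softmax(W_out @ h)
    grad_out = np.outer(p, h)
    grad_out[context] -= h                   # dL/dW_out = (p - onehot) h^T
    grad_h = W_out.T @ p - W_out[context]    # dL/dh = W_out^T (p - onehot)
    W_out -= lr * grad_out
    W_in[center] -= lr * grad_h

nll_before = avg_nll()
for _ in range(50):
    for c, ctx in pairs():
        train_step(c, ctx)
nll_after = avg_nll()   # lower than nll_before on this toy corpus
```

After training, rows of `W_in` serve as dense word vectors; on real corpora one would replace the full softmax with negative sampling or a hierarchical softmax for efficiency.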