Linear Recursive Distributed Representations
Connectionist networks have been criticized for their inability to represent complex structures with systematicity. That is, while they can be trained to represent and manipulate complex objects made of several constituents, they generally fail to generalize to novel combinations of the same constituents. This paper presents a modification of Pollack's Recursive Auto-Associative Memory (RAAM) that addresses this criticism. The network uses linear units and is trained with Oja's rule, which generalizes PCA to tree-structured data. Learned representations may be linearly combined to represent new complex structures, which results in unprecedented generalization capabilities. Capacity is orders of magnitude higher than that of a RAAM trained with back-propagation. Moreover, regularities of the training set are preserved in the newly formed objects. The formation of new structures displays developmental effects similar to those observed in children learning to generalize about the argument structure of verbs.
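To make the mechanism concrete, here is a minimal NumPy sketch of the core idea: a linear encoder compresses two child vectors into one parent vector, and its weights are trained with Oja's subspace rule (a multi-unit generalization of Oja's rule that converges to the principal subspace of its inputs). The dimensions, tree sampling, and training loop are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                       # representation size (assumed)
W = rng.normal(scale=0.1, size=(d, 2 * d))   # encoder: [left; right] -> parent

def encode(left, right):
    """Compress two child vectors into one parent vector (linear)."""
    return W @ np.concatenate([left, right])

def oja_update(x, lr=0.01):
    """Oja's subspace rule: dW = lr * (y x^T - y y^T W), with y = W x.
    Drives W toward the principal subspace of the inputs x."""
    global W
    y = W @ x
    W += lr * (np.outer(y, x) - np.outer(y, y) @ W)

# Toy training loop: leaves are random "symbol" vectors; each step composes
# two leaves and applies the Oja update to the concatenated input.
leaves = rng.normal(size=(10, d))
for _ in range(1000):
    l, r = leaves[rng.integers(10)], leaves[rng.integers(10)]
    oja_update(np.concatenate([l, r]))

parent = encode(leaves[0], leaves[1])   # a learned tree-node representation
```

Because the encoder is linear, a parent for a novel pair of constituents is just a linear combination of learned quantities, which is where the generalization to unseen combinations comes from.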
A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network
In this work, we address the problem of modeling all the nodes (words or
phrases) in a dependency tree with dense representations. We propose a
recursive convolutional neural network (RCNN) architecture to capture
syntactic and compositional-semantic representations of phrases and words
in a dependency tree. Unlike the original recursive neural network, we
introduce convolution and pooling layers, which can model a variety of
compositions through feature maps and select the most informative
compositions through pooling. Based on the RCNN, we use a discriminative
model to re-rank a k-best list of candidate dependency parse trees.
Experiments show that the RCNN is very effective in improving
state-of-the-art dependency parsing on both English and Chinese datasets.
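A rough sketch of the scoring idea, under assumed details: each node's representation is built by composing the head vector with each child's vector through a shared convolution filter and max-pooling across children, a per-node score measures the plausibility of the composition, and the candidate tree with the highest total score wins the re-ranking. The tree encoding and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_conv = rng.normal(scale=0.1, size=(d, 2 * d))   # filter over (head, child)
w_score = rng.normal(scale=0.1, size=d)           # per-node plausibility scorer

def node_vector(head_vec, child_vecs):
    """Convolve the head with each child, then max-pool across children."""
    if not child_vecs:
        return head_vec
    feats = [np.tanh(W_conv @ np.concatenate([head_vec, c])) for c in child_vecs]
    return np.max(np.stack(feats), axis=0)        # element-wise max pooling

def tree_score(tree, embed):
    """Build node vectors bottom-up and sum per-node scores.
    `tree` maps a head index to its list of child indices (assumed format)."""
    def rep(i):
        return node_vector(embed[i], [rep(c) for c in tree.get(i, [])])
    return sum(float(w_score @ rep(h)) for h in tree)

def rerank(kbest, embed):
    """Return the highest-scoring tree from a k-best candidate list."""
    return max(kbest, key=lambda t: tree_score(t, embed))

# Toy example: two candidate parses of a 3-word sentence rooted at word 0.
embed = rng.normal(size=(3, d))
kbest = [{0: [1, 2]}, {0: [1], 1: [2]}]
best = rerank(kbest, embed)
```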
Self-Adaptive Hierarchical Sentence Model
The ability to accurately model a sentence at varying stages (e.g., word,
phrase, and sentence) plays a central role in natural language processing.
As an effort towards this goal, we propose a self-adaptive hierarchical
sentence model (AdaSent). AdaSent effectively forms a hierarchy of
representations from words to phrases and then to sentences through
recursive gated local composition of adjacent segments. We design a
competitive mechanism (through gating networks) to allow the representations
of the same sentence to be engaged in a particular learning task (e.g.,
classification), thereby effectively mitigating the gradient-vanishing
problem persistent in other recursive models. Both qualitative and
quantitative analyses show that AdaSent can automatically form and select
the representations suitable for the task at hand during training, yielding
superior classification performance over competitor models on 5 benchmark
datasets.
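The gated local composition can be sketched as follows, with illustrative dimensions and gating: at each level, every pair of adjacent segment vectors is mixed by a small gating network that softly chooses among the left segment, the right segment, and a newly composed candidate, and the pyramid shrinks by one segment per level until a single sentence vector remains. This is a sketch of the general mechanism, not AdaSent's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_c = rng.normal(scale=0.1, size=(d, 2 * d))   # composition of two segments
W_g = rng.normal(scale=0.1, size=(3, 2 * d))   # gating net: 3 mixing weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def compose(left, right):
    """Gated local composition of two adjacent segment vectors. The gate
    softly chooses among left, right, and the new composition."""
    x = np.concatenate([left, right])
    candidate = np.tanh(W_c @ x)
    g = softmax(W_g @ x)                  # 3 non-negative gates summing to 1
    return g[0] * left + g[1] * right + g[2] * candidate

def build_hierarchy(words):
    """One pyramid: each level composes adjacent segments of the level
    below, shrinking the sequence by one per level."""
    levels = [words]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([compose(prev[i], prev[i + 1])
                       for i in range(len(prev) - 1)])
    return levels                          # level 0: words; top: sentence

words = [rng.normal(size=d) for _ in range(5)]
sentence_vec = build_hierarchy(words)[-1][0]
```

Because the gates can route a representation from any level toward the task, gradients have short paths to every layer, which is the intuition behind the mitigated vanishing-gradient problem.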
A Multiplicative Model for Learning Distributed Text-Based Attribute Representations
In this paper we propose a general framework for learning distributed
representations of attributes: characteristics of text whose representations
can be jointly learned with word embeddings. Attributes can correspond to
document indicators (to learn sentence vectors), language indicators (to learn
distributed language representations), meta-data and side information (such as
the age, gender and industry of a blogger) or representations of authors. We
describe a third-order model where word context and attribute vectors interact
multiplicatively to predict the next word in a sequence. This leads to the
notion of conditional word similarity: how meanings of words change when
conditioned on different attributes. We perform several experimental tasks
including sentiment classification, cross-lingual document classification, and
blog authorship attribution. We also qualitatively evaluate conditional word
neighbours and attribute-conditioned text generation.
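One way to picture the third-order interaction, with assumed factorization details: the three-way context × attribute × next-word tensor is factored so that a projected context vector and a projected attribute vector interact through an element-wise product before the output layer. The conditional-neighbour function below is an illustrative definition of attribute-conditioned similarity, not necessarily the paper's exact one.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, f = 1000, 32, 64      # vocab size, embedding size, # tensor factors

E   = rng.normal(scale=0.1, size=(V, d))   # word embeddings
A   = rng.normal(scale=0.1, size=(5, d))   # attribute vectors (e.g., 5 authors)
W_w = rng.normal(scale=0.1, size=(f, d))   # factor projection of the context
W_a = rng.normal(scale=0.1, size=(f, d))   # factor projection of the attribute
W_o = rng.normal(scale=0.1, size=(V, f))   # output layer over the vocabulary

def next_word_logits(context_ids, attr_id):
    """Three-way interaction via a factored tensor: the projected context
    and the projected attribute combine multiplicatively."""
    c = E[context_ids].mean(axis=0)          # simple context aggregation
    h = (W_w @ c) * (W_a @ A[attr_id])       # element-wise (multiplicative) gating
    return W_o @ h

def conditional_neighbours(word_id, attr_id, k=5):
    """Conditional word similarity: nearest words once embeddings are
    modulated by an attribute (illustrative definition)."""
    g = E * (W_w.T @ (W_a @ A[attr_id]))     # attribute-modulated embeddings
    v = g[word_id]
    sims = g @ v / (np.linalg.norm(g, axis=1) * np.linalg.norm(v) + 1e-9)
    return np.argsort(-sims)[1:k + 1]        # skip the word itself

logits = next_word_logits([3, 17, 42], attr_id=2)
```

Swapping `attr_id` changes the gating vector and hence the effective geometry of the embedding space, which is what makes word neighbourhoods attribute-dependent.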
Distributed Representations of Sentences and Documents
Many machine learning algorithms require the input to be represented as a
fixed-length feature vector. When it comes to texts, one of the most common
fixed-length features is bag-of-words. Despite their popularity, bag-of-words
features have two major weaknesses: they lose the ordering of the words and
they ignore the semantics of the words. For example, "powerful," "strong" and
"Paris" are equally distant. In this paper, we propose Paragraph Vector, an
unsupervised algorithm that learns fixed-length feature representations from
variable-length pieces of texts, such as sentences, paragraphs, and documents.
Our algorithm represents each document by a dense vector which is trained to
predict words in the document. Its construction gives our algorithm the
potential to overcome the weaknesses of bag-of-words models. Empirical results
show that Paragraph Vectors outperform bag-of-words models as well as other
techniques for text representations. Finally, we achieve new state-of-the-art
results on several text classification and sentiment analysis tasks.
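A minimal sketch of the distributed-memory flavor of this idea (PV-DM), assuming a plain softmax output and SGD: the paragraph vector is averaged with the context word vectors to predict the target word, and the gradient flows into the paragraph vector, the word vectors, and the output weights. The shapes and learning rate below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, n_docs = 1000, 50, 100
W = rng.normal(scale=0.01, size=(V, d))       # word vectors (shared)
D = rng.normal(scale=0.01, size=(n_docs, d))  # one vector per paragraph
U = rng.normal(scale=0.01, size=(V, d))       # softmax output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pvdm_step(doc_id, context_ids, target_id, lr=0.05):
    """One PV-DM update: average the paragraph vector with the context
    word vectors, predict the target word, backprop into D, W, and U."""
    global U
    n = 1 + len(context_ids)
    h = (D[doc_id] + W[context_ids].sum(axis=0)) / n
    p = softmax(U @ h)
    p[target_id] -= 1.0                # gradient of cross-entropy wrt logits
    dh = U.T @ p
    U -= lr * np.outer(p, h)
    D[doc_id] -= lr * dh / n           # update this paragraph's vector
    W[context_ids] -= lr * dh / n      # update the context word vectors

# One toy update; at inference, a new document's vector is trained the
# same way while W and U stay frozen.
pvdm_step(doc_id=0, context_ids=[5, 9, 12], target_id=7)
```

Freezing everything but the new document's row of D at inference is what lets the model produce a fixed-length vector for text it never saw during training.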
Skip-Thought Vectors
We describe an approach for unsupervised learning of a generic, distributed
sentence encoder. Using the continuity of text from books, we train an
encoder-decoder model that tries to reconstruct the surrounding sentences of an
encoded passage. Sentences that share semantic and syntactic properties are
thus mapped to similar vector representations. We next introduce a simple
vocabulary expansion method to encode words that were not seen as part of
training, allowing us to expand our vocabulary to a million words. After
training our model, we extract and evaluate our vectors with linear models on 8
tasks: semantic relatedness, paraphrase detection, image-sentence ranking,
question-type classification and 4 benchmark sentiment and subjectivity
datasets. The end result is an off-the-shelf encoder that can produce highly
generic sentence representations that are robust and perform well in practice.
We will make our encoder publicly available.
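The vocabulary-expansion step can be sketched as a linear map learned by least squares, with toy matrices standing in for the real embedding tables: fit a matrix that maps pretrained word vectors (e.g., word2vec) to the encoder's learned embeddings on the shared vocabulary, then push unseen words' pretrained vectors through that matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_w2v, d_rnn, n_shared = 300, 620, 5000   # illustrative dimensions

X = rng.normal(size=(n_shared, d_w2v))    # pretrained vectors, shared words
Y = rng.normal(size=(n_shared, d_rnn))    # encoder's learned embeddings

# Least-squares fit: find L minimizing ||X L - Y||_F^2 over shared words.
L, *_ = np.linalg.lstsq(X, Y, rcond=None)

def expand(w2v_vector):
    """Embed a word the encoder never saw, via the learned linear map."""
    return w2v_vector @ L

unseen = rng.normal(size=d_w2v)
rnn_space_vec = expand(unseen)   # usable as an encoder input embedding
```

Because the map is linear and fit once after training, it extends the encoder's vocabulary to any word covered by the pretrained embeddings without retraining the model.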