Rehabilitation of Count-based Models for Word Vector Representations
Recent works on word representations mostly rely on predictive models.
Distributed word representations (aka word embeddings) are trained to optimally
predict the contexts in which the corresponding words tend to appear. Such
models have succeeded in capturing word similarities as well as semantic and
syntactic regularities. Instead, we aim at reviving interest in a model based
on counts. We present a systematic study of the use of the Hellinger distance
to extract semantic representations from the word co-occurrence statistics of
large text corpora. We show that this distance gives good performance on word
similarity and analogy tasks, with a proper type and size of context, and a
dimensionality reduction based on a stochastic low-rank approximation. Besides
being both simple and intuitive, this method also provides an encoding function
which can be used to infer unseen words or phrases. This becomes a clear
advantage compared to predictive models, which must be retrained on these new words.
Comment: A. Gelbukh (Ed.), Springer International Publishing Switzerland
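The key property exploited by this count-based approach is that the Hellinger distance between two context distributions equals (up to a constant) the Euclidean distance between their element-wise square roots, so a low-rank approximation of the square-rooted co-occurrence matrix yields Hellinger-preserving embeddings. A minimal NumPy sketch of this pipeline follows; the toy counts, the dimension, and the randomized-SVD details are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def hellinger_embeddings(cooc, dim, seed=0):
    """Sketch: embed words so Euclidean distance approximates the
    Hellinger distance between their context distributions."""
    # Row-normalize co-occurrence counts into context distributions.
    p = cooc / cooc.sum(axis=1, keepdims=True)
    # Hellinger distance between rows of p is (1/sqrt(2)) times the
    # Euclidean distance between rows of sqrt(p), so embed sqrt(p).
    root = np.sqrt(p)
    # Stochastic low-rank approximation (randomized range finder + SVD).
    rng = np.random.default_rng(seed)
    sketch = root @ rng.standard_normal((root.shape[1], 2 * dim))
    q, _ = np.linalg.qr(sketch)
    b = q.T @ root
    u, s, _ = np.linalg.svd(b, full_matrices=False)
    return (q @ u[:, :dim]) * s[:dim]

# Toy co-occurrence counts for 4 "words" over 6 context words:
# words 0 and 1 share contexts, as do words 2 and 3.
cooc = np.array([[5., 1., 0., 2., 1., 1.],
                 [4., 2., 1., 2., 0., 1.],
                 [0., 1., 6., 0., 3., 0.],
                 [1., 0., 5., 1., 4., 0.]])
emb = hellinger_embeddings(cooc, dim=2)
d01 = np.linalg.norm(emb[0] - emb[1])  # similar pair
d02 = np.linalg.norm(emb[0] - emb[2])  # dissimilar pair
print(d01 < d02)
```

Because the encoding is just "square-root the context distribution, then project", an unseen word or phrase can be embedded from its counts alone, which is the inference advantage the abstract highlights.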
Dirichlet belief networks for topic structure learning
Recently, considerable research effort has been devoted to developing deep
architectures for topic models to learn topic structures. Although several deep
models have been proposed to learn better topic proportions of documents, how
to leverage the benefits of deep structures for learning word distributions of
topics has not yet been rigorously studied. Here we propose a new multi-layer
generative process on word distributions of topics, where each layer consists
of a set of topics and each topic is drawn from a mixture of the topics of the
layer above. As the topics in all layers can be directly interpreted by words,
the proposed model is able to discover interpretable topic hierarchies. As a
self-contained module, our model can be flexibly adapted to different kinds of
topic models to improve their modelling accuracy and interpretability.
Extensive experiments on text corpora demonstrate the advantages of the
proposed model.
Comment: accepted in NIPS 201
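One plausible reading of the multi-layer generative process described above is: each child topic draws mixing weights over the topics of the layer above, and its word distribution is then drawn from a Dirichlet centered on that mixture. The sketch below uses this hypothetical parameterization (the concentration value and layer sizes are assumptions, not the paper's equations):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 8
topics_per_layer = [2, 3, 4]  # top layer first

# Top-layer topics: independent symmetric Dirichlet draws over the vocabulary.
layers = [rng.dirichlet(np.full(vocab_size, 0.5), size=topics_per_layer[0])]

concentration = 50.0  # hypothetical: how tightly a child tracks its parent mixture
for k in topics_per_layer[1:]:
    parents = layers[-1]
    # Each child topic mixes the parent topics with Dirichlet-drawn weights...
    weights = rng.dirichlet(np.ones(len(parents)), size=k)
    means = weights @ parents
    # ...then draws its own word distribution around that mixture.
    layer = np.array([rng.dirichlet(concentration * m) for m in means])
    layers.append(layer)

for i, layer in enumerate(layers):
    print(f"layer {i}: {layer.shape}")  # each row is a word distribution
```

Since every topic at every layer is a distribution over words, each one can be read off directly via its top-probability words, which is what makes the learned hierarchy interpretable.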
Phrase-based Image Captioning
Generating a novel textual description of an image is an interesting problem
that connects computer vision and natural language processing. In this paper,
we present a simple model that is able to generate descriptive sentences given
a sample image. This model has a strong focus on the syntax of the
descriptions. We train a purely bilinear model that learns a metric between an
image representation (generated from a previously trained Convolutional Neural
Network) and phrases that are used to describe them. The system is then able
to infer phrases from a given image sample. Based on caption syntax statistics,
we propose a simple language model that can produce relevant descriptions for a
given test image using the phrases inferred. Our approach, which is
considerably simpler than state-of-the-art models, achieves comparable results
on two popular datasets for the task: Flickr30k and the recently proposed
Microsoft COCO.
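The bilinear scoring at the core of this approach can be sketched in a few lines: a single learned matrix maps an image feature vector and a phrase vector to a relevance score, and phrases are ranked by that score. The dimensions, random features, and phrase inventory below are toy assumptions, not the paper's trained components:

```python
import numpy as np

rng = np.random.default_rng(0)
img_dim, phrase_dim = 6, 4

# A bilinear map U scores image/phrase pairs: score = image @ U @ phrase.
# In the paper's setting U would be learned; here it is random for illustration.
U = rng.standard_normal((img_dim, phrase_dim)) * 0.1

image = rng.standard_normal(img_dim)  # stand-in for a CNN feature vector
phrases = {
    "a black dog": rng.standard_normal(phrase_dim),
    "on the grass": rng.standard_normal(phrase_dim),
    "a red car": rng.standard_normal(phrase_dim),
}

scores = {p: float(image @ U @ v) for p, v in phrases.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # phrases ordered by bilinear relevance to the image
```

The inferred top phrases would then be stitched into a sentence by a language model over caption syntax, as the abstract describes.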
A Developmental Neuro-Robotics Approach for Boosting the Recognition of Handwritten Digits
Developmental psychology and neuroimaging research have
identified a close link between numbers and fingers, which can
boost initial number knowledge in children. Recent evidence shows
that simulating children's embodied strategies can also improve
machine intelligence. This article explores the application of
embodied strategies to convolutional neural network models in the
context of developmental neuro-robotics, where training
information is acquired gradually during operation rather than
being abundant and fully available, as in classical machine
learning scenarios. The
experimental analyses show that the proprioceptive information
from the robot fingers can improve network accuracy in the
recognition of handwritten Arabic digits when training examples
and epochs are few. This result is consistent with brain-imaging
and longitudinal studies of young children. In conclusion, these
findings support the relevance of embodiment in the training of
artificial agents and point to a possible humanization of the
learning process, in which the robotic body expresses the internal
processes of artificial intelligence, making them more
understandable to humans.
Evaluation of Distributional Models with the Outlier Detection Task
In this article, we define the outlier detection task and use it to compare neural-based word embeddings with transparent count-based distributional representations. Using the English Wikipedia as the text source to train the models, we observed that embeddings outperform count-based representations when their contexts are made up of bags of words. However, there are no sharp differences between the two models if the word contexts are defined as syntactic dependencies. In general, syntax-based models tend to perform better than bag-of-words models on this specific task. Similar experiments carried out for Portuguese yielded similar results. The test datasets we created for the outlier detection task in English and Portuguese are released.
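The task itself can be made concrete: given a set of semantically related words plus one intruder, a model's vectors should rank the intruder as least similar to the rest, typically via mean pairwise cosine similarity. The toy vectors below are hypothetical, not trained embeddings:

```python
import numpy as np

def detect_outlier(vectors):
    """Return the index of the word whose vector is least similar
    (by mean cosine similarity) to the others in the group."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ v.T
    np.fill_diagonal(sims, 0.0)
    compactness = sims.sum(axis=1) / (len(v) - 1)
    return int(np.argmin(compactness))

# Toy vectors: three tightly clustered "fruits" and one distant "vehicle".
words = ["apple", "pear", "plum", "truck"]
vecs = np.array([[1.0, 0.1, 0.0],
                 [0.9, 0.2, 0.1],
                 [1.0, 0.0, 0.2],
                 [0.0, 1.0, 0.9]])
print(words[detect_outlier(vecs)])  # -> truck
```

Scoring a model then reduces to counting how often the true outlier receives the lowest compactness score across the evaluation groups.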