Automatic extraction of paraphrastic phrases from medium size corpora
This paper presents a versatile system intended to acquire paraphrastic
phrases from a representative corpus. In order to decrease the time spent on
the elaboration of resources for NLP systems (for example Information
Extraction, IE hereafter), we suggest using a machine learning system that
helps define new templates and associated resources. This knowledge is
automatically derived from the text collection, in interaction with a large
semantic network.
Discrimination nets, production systems and semantic networks: Elements of a unified framework
A number of formalisms have been used in cognitive science to account for cognition in general and learning in particular. While this variety denotes a healthy state of theoretical development, it somewhat hampers communication between researchers championing different approaches and makes comparison between theories difficult. In addition, it has the consequence that researchers tend to study cognitive phenomena best suited to their favorite formalism. It is therefore desirable to propose frameworks which span traditional formalisms.
In this paper, we pursue two goals: first, to show how three (symbolic) formalisms widely used in theorizing about and in simulating human cognition—discrimination nets, semantic networks and production systems—may be used in a single, conceptually unified framework; and second to show how this framework can be used to develop a comprehensive theory of learning. Within this theory, learning is construed as (a) developing perceptual and conceptual discrimination nets, (b) adding semantic links, and (c) creating productions.
We start by giving a brief description of each of these formalisms; we then describe a theoretical framework that incorporates the three formalisms, and show how these may coexist. Throughout this description, examples from chess, a highly studied field of expertise and a classical object of study in cognitive science, will be provided. These examples will illustrate how the framework can be worked out into a more detailed cognitive theory. Finally, we draw some theoretical consequences of the framework proposed here.
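The abstract above does not give an implementation, but the idea of a discrimination net — a tree of feature tests that sorts a stimulus down to a stored "image" — can be illustrated with a minimal sketch. The chess-flavored example, the `Node`/`sort` names, and the feature keys are all hypothetical, chosen only to make the mechanism concrete:

```python
# A discrimination net is a tree of feature tests; sorting a stimulus
# means following test outcomes down to a leaf (a stored "image").
class Node:
    def __init__(self, test=None, branches=None, image=None):
        self.test = test              # function: stimulus -> branch label
        self.branches = branches or {}
        self.image = image            # content stored at this node

def sort(net, stimulus):
    """Sort a stimulus through the net, stopping when no branch matches."""
    node = net
    while node.test is not None:
        label = node.test(stimulus)
        if label not in node.branches:
            break                     # unfamiliar stimulus: stop here
        node = node.branches[label]
    return node.image

# Hypothetical chess example: discriminate piece placements.
net = Node(
    test=lambda s: s["piece"],
    branches={
        "pawn": Node(image="pawn pattern"),
        "king": Node(
            test=lambda s: s["square"],
            branches={"g1": Node(image="castled king")},
        ),
    },
)
```

Learning, in the sense the abstract describes, would then correspond to growing this tree: adding a new test and branch whenever sorting stops at a node whose stored image does not match the stimulus.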
A Taxonomy of Deep Convolutional Neural Nets for Computer Vision
Traditional architectures for solving computer vision problems and the degree
of success they enjoyed have been heavily reliant on hand-crafted features.
However, of late, deep learning techniques have offered a compelling
alternative -- that of automatically learning problem-specific features. With
this new paradigm, every problem in computer vision is now being re-examined
from a deep learning perspective. Therefore, it has become important to
understand what kind of deep networks are suitable for a given problem.
Although general surveys of this fast-moving paradigm (i.e. deep-networks)
exist, a survey specific to computer vision is missing. We specifically
consider one form of deep networks widely used in computer vision -
convolutional neural networks (CNNs). We start with "AlexNet" as our base CNN
and then examine the broad variations proposed over time to suit different
applications. We hope that our recipe-style survey will serve as a guide,
particularly for novice practitioners intending to use deep-learning techniques
for computer vision.
Comment: Published in Frontiers in Robotics and AI (http://goo.gl/6691Bm)
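The building block shared by AlexNet and the CNN variants this survey covers is the convolution (strictly, cross-correlation) of an image with a learned kernel. The following is a minimal NumPy sketch of that single operation, not code from the survey; the edge-detecting kernel and the tiny input are illustrative choices:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid cross-correlation of a 2-D image with a 2-D kernel,
    the basic operation of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

# A hand-crafted vertical-edge kernel applied to an image with a step edge;
# in a CNN, kernels like this are learned rather than designed.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)
response = conv2d(image, kernel)  # uniform response: the edge spans every window
```

The survey's point is precisely that the shift from hand-crafted kernels (as above) to learned ones is what the deep-learning paradigm changed.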
Multi-modal gated recurrent units for image description
Using a natural language sentence to describe the content of an image is a
challenging but very important task. It is challenging because a description
must not only capture objects contained in the image and the relationships
among them, but also be relevant and grammatically correct. In this paper we
present a multi-modal embedding model based on gated recurrent units (GRUs)
that can generate variable-length descriptions for a given image. In the
training step,
we apply the convolutional neural network (CNN) to extract the image feature.
Then the feature is imported into the multi-modal GRU as well as the
corresponding sentence representations. The multi-modal GRU learns the
inter-modal relations between image and sentence. In the testing step, when
an image is imported to our multi-modal GRU model, a sentence which describes
the image content is generated. The experimental results demonstrate that our
multi-modal GRU model obtains the state-of-the-art performance on Flickr8K,
Flickr30K and MS COCO datasets.
Comment: 25 pages, 7 figures, 6 tables, magazin
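The recurrent core of such a captioning model is the GRU update itself. As a hedged illustration (not the paper's model, and omitting the multi-modal embedding and the CNN image encoder), one GRU step in the common Cho et al. formulation can be sketched in NumPy; the weight shapes and random inputs are placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: gates decide how much of the previous state
    to keep and how much of the candidate state to write."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde           # blend old and new state

# Toy dimensions and random parameters, purely for illustration.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
params = tuple(rng.standard_normal(shape) for shape in
               [(d_h, d_in), (d_h, d_h)] * 3)
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # run over a 5-step input sequence
    h = gru_step(x, h, params)
```

In a captioning model of the kind the abstract describes, `x` at each step would be a word embedding (and, in the multi-modal variant, the CNN image feature would enter the same embedding space), with `h` driving the prediction of the next word.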