Towards Incremental Parsing of Natural Language using Recursive Neural Networks
In this paper we develop novel algorithmic ideas for building a natural language
parser grounded upon the hypothesis of incrementality. Although widely accepted
and experimentally supported from a cognitive perspective as a model of the human
parser, the incrementality assumption has never been exploited for building automatic
parsers of unconstrained real texts. The essentials of the hypothesis are that words are
processed in a left-to-right fashion, and the syntactic structure is kept totally connected
at each step.
Our proposal relies on a machine learning technique for predicting the correctness of
partial syntactic structures that are built during the parsing process. A recursive neural
network architecture is employed for computing predictions after a training phase on
examples drawn from a corpus of parsed sentences, the Penn Treebank. Our results
indicate the viability of the approach and lay out the premises for a novel generation of
algorithms for natural language processing which more closely model human parsing.
These algorithms may prove very useful in the development of efficient parsers.
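A minimal sketch of the idea (not the paper's actual architecture; the class, the attachment strategy, and all dimensions below are illustrative): a recursive network composes a connected partial tree bottom-up, and a scorer predicts how likely that partial structure is to be correct after each word is read.

```python
# Illustrative sketch of incremental parsing with a recursive scorer.
# All names (RecursiveScorer, encode) are hypothetical, not the paper's code.
import torch
import torch.nn as nn

class RecursiveScorer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.compose = nn.Linear(2 * dim, dim)  # merge two child representations
        self.score = nn.Linear(dim, 1)          # correctness of a partial tree

    def encode(self, tree):
        # A tree is either a leaf embedding (Tensor) or a (left, right) pair.
        if isinstance(tree, torch.Tensor):
            return tree
        left, right = tree
        children = torch.cat([self.encode(left), self.encode(right)], dim=-1)
        return torch.tanh(self.compose(children))

    def forward(self, tree):
        return torch.sigmoid(self.score(self.encode(tree)))

dim = 8
model = RecursiveScorer(dim)
words = [torch.randn(dim) for _ in range(3)]  # stand-ins for word embeddings

# Left-to-right processing: after each word, attach it so the structure
# stays totally connected, and score the resulting partial tree.
partial = words[0]
for w in words[1:]:
    partial = (partial, w)            # one of many possible attachments
    print(model(partial).item())      # predicted correctness of this partial tree
```

In a real parser the scorer would rank many candidate attachments per word, and both embeddings and supervision would come from the Penn Treebank; random vectors stand in for them here.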
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques
specifically developed for analyzing and understanding the inner workings and
representations acquired by neural models of language. Approaches included
systematically manipulating the input to neural networks and investigating the
impact on their performance, testing whether interpretable knowledge can be
decoded from intermediate representations acquired by neural networks,
proposing modifications to neural network architectures to make their knowledge
state or generated output more explainable, and examining the performance of
networks on simplified or formal languages. Here we review a number of
representative studies in each category.
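The second category above, decoding interpretable knowledge from intermediate representations, is often realized as a diagnostic (probing) classifier. A minimal sketch on synthetic data, with every name and the stand-in property purely illustrative:

```python
# Diagnostic "probe": test whether a linguistic property can be decoded
# from a network's intermediate representations. The data here is synthetic;
# in practice the features would be hidden states from a trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 64))      # stand-in for hidden states
labels = (hidden[:, 0] > 0).astype(int)   # stand-in property (e.g., a POS tag)

X_tr, X_te, y_tr, y_te = train_test_split(hidden, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High probe accuracy suggests the property is linearly decodable from the
# representation; chance-level accuracy suggests it is not encoded there.
print("probe accuracy:", probe.score(X_te, y_te))
```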
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns
visual concepts, words, and semantic parsing of sentences without explicit
supervision on any of them; instead, our model learns by simply looking at
images and reading paired questions and answers. Our model builds an
object-based scene representation and translates sentences into executable,
symbolic programs. To bridge the learning of two modules, we use a
neuro-symbolic reasoning module that executes these programs on the latent
scene representation. Analogous to human concept learning, the perception
module learns visual concepts based on the language description of the object
being referred to. Meanwhile, the learned visual concepts facilitate learning
new words and parsing new sentences. We use curriculum learning to guide the
search over the large compositional space of images and language. Extensive
experiments demonstrate the accuracy and efficiency of our model on learning
visual concepts, word representations, and semantic parsing of sentences.
Further, our method allows easy generalization to new object attributes,
compositions, language concepts, scenes and questions, and even new program
domains. It also empowers applications including visual question answering and
bidirectional image-text retrieval.
Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
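A minimal sketch of the program-execution step (hypothetical instruction set and a hand-coded scene; in NS-CL the object attributes are learned latent visual concepts and the programs are produced by a semantic parser):

```python
# Execute a symbolic program over an object-based scene representation.
# The scene and the filter/count/query ops are illustrative stand-ins.
scene = [
    {"shape": "cube",   "color": "red"},
    {"shape": "sphere", "color": "red"},
    {"shape": "cube",   "color": "blue"},
]

def run_program(program, scene):
    objects = scene
    for op, arg in program:
        if op == "filter":      # keep objects matching an attribute value
            key, value = arg
            objects = [o for o in objects if o[key] == value]
        elif op == "count":     # answer: how many objects remain
            return len(objects)
        elif op == "query":     # answer: attribute of the single remaining object
            (obj,) = objects
            return obj[arg]
    return objects

# "How many red objects are there?" -> filter(color=red); count()
print(run_program([("filter", ("color", "red")), ("count", None)], scene))       # 2
# "What shape is the blue object?" -> filter(color=blue); query(shape)
print(run_program([("filter", ("color", "blue")), ("query", "shape")], scene))   # cube
```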