Bifurcation analysis of a Gradient Symbolic Computation model of incremental processing
Language is ordered in time, and an incremental processing system encounters temporary ambiguity in the middle of sentence comprehension. An optimal incremental processing system must solve two computational problems: on the one hand, it has to keep multiple possible interpretations without choosing one over the others; on the other hand, it must reject interpretations inconsistent with context. We propose a recurrent neural network model of incremental processing that performs stochastic optimization of a set of soft, local constraints to successfully build a globally coherent structure. Bifurcation analysis of the model makes clear when and why the model parses a sentence successfully and when and why it does not; the garden path and local coherence effects are discussed. Our model provides neurally plausible solutions to the computational problems arising in incremental processing.
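The core mechanism the abstract describes, stochastic optimization of soft, local constraints that settles onto one globally coherent interpretation, can be sketched in a toy form. Everything below (the two-parse blend state, the particular harmony function, the evidence values, and the annealing schedule) is illustrative and not taken from the paper:

```python
import numpy as np

# Toy sketch (not the authors' implementation): maintain a "blend" over two
# candidate parses, x = (support for parse A, support for parse B), and do
# noisy gradient ascent on a harmony function encoding soft constraints.
rng = np.random.default_rng(0)

evidence = np.array([0.6, 0.4])   # soft input support for parse A vs. parse B

def harmony(x):
    # reward evidence fit, penalize keeping both parses, prefer a total of 1
    return evidence @ x - x[0] * x[1] - (x.sum() - 1.0) ** 2

def grad(x):
    # analytic gradient of the harmony function above
    s = x.sum() - 1.0
    return np.array([evidence[0] - x[1] - 2 * s,
                     evidence[1] - x[0] - 2 * s])

x = np.array([0.5, 0.5])          # start as an even blend of both parses
T = 0.1                           # noise temperature, annealed toward 0
for step in range(2000):
    x = x + 0.05 * grad(x) + 0.01 * np.sqrt(T) * rng.normal(size=2)
    x = np.clip(x, 0.0, 1.0)
    T *= 0.995                    # cool: commit gradually to one parse

print(x)  # settles near a discrete state favoring the better-supported parse
```

Early in the trajectory both parses are partially active (the temporary ambiguity the abstract mentions); as the noise anneals, the dynamics commit to the interpretation with more evidence.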
Question-Answering with Grammatically-Interpretable Representations
We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In
our application of TPRN, internal representations learned by end-to-end
optimization in a deep neural network performing a textual question-answering
(QA) task can be interpreted using basic concepts from linguistic theory. No
performance penalty need be paid for this increased interpretability: the
proposed model performs comparably to a state-of-the-art system on the SQuAD QA
task. The internal representation which is interpreted is a Tensor Product
Representation: for each input word, the model selects a symbol to encode the
word, and a role in which to place the symbol, and binds the two together. The
selection is via soft attention. The overall interpretation is built from
interpretations of the symbols, as recruited by the trained model, and
interpretations of the roles as used by the model. We find support for our
initial hypothesis that symbols can be interpreted as lexical-semantic word
meanings, while roles can be interpreted as approximations of grammatical roles
(or categories) such as subject, wh-word, determiner, etc. Fine-grained
analysis reveals specific correspondences between the learned roles and parts
of speech as assigned by a standard tagger (Toutanova et al. 2003), and finds
several discrepancies in the model's favor. In this sense, the model learns
significant aspects of grammar, after having been exposed solely to
linguistically unannotated text, questions, and answers: no prior linguistic
knowledge is given to the model. What is given is the means to build
representations using symbols and roles, with an inductive bias favoring use of
these in an approximately discrete manner.
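The binding step the abstract describes (for each word, softly select a symbol and a role, then bind them together) is the standard Tensor Product Representation construction. A minimal sketch, with illustrative dimensions and attention weights not taken from the paper:

```python
import numpy as np

# Minimal Tensor Product Representation (TPR) sketch: soft attention selects
# a symbol vector and a role vector; an outer product binds them.
rng = np.random.default_rng(0)

d_sym, d_role = 4, 3
symbols = rng.normal(size=(5, d_sym))   # embeddings for 5 candidate symbols
roles = np.eye(d_role)                  # orthonormal role vectors

def soft_select(logits, table):
    # softmax attention over rows of the table -> blended vector
    a = np.exp(logits - logits.max())
    a /= a.sum()
    return a @ table

# For one input word: attention picks a symbol and a role (logits here are
# made up; in TPRN they come from the trained network), then binds them.
sym = soft_select(np.array([3.0, 0.1, 0.1, 0.1, 0.1]), symbols)
role = soft_select(np.array([0.1, 4.0, 0.1]), roles)
binding = np.outer(sym, role)           # d_sym x d_role binding for this word

# Unbinding: project the binding onto the role to recover the symbol
recovered = binding @ role / (role @ role)
print(np.allclose(recovered, sym))
```

A sentence-level representation is then the sum of the per-word bindings; with (near-)orthonormal roles, each symbol can be approximately recovered by unbinding with its role, which is what makes the learned representations interpretable.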