Strong Baselines for Simple Question Answering over Knowledge Graphs with and without Neural Networks
We examine the problem of question answering over knowledge graphs, focusing
on simple questions that can be answered by the lookup of a single fact.
Adopting a straightforward decomposition of the problem into entity detection,
entity linking, relation prediction, and evidence combination, we explore
simple yet strong baselines. On the popular SimpleQuestions dataset, we find
that basic LSTMs and GRUs plus a few heuristics yield accuracies that approach
the state of the art, and techniques that do not use neural networks also
perform reasonably well. These results show that gains from sophisticated deep
learning techniques proposed in the literature are quite modest and that some
previous models exhibit unnecessary complexity.

Comment: Published in NAACL HLT 201
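The four-stage decomposition above (entity detection, entity linking, relation prediction, evidence combination) can be sketched without any neural machinery. The gazetteer lookup and keyword-overlap heuristics below are illustrative stand-ins, not the paper's actual baselines, and the tiny knowledge base is invented for the example:

```python
# Toy four-stage simple-QA pipeline; the gazetteer, keyword table,
# and KB are assumptions made for illustration only.

ENTITY_GAZETTEER = {"barack obama": "Q76", "paris": "Q90"}
RELATION_KEYWORDS = {
    "place_of_birth": {"born", "birthplace"},
    "capital_of": {"capital"},
}
KB = {("Q76", "place_of_birth"): "Honolulu",
      ("Q90", "capital_of"): "France"}

def detect_entity(question):
    """Entity detection: find the longest gazetteer span in the question."""
    q = question.lower()
    spans = [name for name in ENTITY_GAZETTEER if name in q]
    return max(spans, key=len) if spans else None

def link_entity(span):
    """Entity linking: resolve the detected span to a KB identifier."""
    return ENTITY_GAZETTEER.get(span)

def predict_relation(question):
    """Relation prediction: score each relation by keyword overlap."""
    tokens = set(question.lower().replace("?", "").split())
    scored = {rel: len(kw & tokens) for rel, kw in RELATION_KEYWORDS.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] > 0 else None

def answer(question):
    """Evidence combination: look up the (entity, relation) fact."""
    span = detect_entity(question)
    rel = predict_relation(question)
    if span is None or rel is None:
        return None
    return KB.get((link_entity(span), rel))
```

For instance, `answer("Where was Barack Obama born?")` returns `"Honolulu"`. In the paper, the heuristic components are replaced by learned models (e.g. LSTMs/GRUs for relation prediction), but the stage boundaries are the same.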
Structural Embedding of Syntactic Trees for Machine Comprehension
Deep neural networks for machine comprehension typically utilize only word
or character embeddings without explicitly taking advantage of structured
linguistic information such as constituency trees and dependency trees. In this
paper, we propose structural embedding of syntactic trees (SEST), an
algorithmic framework that utilizes structured information and encodes it into
vector representations that can boost the performance of algorithms for machine
comprehension. We evaluate our approach using a state-of-the-art neural
attention model on the SQuAD dataset. Experimental results demonstrate that our
model can accurately identify the syntactic boundaries of sentences and
extract answers that are more syntactically coherent than those of the baseline methods.
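As a rough illustration of the general idea (not the paper's exact SEST algorithm), one way to turn a constituency tree into per-token structural vectors is to embed each token's path of constituent labels from leaf to root; the tag inventory, tree encoding, and mean pooling below are all assumptions:

```python
# Illustrative structural embedding of a constituency tree.
# Tag inventory, tuple tree encoding, and mean pooling are assumptions.

TAGS = ["S", "NP", "VP", "PP", "DT", "NN", "VBD"]
TAG_INDEX = {t: i for i, t in enumerate(TAGS)}

def one_hot(tag):
    v = [0.0] * len(TAGS)
    v[TAG_INDEX[tag]] = 1.0
    return v

def path_embedding(path):
    """Pool a root-to-leaf label path into one vector (mean of one-hots)."""
    vecs = [one_hot(t) for t in path]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def structural_embeddings(tree):
    """tree: nested (label, children) tuples; a leaf is (POS tag, word).
    Returns [(word, vector)] pairing each token with its path embedding,
    which could then be concatenated with its word embedding."""
    out = []
    def walk(node, path):
        label, rest = node
        if isinstance(rest, str):          # leaf: (POS tag, word)
            out.append((rest, path_embedding(path + [label])))
        else:
            for child in rest:
                walk(child, path + [label])
    walk(tree, [])
    return out

# "The dog barked" as (label, children) tuples.
tree = ("S", [("NP", [("DT", "The"), ("NN", "dog")]),
              ("VP", [("VBD", "barked")])])
```

Here the token "The" gets a vector with weight 1/3 on each of S, NP, and DT, so tokens sharing structural context receive similar vectors.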
A Diagram Is Worth A Dozen Images
Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPGs) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
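The diagram parse graph representation can be pictured as a typed graph over detected constituents. The node kinds, relation names, and food-chain example below are illustrative assumptions, not the paper's exact schema:

```python
# Minimal sketch of a diagram parse graph (DPG); node kinds and
# relation labels are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Constituent:
    """A detected diagram element (e.g. an object blob, arrow, or text box)."""
    cid: int
    kind: str           # assumed kinds: "object", "arrow", "text"
    label: str = ""

@dataclass
class DPG:
    """A diagram parse graph: constituents plus typed relationships."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src_cid, relation, dst_cid)

    def add(self, c):
        self.nodes[c.cid] = c

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def neighbors(self, cid, relation=None):
        """Constituents reachable from cid, optionally filtered by relation."""
        return [d for s, r, d in self.edges
                if s == cid and (relation is None or r == relation)]

# A food-chain fragment: grass --arrow--> rabbit.
g = DPG()
g.add(Constituent(0, "object", "grass"))
g.add(Constituent(1, "object", "rabbit"))
g.add(Constituent(2, "arrow"))
g.relate(0, "linked_by", 2)
g.relate(2, "points_to", 1)
```

A question-answering model can then attend over such nodes and edges, e.g. following `linked_by` and `points_to` edges from "grass" to recover that the arrow points to "rabbit".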