Message Passing for Complex Question Answering over Knowledge Graphs
Question answering over knowledge graphs (KGQA) has evolved from simple
single-fact questions to complex questions that require graph traversal and
aggregation. We propose a novel approach for complex KGQA that uses
unsupervised message passing, which propagates confidence scores obtained by
parsing an input question and matching terms in the knowledge graph to a set of
possible answers. First, we identify entity, relationship, and class names
mentioned in a natural language question, and map these to their counterparts
in the graph. Then, the confidence scores of these mappings propagate through
the graph structure to locate the answer entities. Finally, these are
aggregated depending on the identified question type. This approach can be
efficiently implemented as a series of sparse matrix multiplications mimicking
joins over small local subgraphs. Our evaluation results show that the proposed
approach outperforms the state-of-the-art on the LC-QuAD benchmark. Moreover,
we show that the performance of the approach depends only on the quality of the
question interpretation results, i.e., given a correct relevance score
distribution, our approach always produces a correct answer ranking. Our error
analysis reveals correct answers missing from the benchmark dataset and
inconsistencies in the DBpedia knowledge graph. Finally, we provide a
comprehensive evaluation of the proposed approach accompanied with an ablation
study and an error analysis, which showcase the pitfalls for each of the
question answering components in more detail.
Comment: Accepted in CIKM 201
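The score propagation described above can be sketched as a sparse vector-matrix product, where only the stored (non-zero) edges of the matched relation participate, mimicking a join over a small local subgraph. The toy graph, entity indices, and function names below are illustrative assumptions, not the authors' implementation:

```python
# A minimal sketch of unsupervised confidence propagation over a toy KG.
# All names and scores here are hypothetical, for illustration only.

# Sparse adjacency per relation: a list of (subject, object) index pairs.
n = 4  # entities e0..e3
edges_bornIn = [(0, 2), (1, 2)]  # e0 --bornIn--> e2, e1 --bornIn--> e2

# Confidence scores produced by question interpretation (term matching).
entity_scores = [0.9, 0.0, 0.0, 0.0]  # the question mentions e0
relation_score = 0.8                  # "born in" matched with score 0.8

def propagate(scores, edges, rel_score):
    """One message-passing step: a sparse vector-matrix product that
    sends each entity's confidence along the matched relation's edges."""
    out = [0.0] * n
    for subj, obj in edges:  # iterate only the stored (non-zero) entries
        out[obj] += scores[subj] * rel_score
    return out

answer_scores = propagate(entity_scores, edges_bornIn, relation_score)
# e2 accumulates 0.9 * 0.8 = 0.72; all other entities stay at 0.0
```

With a full sparse-matrix library (e.g. SciPy's CSR format) the same step is a single matrix multiplication, which is what makes the approach efficient on large graphs.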
Neural Motifs: Scene Graph Parsing with Global Context
We investigate the problem of producing structured graph representations of
visual scenes. Our work analyzes the role of motifs: regularly appearing
substructures in scene graphs. We present new quantitative insights on such
repeated structures in the Visual Genome dataset. Our analysis shows that
object labels are highly predictive of relation labels but not vice-versa. We
also find that there are recurring patterns even in larger subgraphs: more than
50% of graphs contain motifs involving at least two relations. Our analysis
motivates a new baseline: given object detections, predict the most frequent
relation between object pairs with the given labels, as seen in the training
set. This baseline improves on the previous state-of-the-art by an average of
3.6% relative improvement across evaluation settings. We then introduce Stacked
Motif Networks, a new architecture designed to capture higher order motifs in
scene graphs that further improves over our strong baseline by an average 7.1%
relative gain. Our code is available at github.com/rowanz/neural-motifs.
Comment: CVPR 2018 camera ready
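The frequency baseline described above can be sketched in a few lines: count, in the training set, how often each relation links each ordered pair of object labels, then always predict the most frequent one. The toy triples and function names are illustrative assumptions, not the authors' released code:

```python
# A minimal sketch of the frequency baseline for relation prediction.
# The training triples below are hypothetical examples.
from collections import Counter, defaultdict

# (subject label, object label, relation) triples as seen in training.
train_triples = [
    ("man", "horse", "riding"),
    ("man", "horse", "riding"),
    ("man", "horse", "near"),
    ("dog", "ball", "chasing"),
]

# Count relations per ordered (subject, object) label pair.
freq = defaultdict(Counter)
for subj, obj, rel in train_triples:
    freq[(subj, obj)][rel] += 1

def predict_relation(subj, obj):
    """Predict the most frequent training relation for this label pair."""
    counts = freq.get((subj, obj))
    return counts.most_common(1)[0][0] if counts else None
```

Here `predict_relation("man", "horse")` returns `"riding"` (seen twice versus once for `"near"`), which is exactly the kind of motif regularity the paper exploits.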
Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in the
field of machine learning. Among existing models, graph neural networks (GNNs)
are among the most effective approaches for multi-hop relational reasoning. In
fact, multi-hop relational reasoning is indispensable in many natural language
processing tasks such as relation extraction. In this paper, we propose to
generate the parameters of graph neural networks (GP-GNNs) according to natural
language sentences, which enables GNNs to process relational reasoning on
unstructured text inputs. We verify GP-GNNs in relation extraction from text.
Experimental results on a human-annotated dataset and two distantly supervised
datasets show that our model achieves significant improvements compared to
baselines. We also perform a qualitative analysis to demonstrate that our model
could discover more accurate relations by multi-hop relational reasoning.
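The core idea of conditioning a GNN's propagation on text can be sketched as follows: a generator maps the words between an entity pair to an edge-specific weight matrix, and multi-hop reasoning composes these matrices along a path of entities. The bag-of-words encoder, random projection, and all names below are simplifying assumptions standing in for the learned components:

```python
# A minimal sketch of generating GNN propagation parameters from text.
# The encoder and generator here are toy stand-ins for learned modules.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"lives": 0, "in": 1, "works": 2, "for": 3}
d = 4  # hidden size of entity representations

# A fixed random projection stands in for the trained generator that
# maps a sentence encoding to an edge-specific d x d weight matrix.
W_gen = rng.standard_normal((len(vocab), d * d))

def generate_edge_params(sentence):
    """Encode the sentence context of an entity pair (bag of words here)
    and emit the propagation matrix for that edge."""
    bow = np.zeros(len(vocab))
    for tok in sentence.split():
        if tok in vocab:
            bow[vocab[tok]] += 1.0
    return (bow @ W_gen).reshape(d, d)

# Multi-hop reasoning composes generated matrices along an entity path:
# the (A, B) edge then the (B, C) edge yields a score for relation (A, C).
M_ab = generate_edge_params("lives in")
M_bc = generate_edge_params("works for")
M_ac = M_ab @ M_bc  # composed propagation over the two-hop path
```

The composed matrix `M_ac` is what lets the model relate entities that never co-occur in a single sentence, which is the unstructured-text analogue of multi-hop traversal in a knowledge graph.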