Contextual Graph Attention for Answering Logical Queries over Incomplete Knowledge Graphs
Recently, several studies have explored methods for using KG embeddings to
answer logical queries. These approaches either treat embedding learning and
query answering as two separate learning tasks, or fail to account for the
varying contributions of different query paths. We propose to leverage a
graph attention mechanism to handle the unequal contributions of different
query paths. However, commonly used graph attention assumes that the center
node embedding is provided, which is unavailable in this task since the
center node is the one to be predicted. To solve this problem, we propose a
multi-head attention-based end-to-end logical query answering model, called
the Contextual Graph Attention model (CGA), which uses an initial neighborhood
aggregation layer to generate the center embedding; the whole model is trained
jointly on the original KG structure as well as on sampled query-answer pairs.
We also introduce two new datasets, DB18 and WikiGeo19, which are
substantially larger than the existing datasets and contain many more relation
types, and use them to evaluate the performance of the proposed model. Our
results show that the proposed CGA, despite having fewer learnable parameters,
consistently outperforms the baseline models on both datasets as well as on
the Bio dataset.

Comment: 8 pages, 3 figures, camera-ready version of article accepted to
K-CAP 2019, Marina del Rey, California, United States
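The core idea of attending over query paths when no center embedding is available can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the mean-based initial aggregation standing in for the missing center embedding, the per-head bilinear scoring, and the names `attention_center_embedding` and `W_heads` are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_center_embedding(neighbor_embs, W_heads):
    """Aggregate per-path embeddings into a center-node embedding.

    neighbor_embs: (n_paths, d) array, one embedding per query path.
    W_heads: (n_heads, d, d) learnable projection per attention head.

    An initial mean over neighbors stands in for the unavailable
    center embedding; attention then reweights the query paths.
    """
    init = neighbor_embs.mean(axis=0)        # initial aggregation layer
    heads = []
    for W in W_heads:
        scores = neighbor_embs @ W @ init    # unequal per-path scores
        alpha = softmax(scores)              # attention weights over paths
        heads.append(alpha @ neighbor_embs)  # weighted sum of path embeddings
    return np.mean(heads, axis=0)            # combine attention heads

# Usage: 3 query paths, 4-dimensional embeddings, 2 heads.
rng = np.random.default_rng(0)
center = attention_center_embedding(rng.normal(size=(3, 4)),
                                    rng.normal(size=(2, 4, 4)))
```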
Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells
Unsupervised text encoding models have recently fueled substantial progress
in NLP. The key idea is to use neural networks to convert words in texts to
vector space representations based on word positions in a sentence and their
contexts, which are suitable for end-to-end training of downstream tasks. We
see a strikingly similar situation in spatial analysis, which focuses on
incorporating both absolute positions and spatial contexts of geographic
objects such as POIs into models. A general-purpose representation model for
space is valuable for a multitude of tasks. However, no such general model
exists to date beyond simply applying discretization or feed-forward nets to
coordinates, and little effort has been put into jointly modeling
distributions with vastly different characteristics, which commonly emerge
from GIS data.
Meanwhile, Nobel Prize-winning Neuroscience research shows that grid cells in
mammals provide a multi-scale periodic representation that functions as a
metric for location encoding and is critical for recognizing places and for
path integration. Therefore, we propose a representation learning model called
Space2Vec to encode the absolute positions and spatial relationships of places.
We conduct experiments on two real-world geographic datasets for two different
tasks: 1) predicting the types of POIs given their positions and context, and
2) image classification leveraging the images' geo-locations. Results show
that, because of its
multi-scale representations, Space2Vec outperforms well-established ML
approaches such as RBF kernels, multi-layer feed-forward nets, and tile
embedding approaches for location modeling and image classification tasks.
Detailed analysis shows that each baseline can, at best, handle the
distribution well at a single scale while performing poorly at the others. In
contrast, Space2Vec's multi-scale representation can handle distributions at
different scales.

Comment: 15 pages; accepted to ICLR 2020 as a spotlight paper
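The grid-cell-inspired multi-scale encoding can be illustrated with a simplified axis-aligned sinusoidal encoder: wavelengths spaced geometrically across scales, with sine/cosine responses per axis. Space2Vec itself learns a network on top of multi-directional periodic features, so the sketch below only conveys the multi-scale idea; the function name `space2vec_encode` and its parameters are assumptions.

```python
import numpy as np

def space2vec_encode(coords, n_scales=16, lam_min=1.0, lam_max=10000.0):
    """Multi-scale sinusoidal location encoder (simplified sketch).

    coords: (n, 2) array of (x, y) positions.
    Wavelengths are geometrically spaced between lam_min and lam_max,
    mirroring the multi-scale periodic responses of grid cells.
    Returns an (n, n_scales * 4) matrix: sin and cos per axis per scale.
    """
    coords = np.asarray(coords, dtype=float)
    g = lam_max / lam_min
    feats = []
    for s in range(n_scales):
        lam = lam_min * g ** (s / max(n_scales - 1, 1))  # wavelength at scale s
        phase = 2 * np.pi * coords / lam                 # (n, 2) phases
        feats.append(np.sin(phase))
        feats.append(np.cos(phase))
    return np.concatenate(feats, axis=1)

# Usage: encode two POI locations into multi-scale features.
feats = space2vec_encode(np.array([[12.5, 48.1], [1300.0, 5200.0]]))
```

A small wavelength captures fine-grained local structure (e.g. dense downtown POIs) while a large one captures coarse regional structure, which is why a fixed single-scale encoder struggles on mixed distributions.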
SE-KGE: A Location-Aware Knowledge Graph Embedding Model for Geographic Question Answering and Spatial Semantic Lifting
Learning knowledge graph (KG) embeddings is an emerging technique for a
variety of downstream tasks such as summarization, link prediction, information
retrieval, and question answering. However, most existing KG embedding models
neglect space and, therefore, do not perform well when applied to (geo)spatial
data and tasks. For the models that do consider space, most rely primarily on
some notion of distance. These models suffer from higher computational
complexity during training while still losing information beyond the relative
distance between entities. In this work, we propose a location-aware KG
embedding model called SE-KGE. It directly encodes spatial information such as
point coordinates or bounding boxes of geographic entities into the KG
embedding space. The resulting model is capable of handling different types of
spatial reasoning. We also construct a geographic knowledge graph as well as a
set of geographic query-answer pairs called DBGeo to evaluate the performance
of SE-KGE in comparison to multiple baselines. Evaluation results show that
SE-KGE outperforms these baselines on the DBGeo dataset for the geographic
logical query answering task. This demonstrates the effectiveness of our
spatially-explicit model and the importance of considering the scale of
different geographic entities. Finally, we introduce a novel downstream task
called spatial semantic lifting which links an arbitrary location in the study
area to entities in the KG via some relations. Evaluation on DBGeo shows that
our model outperforms the baseline by a substantial margin.

Comment: Accepted to Transactions in GIS
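The abstract's key point, encoding both the position and the scale of a geographic entity (a point being a degenerate bounding box), can be illustrated with a toy encoder. This is a hedged sketch, not SE-KGE's actual encoder: the name `encode_spatial_entity`, the frequency ladder, and the way scale is appended are all invented for illustration.

```python
import numpy as np

def encode_spatial_entity(bbox, n_freqs=4):
    """Toy location encoder keeping both position and scale.

    bbox = (xmin, ymin, xmax, ymax); a point is a degenerate box with
    xmin == xmax and ymin == ymax. The center goes through a sinusoidal
    multi-frequency encoding; the extent is appended so that entities at
    the same location but different scales get distinct embeddings.
    """
    xmin, ymin, xmax, ymax = map(float, bbox)
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    scale = max(xmax - xmin, ymax - ymin)
    freqs = 2.0 ** np.arange(n_freqs)  # geometric frequency ladder
    pos = np.concatenate([np.sin(freqs * cx), np.cos(freqs * cx),
                          np.sin(freqs * cy), np.cos(freqs * cy)])
    return np.concatenate([pos, [scale]])

# A point entity and a region entity sharing the same center:
point = encode_spatial_entity((1.0, 2.0, 1.0, 2.0))
region = encode_spatial_entity((0.0, 1.0, 2.0, 3.0))
```

Because the positional part depends only on the center, the two embeddings agree there and differ only in the scale component, which is the distinction a purely distance-based model would lose.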
HyperQuaternionE: A Hyperbolic Embedding Model for Qualitative Spatial and Temporal Reasoning
Qualitative spatial/temporal reasoning (QSR/QTR) plays a key role in research on human cognition, e.g., as it relates to navigation, as well as in work on robotics and artificial intelligence. Although previous work has mainly focused on various spatial and temporal calculi, more recently, representation learning techniques such as embeddings have been applied to reasoning and inference tasks such as query answering and knowledge base completion. These subsymbolic, learnable representations are well suited for handling the noise and efficiency problems that plagued prior work. However, applying embedding techniques to spatial and temporal reasoning has received little attention to date. In this paper, we explore two research questions: (1) How do embedding-based methods perform empirically compared to traditional reasoning methods on QSR/QTR problems? (2) If the embedding-based methods are better, what causes this superiority? To answer these questions, we first propose a hyperbolic embedding model, called HyperQuaternionE, that captures varying properties of relations (such as symmetry and anti-symmetry), learns inverse relations and relation compositions (i.e., composition tables), and models the hierarchical structures over entities induced by transitive relations. We conduct experiments on two synthetic datasets to demonstrate the advantages of our proposed embedding-based method over existing embedding models as well as traditional reasoners with respect to entity inference and relation inference. Additionally, our qualitative analysis reveals that our method is able to learn conceptual neighborhoods implicitly. We conclude that the success of our method is attributable to its ability to model composition tables and learn conceptual neighbors, which are among the core building blocks of QSR/QTR.
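Two of the ingredients the abstract credits to the model, relation composition and relation inversion, can be illustrated with plain quaternion algebra (setting aside the hyperbolic component): composing two relations corresponds to the Hamilton product, and inverting a relation corresponds to conjugating a unit quaternion. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def qmul(q, r):
    """Hamilton product of quaternions q, r = (w, x, y, z).

    Non-commutative, which lets quaternion relation embeddings model
    anti-symmetric relations and ordered compositions.
    """
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    """Conjugate: the inverse of a unit quaternion, modeling relation inversion."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

# A unit quaternion composed with its conjugate yields the identity relation.
q = np.array([1.0, 2.0, 3.0, 4.0])
q = q / np.linalg.norm(q)
identity = qmul(q, qconj(q))
```

Because `qmul(q, r)` and `qmul(r, q)` generally differ, a composition table that distinguishes "r1 then r2" from "r2 then r1" can in principle be represented, which a commutative (e.g. translation-based) embedding cannot.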