Query2GMM: Learning Representation with Gaussian Mixture Model for Reasoning over Knowledge Graphs
Logical query answering over Knowledge Graphs (KGs) is a fundamental yet
complex task. A promising approach to achieve this is to embed queries and
entities jointly into the same embedding space. Research along this line
suggests that using a multi-modal distribution to represent answer entities is
more suitable than a uni-modal distribution, as a single query may contain
multiple disjoint answer subsets due to the compositional nature of multi-hop
queries and the varying latent semantics of relations. However, existing
methods based on multi-modal distribution roughly represent each subset without
capturing its accurate cardinality, or even degenerate into uni-modal
distribution learning during the reasoning process due to the lack of an
effective similarity measure. To better model queries with diversified answers,
we propose Query2GMM for answering logical queries over knowledge graphs. In
Query2GMM, we present the GMM embedding to represent each query using a
univariate Gaussian Mixture Model (GMM). Each subset of a query is encoded by
its cardinality, semantic center and dispersion degree, allowing for precise
representation of multiple subsets. Then we design specific neural networks for
each operator to handle the inherent complexity that comes with multi-modal
distributions while alleviating cascading errors. Last, we define a new
similarity measure to assess the relationships between an entity and a query's
multi-answer subsets, enabling effective multi-modal distribution learning for
reasoning. Comprehensive experimental results show that Query2GMM outperforms
the best competitor by an absolute average of . The source code is
available at \url{https://anonymous.4open.science/r/Query2GMM-C42F}
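The core idea of the GMM embedding can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it assumes a scalar entity embedding and a query represented as a univariate Gaussian mixture whose components each carry a cardinality (mixture weight), a semantic center (mean), and a dispersion degree (standard deviation). The function name and shapes are hypothetical.

```python
import numpy as np

def gmm_query_score(entity, cardinalities, centers, dispersions):
    """Illustrative likelihood of a (scalar) entity embedding under a
    query's Gaussian mixture. Each mixture component models one disjoint
    answer subset: its cardinality acts as the mixture weight, its
    semantic center as the mean, its dispersion as the std deviation."""
    w = np.asarray(cardinalities, dtype=float)
    w = w / w.sum()                         # normalize cardinalities to weights
    mu = np.asarray(centers, dtype=float)
    sigma = np.asarray(dispersions, dtype=float)
    # Gaussian density of the entity under each component
    dens = np.exp(-0.5 * ((entity - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.dot(w, dens))

# A query with two disjoint answer subsets: a large cluster near 0.0
# (cardinality 30) and a smaller one near 5.0 (cardinality 10).
near = gmm_query_score(0.1, [30, 10], [0.0, 5.0], [0.5, 0.5])
far = gmm_query_score(2.5, [30, 10], [0.0, 5.0], [0.5, 0.5])
assert near > far  # entities close to either subset score higher
```

The point of carrying cardinality explicitly is visible here: the two subsets contribute unequally to the score, rather than being treated as equally weighted modes.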
Gravity-Inspired Graph Autoencoders for Directed Link Prediction
Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged
as powerful node embedding methods. In particular, graph AE and VAE were
successfully leveraged to tackle the challenging link prediction problem,
aiming at figuring out whether some pairs of nodes from a graph are connected
by unobserved edges. However, these models focus on undirected graphs and
therefore ignore the potential direction of the link, which is limiting for
numerous real-life applications. In this paper, we extend the graph AE and VAE
frameworks to address link prediction in directed graphs. We present a new
gravity-inspired decoder scheme that can effectively reconstruct directed
graphs from a node embedding. We empirically evaluate our method on three
different directed link prediction tasks, for which standard graph AE and VAE
perform poorly. We achieve competitive results on three real-world graphs,
outperforming several popular baselines.
Comment: ACM International Conference on Information and Knowledge Management (CIKM 2019)
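The asymmetry that makes such a decoder suitable for directed graphs can be sketched as follows. This is an illustrative reconstruction of a gravity-style decoder, not necessarily the paper's exact formulation: each node gets a learned scalar "mass" in addition to its position embedding, and the decoded probability of edge i → j grows with the target's mass and shrinks with distance, so p(i → j) and p(j → i) generally differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gravity_decode(z_i, z_j, mass_j, lam=1.0):
    """Illustrative gravity-inspired decoder for a directed edge i -> j:
    the target node's learned scalar mass attracts, while the (log of the)
    squared embedding distance repels. Asymmetric because only the
    target's mass appears."""
    dist_sq = np.sum((z_i - z_j) ** 2)
    return float(sigmoid(mass_j - lam * np.log(dist_sq)))

z_a = np.array([0.0, 0.0])
z_b = np.array([1.0, 0.0])
# Same pair of positions, different target masses: the decoder assigns
# a high probability to a -> b (massive target) and a low one to b -> a.
p_ab = gravity_decode(z_a, z_b, mass_j=2.0)
p_ba = gravity_decode(z_b, z_a, mass_j=-2.0)
assert p_ab > 0.5 > p_ba
```

A symmetric inner-product decoder could never separate these two directions, which is exactly the limitation the paper targets.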
New Embedded Representations and Evaluation Protocols for Inferring Transitive Relations
Beyond word embeddings, continuous representations of knowledge graph (KG)
components, such as entities, types and relations, are widely used for entity
mention disambiguation, relation inference and deep question answering. Great
strides have been made in modeling general, asymmetric or antisymmetric KG
relations using Gaussian, holographic, and complex embeddings. None of these
directly enforce transitivity inherent in the is-instance-of and is-subtype-of
relations. A recent proposal, called order embedding (OE), demands that the
vector representing a subtype elementwise dominates the vector representing a
supertype. However, the manner in which such constraints are asserted and
evaluated has some limitations. In this short research note, we make three
contributions specific to representing and inferring transitive relations.
First, we propose and justify a significant improvement to the OE loss
objective. Second, we propose a new representation of types as
hyper-rectangular regions that generalize and improve on OE. Third, we show
that some current protocols to evaluate transitive relation inference can be
misleading, and offer a sound alternative. Rather than use black-box deep
learning modules off-the-shelf, we develop our training networks using
elementary geometric considerations.
Comment: Accepted at SIGIR 201
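The elementwise-dominance constraint described above can be made concrete with a small sketch. This is an illustrative order-embedding energy in the style the abstract describes, with hypothetical vectors; the paper's actual loss and the hyper-rectangle parameterization will differ in detail.

```python
import numpy as np

def oe_energy(sub, sup):
    """Order-embedding-style energy: zero iff the subtype vector
    elementwise dominates the supertype vector, positive otherwise.
    Penalizes each coordinate where the supertype exceeds the subtype."""
    gap = np.maximum(0.0, np.asarray(sup, float) - np.asarray(sub, float))
    return float(np.sum(gap ** 2))

def box_contains(inner_lo, inner_hi, outer_lo, outer_hi):
    """Hyper-rectangle view of types: a subtype's box must lie entirely
    inside its supertype's box, which makes containment transitive."""
    return bool(np.all(inner_lo >= outer_lo) and np.all(inner_hi <= outer_hi))

# 'cat' is-subtype-of 'animal': the cat vector dominates elementwise,
# so the ordered pair has zero energy; the reversed pair does not.
animal = np.array([1.0, 2.0])
cat = np.array([1.5, 2.5])
assert oe_energy(cat, animal) == 0.0
assert oe_energy(animal, cat) > 0.0
```

Transitivity falls out of the geometry: if cat dominates animal and animal dominates organism coordinate-wise, cat dominates organism, with no extra constraint needed.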