Low-Dimensional Hyperbolic Knowledge Graph Embeddings
Knowledge graph (KG) embeddings learn low-dimensional representations of
entities and relations to predict missing facts. KGs often exhibit hierarchical
and logical patterns which must be preserved in the embedding space. For
hierarchical data, hyperbolic embedding methods have shown promise for
high-fidelity and parsimonious representations. However, existing hyperbolic
embedding methods do not account for the rich logical patterns in KGs. In this
work, we introduce a class of hyperbolic KG embedding models that
simultaneously capture hierarchical and logical patterns. Our approach combines
hyperbolic reflections and rotations with attention to model complex relational
patterns. Experimental results on standard KG benchmarks show that our method
improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in
mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that
different geometric transformations capture different types of relations while
attention-based transformations generalize to multiple relations. In high
dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR
and 57.7% on YAGO3-10.
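The geometric machinery behind such models can be illustrated with a minimal NumPy sketch: a Poincaré-ball distance and a relation-specific Givens rotation applied to the head entity, with the negative hyperbolic distance to the tail as the triple score. This is a simplified illustration of the general idea, not the paper's exact model (which also uses reflections and attention); all function names here are ours.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    # Geodesic distance in the Poincare ball model (inputs must have norm < 1).
    sq = np.sum((x - y) ** 2)
    nx, ny = np.sum(x ** 2), np.sum(y ** 2)
    arg = 1.0 + 2.0 * sq / ((1.0 - nx) * (1.0 - ny) + eps)
    return float(np.arccosh(arg))

def givens_rotation(v, thetas):
    # Rotate consecutive coordinate pairs of v by relation-specific angles;
    # rotations are norm-preserving, so the point stays inside the ball.
    out = v.copy()
    for i, t in enumerate(thetas):
        a, b = v[2 * i], v[2 * i + 1]
        out[2 * i] = np.cos(t) * a - np.sin(t) * b
        out[2 * i + 1] = np.sin(t) * a + np.cos(t) * b
    return out

def score(head, rel_thetas, tail):
    # Triple plausibility: negative hyperbolic distance after rotating the head.
    return -poincare_distance(givens_rotation(head, rel_thetas), tail)
```

Because rotations preserve norms, the transformed head remains a valid point of the ball, which is what makes this family of transformations convenient in hyperbolic space.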
Improving Heterogeneous Graph Learning with Weighted Mixed-Curvature Product Manifold
In graph representation learning, it is important that the complex geometric
structure of the input graph, e.g. hidden relations among nodes, is well
captured in embedding space. However, standard Euclidean embedding spaces have
a limited capacity in representing graphs of varying structures. A promising
candidate for the faithful embedding of data with varying structure is product
manifolds of component spaces of different geometries (spherical, hyperbolic,
or Euclidean). In this paper, we take a closer look at the structure of product
manifold embedding spaces and argue that each component space in a product
contributes differently to expressing structures in the input graph, hence
should be weighted accordingly. This contrasts with previous works, which
treat the roles of all component spaces equally. We then propose
WEIGHTED-PM, a data-driven method for learning embedding of heterogeneous
graphs in weighted product manifolds. Our method utilizes the topological
information of the input graph to automatically determine the weight of each
component in product spaces. Extensive experiments on synthetic and real-world
graph datasets demonstrate that WEIGHTED-PM is capable of learning better graph
representations with lower geometric distortion from input data, and performs
better on multiple downstream tasks, such as word similarity learning, top-k
recommendation, and knowledge graph embedding.
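A weighted product-manifold distance of the kind described above can be sketched as a weighted combination of per-component geodesic distances (spherical, hyperbolic, Euclidean). The fixed weights below are a plain illustration; WEIGHTED-PM learns such weights from the input graph's topology.

```python
import numpy as np

def euclidean_d(x, y):
    return float(np.linalg.norm(x - y))

def spherical_d(x, y):
    # Great-circle distance between unit vectors.
    return float(np.arccos(np.clip(np.dot(x, y), -1.0, 1.0)))

def hyperbolic_d(x, y, eps=1e-9):
    # Poincare-ball geodesic distance (points must have norm < 1).
    sq = np.sum((x - y) ** 2)
    denom = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2)) + eps
    return float(np.arccosh(1 + 2 * sq / denom))

def weighted_product_distance(xs, ys, metrics, weights):
    # Per-component distances combined as a weighted l2 norm; with equal
    # weights this reduces to the usual product-manifold distance.
    ds = [m(x, y) for m, x, y in zip(metrics, xs, ys)]
    return float(np.sqrt(sum(w * d ** 2 for w, d in zip(weights, ds))))
```

Up-weighting, say, the hyperbolic component then lets hierarchical structure dominate the embedding distance while other components still capture cyclic or grid-like structure.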
From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer
Knowledge graph completion aims to address the problem of extending a KG with
missing triples. In this paper, we present GenKGC, an approach that converts
knowledge graph completion into a sequence-to-sequence generation task with a
pre-trained language model. We further introduce relation-guided demonstration
and entity-aware hierarchical decoding for better representation learning and
fast inference. Experimental results on three datasets show that our approach
achieves performance better than or comparable to baselines, with faster
inference than previous methods based on pre-trained language models. We also
release AliopenKG500, a new large-scale Chinese knowledge graph dataset, for
research purposes. Code and datasets are available at
https://github.com/zjunlp/PromptKG/tree/main/GenKGC.
Comment: Accepted by WWW 2022 Poster
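The discrimination-to-generation reformulation can be illustrated by how a query triple might be serialized for a sequence-to-sequence model, with relation-guided demonstrations prepended. The template below is hypothetical, chosen only to show the shape of the input; it is not GenKGC's actual prompt format.

```python
def build_kgc_input(head, relation, demonstrations):
    # Illustrative prompt: demonstration triples sharing the query relation
    # are prepended, then the query triple is given with the tail masked
    # for the decoder to generate (template is an assumption, not GenKGC's).
    demos = " ".join(f"{h} {relation} {t}." for h, t in demonstrations)
    return f"{demos} {head} {relation} [MASK]"
```

The model is then trained to generate the missing tail entity as output text, turning link prediction from scoring all candidates into decoding one sequence.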
From Wide to Deep: Dimension Lifting Network for Parameter-efficient Knowledge Graph Embedding
Knowledge graph embedding (KGE) that maps entities and relations into vector
representations is essential for downstream applications. Conventional KGE
methods require high-dimensional representations to learn the complex structure
of knowledge graph, but lead to oversized model parameters. Recent advances
reduce parameters by low-dimensional entity representations, while developing
techniques (e.g., knowledge distillation or reinvented representation forms) to
compensate for reduced dimension. However, such operations introduce
complicated computations and model designs that may not benefit large knowledge
graphs. To seek a simple strategy to improve the parameter efficiency of
conventional KGE models, we take inspiration from the observation that deeper
neural networks
require exponentially fewer parameters to achieve expressiveness comparable to
wider networks for compositional structures. We view all entity representations
as a single-layer embedding network; conventional KGE methods that adopt
high-dimensional entity representations are then equivalent to widening this
embedding network to
gain expressiveness. To achieve parameter efficiency, we instead propose a
deeper embedding network for entity representations, i.e., a narrow entity
embedding layer plus a multi-layer dimension lifting network (LiftNet).
Experiments on three public datasets show that by integrating LiftNet, four
conventional KGE methods with 16-dimensional representations achieve link
prediction accuracy comparable to the original models with 512-dimensional
representations, saving 68.4% to 96.9% of the parameters.
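The wide-versus-deep trade-off can be sketched as a narrow embedding table followed by a small lifting MLP shared by all entities, so the per-entity cost stays at the narrow width. The dimensions and the two-layer structure below are illustrative assumptions, not LiftNet's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lifted_embedding(num_entities, narrow_dim, target_dim, hidden_dim):
    # A narrow embedding table plus a small two-layer "lifting" network
    # shared across all entities (sizes here are illustrative).
    table = rng.normal(size=(num_entities, narrow_dim))
    w1 = rng.normal(size=(narrow_dim, hidden_dim))
    w2 = rng.normal(size=(hidden_dim, target_dim))

    def embed(idx):
        hidden = np.tanh(table[idx] @ w1)  # lift to the hidden width
        return hidden @ w2                 # lift to the target dimension

    n_params = table.size + w1.size + w2.size
    return embed, n_params

embed, lifted_params = make_lifted_embedding(10_000, 16, 512, 64)
wide_params = 10_000 * 512  # a conventional wide embedding table
```

Since the lifting network's parameter count is independent of the number of entities, the savings grow with the size of the knowledge graph.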
Modeling Fine-grained Information via Knowledge-aware Hierarchical Graph for Zero-shot Entity Retrieval
Zero-shot entity retrieval, aiming to link mentions to candidate entities
under the zero-shot setting, is vital for many tasks in Natural Language
Processing. Most existing methods represent mentions/entities via the sentence
embeddings of corresponding context from the Pre-trained Language Model.
However, we argue that such coarse-grained sentence embeddings cannot fully
model the mentions/entities, especially when the attention scores towards
mentions/entities are relatively low. In this work, we propose GER, a
\textbf{G}raph enhanced \textbf{E}ntity \textbf{R}etrieval framework, to
capture more fine-grained information as complementary to sentence embeddings.
We extract the knowledge units from the corresponding context and then
construct a mention/entity centralized graph. Hence, we can learn the
fine-grained information about mention/entity by aggregating information from
these knowledge units. To avoid the graph information bottleneck for the
central mention/entity node, we construct a hierarchical graph and design a
novel Hierarchical Graph Attention Network~(HGAN). Experimental results on
popular benchmarks demonstrate that our proposed GER framework performs better
than previous state-of-the-art models. The code is available at
https://github.com/wutaiqiang/GER-WSDM2023.
Comment: 9 pages, 5 figures
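The core aggregation step, pooling knowledge-unit vectors into the central mention/entity node, can be sketched with a single attention head. This is a generic graph-attention illustration, not the paper's full hierarchical HGAN.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_knowledge_units(center, units):
    # One attention head: score each knowledge-unit vector against the
    # central mention/entity vector, then take the attention-weighted sum,
    # so units more similar to the center contribute more.
    alpha = softmax(units @ center)
    return alpha @ units
```

Stacking such aggregation over a hierarchy of intermediate nodes (as GER does) keeps any single node from becoming an information bottleneck.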
HyperFormer: Enhancing Entity and Relation Interaction for Hyper-relational Knowledge Graph Completion
Hyper-relational knowledge graphs (HKGs) extend standard knowledge graphs by
associating attribute-value qualifiers with triples, which effectively represent
additional fine-grained information about their associated triples.
Hyper-relational knowledge graph completion (HKGC) aims at inferring unknown
triples while taking their qualifiers into account. Most existing approaches to
HKGC exploit a global-level graph structure to encode hyper-relational knowledge
into the graph convolution message passing process. However, the addition of
multi-hop information might bring noise into the triple prediction process. To
address this problem, we propose HyperFormer, a model that considers local-level
sequential information and encodes the content of the entities, relations, and
qualifiers of a triple. More precisely, HyperFormer is composed of three
modules: an entity neighbor aggregator module that integrates information from
an entity's neighbors to capture different perspectives of it; a relation
qualifier aggregator module that integrates hyper-relational knowledge into the
corresponding relation to refine the representation of relational content; and a
convolution-based bidirectional interaction module capturing pairwise
bidirectional interactions of entity-relation, entity-qualifier, and
relation-qualifier. Furthermore, we introduce a Mixture-of-Experts strategy into
the feed-forward layers of HyperFormer to strengthen its representation
capabilities while reducing the number of model parameters and the amount of
computation. Extensive experiments on three well-known datasets under four
different conditions demonstrate HyperFormer's effectiveness.
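The Mixture-of-Experts feed-forward idea can be sketched with top-1 routing, where each input activates only one expert's parameters, which is how such layers add capacity without a proportional increase in per-token computation. The gating mechanism and expert shapes below are illustrative, not HyperFormer's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_moe_ffn(dim, hidden, num_experts):
    # Top-1 routed mixture-of-experts feed-forward layer: a linear gate
    # scores the experts, and each input is processed by only the single
    # highest-scoring expert, so one expert's weights are used per input.
    gate = rng.normal(size=(dim, num_experts))
    experts = [(rng.normal(size=(dim, hidden)), rng.normal(size=(hidden, dim)))
               for _ in range(num_experts)]

    def forward(x):
        k = int(np.argmax(x @ gate))         # pick one expert
        w1, w2 = experts[k]
        return np.maximum(x @ w1, 0.0) @ w2  # ReLU feed-forward block

    return forward
```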
Logic Diffusion for Knowledge Graph Reasoning
Most recent works focus on answering first-order logical queries to explore
knowledge graph reasoning via multi-hop logic predictions. However,
existing reasoning models are limited by the circumscribed logical paradigms of
training samples, which leads to a weak generalization of unseen logic. To
address these issues, we propose a plug-in module called Logic Diffusion (LoD)
to discover unseen queries from the surroundings and achieve a dynamic
equilibrium
between different kinds of patterns. The basic idea of LoD is relation
diffusion and sub-logic sampling via random walks, combined with a special
training mechanism called gradient adaption. In addition, LoD is accompanied by
a novel loss function to achieve robust logical diffusion when facing
noisy data in training or testing sets. Extensive experiments on four public
datasets demonstrate that mainstream knowledge graph reasoning models equipped
with LoD outperform the state of the art. Moreover, our ablation study confirms
the general effectiveness of LoD on noise-rich knowledge graphs.
Comment: 10 pages, 6 figures
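Sub-logic sampling by random walks, the core of relation diffusion, can be sketched as drawing a relation path from an anchor entity; each sampled path corresponds to a chain-shaped logical query. The adjacency-list graph encoding below is an assumption made for illustration.

```python
import random

def random_walk_query(graph, start, length, seed=0):
    # Sample a chain-shaped sub-logic query (a relation path) by walking
    # the KG from `start`; `graph` maps entity -> list of (relation, entity)
    # edges. The walk stops early at entities with no outgoing edges.
    rng = random.Random(seed)
    path, node = [], start
    for _ in range(length):
        edges = graph.get(node, [])
        if not edges:
            break
        rel, node = rng.choice(edges)
        path.append(rel)
    return path, node
```

Sampling many such paths around the training queries exposes the reasoner to logical patterns absent from the original training set, which is the diffusion effect the paper exploits.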