18 research outputs found
A Trio Neural Model for Dynamic Entity Relatedness Ranking
Measuring entity relatedness is a fundamental task for many natural language
processing and information retrieval applications. Prior work often studies
entity relatedness in static settings and in an unsupervised manner. However,
real-world entities are often involved in many different relationships;
consequently, entity relations are highly dynamic over time. In this work, we
propose a neural network-based approach for dynamic entity relatedness,
leveraging the collective attention as supervision. Our model is capable of
learning rich and different entity representations in a joint framework.
Through extensive experiments on large-scale datasets, we demonstrate that our
method achieves better results than competitive baselines.
Comment: In Proceedings of CoNLL 201
Time-aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph Question Answering
Knowledge graphs (KGs) have received increasing attention due to their wide
applications in natural language processing. However, their use for temporal
question answering (QA) has not been well explored. Most existing methods
are built on pre-trained language models, which may not be able
to learn \emph{temporal-specific} representations of entities for the
temporal KGQA task. To alleviate this problem, we propose a novel
\textbf{T}ime-aware \textbf{M}ultiway \textbf{A}daptive (\textbf{TMA}) fusion
network, inspired by the step-by-step reasoning behavior of humans. For each
given question, TMA first extracts the relevant concepts from the KG, and then
feeds them into a multiway adaptive module to produce a
\emph{temporal-specific} representation of the question. This representation
can be incorporated with the pre-trained KG embedding to generate the final
prediction. Empirical results verify that the proposed model achieves better
performance than the state-of-the-art models on the benchmark dataset. Notably,
on the complex questions of the CronQuestions dataset, the Hits@1 and Hits@10
results of TMA improve by an absolute 24\% and 10\% over the best-performing
baseline. We also show that TMA's adaptive fusion mechanism provides
interpretability through analysis of the proportion of information in question
representations.
Comment: ICASSP 202
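The pipeline the abstract describes (retrieve relevant KG concepts, adaptively fuse them with the question representation) can be sketched in a minimal form. The attention-plus-gating formulation below is an assumption for illustration, not the paper's exact TMA architecture; all function and variable names are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fusion(question_vec, concept_vecs):
    """Hedged sketch of adaptive fusion: weight the KG concepts retrieved
    for a question by their relevance, summarize them, and gate the result
    into the question representation element-wise. The gating form here is
    an assumption; the paper's multiway adaptive module may differ."""
    scores = concept_vecs @ question_vec           # relevance of each concept
    weights = softmax(scores)                      # attention over concepts
    context = weights @ concept_vecs               # weighted concept summary
    gate = 1.0 / (1.0 + np.exp(-(question_vec * context)))  # sigmoid gate
    return gate * question_vec + (1.0 - gate) * context
```

The gate values also give a per-dimension view of how much information comes from the question versus the retrieved concepts, which is one way such a model can offer the interpretability the abstract mentions.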
Diachronic Embedding for Temporal Knowledge Graph Completion
Knowledge graphs (KGs) typically contain temporal facts indicating
relationships among entities at different times. Due to their incompleteness,
several approaches have been proposed to infer new facts for a KG based on the
existing ones, a problem known as KG completion. KG embedding approaches have
proved effective for KG completion; however, they have mostly been developed
for static KGs. Developing temporal KG embedding models is an increasingly
important problem. In this paper, we build novel models for temporal KG
completion through equipping static models with a diachronic entity embedding
function which provides the characteristics of entities at any point in time.
This is in contrast to the existing temporal KG embedding approaches where only
static entity features are provided. The proposed embedding function is
model-agnostic and can be potentially combined with any static model. We prove
that combining it with SimplE, a recent model for static KG embedding, results
in a fully expressive model for temporal KG completion. Our experiments
demonstrate that our proposal outperforms existing baselines.
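The idea of a diachronic entity embedding function (static features plus features that vary with time) can be sketched as follows. The sinusoidal form of the temporal part and the split ratio are assumptions chosen for illustration; the abstract only specifies that the function returns entity characteristics at any point in time.

```python
import numpy as np

def diachronic_embedding(static, amp, freq, phase, t, gamma=0.5):
    """Hedged sketch of a diachronic entity embedding.

    A fraction gamma of the embedding dimensions vary with time through
    learned sinusoidal temporal features (amp, freq, phase); the remaining
    dimensions keep their static values. All parameter names are
    hypothetical stand-ins for learned model parameters."""
    d = static.shape[0]
    k = int(gamma * d)                  # number of time-varying dimensions
    z = static.copy()
    z[:k] = amp[:k] * np.sin(freq[:k] * t + phase[:k])
    return z
```

Because the function is evaluated per timestamp, it can replace the static entity vector inside any score function (the paper pairs it with SimplE), which is what makes the construction model-agnostic.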