Knowledge Base Completion: Baselines Strike Back
Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that almost all models published on the FB15k dataset can be outperformed in accuracy by an appropriately tuned baseline - our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes as opposed to hyper-parameter tuning or different training objectives. This should prompt future research to reconsider how the performance of models is evaluated and reported.
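For context, DistMult scores a triple (h, r, t) with a trilinear product of the head, relation, and tail embeddings. Below is a minimal sketch of that scoring function, not the authors' tuned reimplementation; the toy entities, relation, embedding dimension, and random initialization are all illustrative assumptions.

```python
import numpy as np

# Minimal DistMult scoring sketch. Entity/relation names, dimension,
# and random embeddings are hypothetical placeholders.
rng = np.random.default_rng(0)
dim = 50

entity_emb = {name: rng.normal(size=dim) for name in ["paris", "france"]}
relation_emb = {name: rng.normal(size=dim) for name in ["capital_of"]}

def distmult_score(head: str, relation: str, tail: str) -> float:
    """Trilinear DistMult score: sum_i e_h[i] * w_r[i] * e_t[i]."""
    e_h = entity_emb[head]
    w_r = relation_emb[relation]
    e_t = entity_emb[tail]
    return float(np.sum(e_h * w_r * e_t))

print(distmult_score("paris", "capital_of", "france"))
```

The point of the paper is that a simple scorer like this, once its hyper-parameters and training objective are tuned carefully, already matches or beats more elaborate architectures.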
Knowledge Transfer for Out-of-Knowledge-Base Entities: A Graph Neural Network Approach
Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting, in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset. The code and dataset are available at https://github.com/takuo-h/GNN-for-OOKB
Comment: This paper has been accepted by IJCAI-17
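The core idea, as the abstract describes it, is to derive an OOKB entity's embedding from the auxiliary triples that link it to entities already in the knowledge base. The following is a minimal neighbor-pooling sketch of that idea; the relation-specific linear transform, the mean pooling, and all names are assumptions for illustration, not the paper's exact propagation model.

```python
import numpy as np

# Sketch: embed an unseen (OOKB) entity by aggregating messages from its
# known neighbors, given auxiliary triples available only at test time.
# Entities, relations, and the mean-pooling choice are hypothetical.
rng = np.random.default_rng(0)
dim = 50

entity_emb = {name: rng.normal(size=dim) for name in ["tokyo", "japan"]}
relation_transform = {"located_in": rng.normal(size=(dim, dim))}

def embed_ookb_entity(aux_triples):
    """aux_triples: (ookb_head, relation, known_tail) facts given at test time."""
    messages = []
    for _head, rel, tail in aux_triples:
        # Transform each known neighbor's embedding by its relation matrix.
        messages.append(relation_transform[rel] @ entity_emb[tail])
    return np.mean(messages, axis=0)  # pool neighbor messages into one vector

# A new entity "shibuya" appears only in the auxiliary knowledge.
e_new = embed_ookb_entity([("shibuya", "located_in", "tokyo"),
                           ("shibuya", "located_in", "japan")])
print(e_new.shape)  # (50,)
```

Once the pooled vector is computed, the new entity can be scored against queries by any standard embedding-based KBC scorer, without retraining the model.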