Knowledge bases of real-world facts about entities and their relationships
are useful resources for a variety of natural language processing tasks.
However, because knowledge bases are typically incomplete, it is useful to be
able to perform link prediction or knowledge base completion, i.e., predict
whether a relationship not in the knowledge base is likely to be true. This
paper combines insights from several previous link prediction models into a
new embedding model, STransE, which represents each entity as a low-dimensional
vector and each relation by two matrices and a translation vector. STransE is
a simple combination of the SE and TransE models, but it obtains better link
prediction performance on two benchmark datasets than previous embedding
models. Thus, STransE can serve as a new baseline for the more complex models
in the link prediction task.

Comment: V1: In Proceedings of the 2016 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language
Technologies, NAACL HLT 2016. V2: Corrected citation to (Krompaß et al.,
2015). V3: A revised version of our NAACL-HLT 2016 paper with additional
experimental results and latest related work.
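The abstract says each relation is represented by two matrices and a translation vector, combining SE's relation-specific projections with TransE's translation. A minimal NumPy sketch of a scoring function of that shape is below; the exact form f(h, t) = ||W1·h + r − W2·t|| and all names here are assumptions based on the model description, not code from the paper.

```python
import numpy as np

def stranse_score(h, t, W_r1, W_r2, r, norm=1):
    """Plausibility score for a triple (h, relation, t).

    h, t   : entity embedding vectors
    W_r1   : relation-specific projection matrix for the head entity
    W_r2   : relation-specific projection matrix for the tail entity
    r      : relation translation vector
    Lower scores indicate more plausible triples.
    """
    # Project head and tail into a relation-specific space, then
    # measure how far the translated head is from the tail.
    diff = W_r1 @ h + r - W_r2 @ t
    return np.linalg.norm(diff, ord=norm)

# Toy example with random parameters (embedding dimension k = 4).
rng = np.random.default_rng(0)
k = 4
h, t, r = rng.normal(size=k), rng.normal(size=k), rng.normal(size=k)
W_r1, W_r2 = rng.normal(size=(k, k)), rng.normal(size=(k, k))
score = stranse_score(h, t, W_r1, W_r2, r)
```

With both projection matrices fixed to the identity, this reduces to a TransE-style score ||h + r − t||, which is one way to see STransE as a combination of SE and TransE.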