Large language model (LLM) based knowledge graph completion (KGC) aims to
predict missing triples in knowledge graphs (KGs) with LLMs, enriching KGs
into better web infrastructure that benefits many web-based automated
services. However, research on LLM-based KGC remains limited and fails to
effectively exploit the inference capabilities of LLMs: it ignores the
important structural information in KGs and thus prevents LLMs from acquiring
accurate factual knowledge. In this paper, we discuss how to incorporate
helpful KG structural information into LLMs, aiming to achieve
structure-aware reasoning in the LLMs. We first transfer the existing LLM
paradigms to the structure-aware setting and further propose a knowledge
prefix adapter (KoPA) to fulfill this goal. KoPA employs structural embedding
pre-training to capture the structural information of entities and relations
in the KG. KoPA then informs the LLMs of this structural information through
a knowledge prefix adapter, which projects the structural embeddings into the
textual token space and yields virtual knowledge tokens that are prepended to
the input prompt. We conduct comprehensive experiments on these
structure-aware LLM-based KGC methods and provide an in-depth analysis of how
introducing structural information improves the factual knowledge reasoning
ability of LLMs. Our code is released at
https://github.com/zjukg/KoPA.
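
To make the adapter mechanism concrete, the following is a minimal sketch, assuming pre-trained structural embeddings for the head entity, relation, and tail entity are already available. The class name `KnowledgePrefixAdapter` and parameters such as `struct_dim`, `llm_dim`, and `prefix_len` are illustrative assumptions for exposition, not the released implementation.

```python
# Minimal sketch: project structural embeddings of a triple into the LLM's
# token embedding space and prepend the result as virtual prefix tokens.
# All names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn


class KnowledgePrefixAdapter(nn.Module):
    """Maps (head, relation, tail) structural embeddings to virtual prefix tokens."""

    def __init__(self, struct_dim: int, llm_dim: int, prefix_len: int = 3):
        super().__init__()
        self.prefix_len = prefix_len
        self.llm_dim = llm_dim
        # Linear projection from the concatenated structural space
        # into `prefix_len` vectors in the LLM token embedding space.
        self.proj = nn.Linear(3 * struct_dim, prefix_len * llm_dim)

    def forward(self, head_emb: torch.Tensor, rel_emb: torch.Tensor,
                tail_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate the (typically frozen) structural embeddings of the triple.
        struct = torch.cat([head_emb, rel_emb, tail_emb], dim=-1)  # (B, 3*struct_dim)
        prefix = self.proj(struct)                                  # (B, prefix_len*llm_dim)
        return prefix.view(-1, self.prefix_len, self.llm_dim)       # (B, prefix_len, llm_dim)


# Usage sketch: prepend the virtual knowledge tokens to the prompt embeddings
# before feeding them to the LLM (e.g., via an `inputs_embeds`-style interface).
adapter = KnowledgePrefixAdapter(struct_dim=512, llm_dim=4096, prefix_len=3)
h, r, t = (torch.randn(1, 512) for _ in range(3))   # pre-trained structural embeddings
prefix_tokens = adapter(h, r, t)                     # (1, 3, 4096)
prompt_embeds = torch.randn(1, 64, 4096)             # token embeddings of the textual prompt
inputs_embeds = torch.cat([prefix_tokens, prompt_embeds], dim=1)
```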