LLMs usually exhibit limitations in incorporating new knowledge, are prone to generating hallucinations, and lack transparency in their decision-making. In this paper, we explore prompting LLMs with knowledge graphs (KGs) as a remedy that engages LLMs with up-to-date knowledge and elicits their reasoning pathways. Specifically, we build a prompting pipeline that endows LLMs with the capability of comprehending KG inputs and inferring with a combination of implicit knowledge and retrieved external knowledge. In addition, we investigate eliciting the mind map on which
LLMs perform reasoning and generate answers. We find that the produced mind map reveals the reasoning pathways of LLMs grounded in the ontology of knowledge, thereby opening prospects for probing and gauging LLM inference in production. Experiments on three question answering datasets
also show that MindMap prompting leads to a striking empirical gain. For instance, prompting GPT-3.5 with MindMap consistently outperforms GPT-4. We also demonstrate that, with structured facts
retrieved from KGs, MindMap can outperform a series of prompting-with-document-retrieval methods, benefiting from the more accurate, concise, and comprehensive knowledge in KGs.

Comment: 7 pages, 8 figures, 9 tables
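As a rough illustration (not part of the paper, and not the authors' actual implementation), the described pipeline could be sketched as follows: retrieve facts from a knowledge graph, serialize them as evidence paths, and prompt the LLM to reason over both its implicit knowledge and the retrieved evidence. All names here (`TRIPLES`, `retrieve_facts`, `build_prompt`) are hypothetical.

```python
# Hypothetical sketch of KG-augmented prompting in the spirit of the
# abstract. The toy KG, retrieval rule, and prompt template are all
# illustrative assumptions, not the paper's method.

# A toy KG as (head, relation, tail) triples.
TRIPLES = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

def retrieve_facts(question_entities):
    """Return triples whose head or tail matches a question entity."""
    return [
        (h, r, t)
        for (h, r, t) in TRIPLES
        if h in question_entities or t in question_entities
    ]

def build_prompt(question, facts):
    """Render retrieved triples as evidence lines, plus an instruction
    to reason jointly over implicit and retrieved knowledge and to
    expose the reasoning pathway (the 'mind map')."""
    evidence = "\n".join(f"- {h} --{r}--> {t}" for (h, r, t) in facts)
    return (
        "Answer using both your own knowledge and the evidence below.\n"
        f"Evidence:\n{evidence}\n"
        f"Question: {question}\n"
        "Show the reasoning pathway (mind map) you used."
    )

facts = retrieve_facts({"aspirin"})
prompt = build_prompt("What does aspirin interact with?", facts)
print(prompt)
```

The resulting prompt string would then be sent to the LLM; the key design point the abstract emphasizes is that the model is asked not only for an answer but also for the reasoning pathway grounded in the retrieved KG structure.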