Recommendation systems aim to provide users with relevant suggestions, but
often lack interpretability and fail to capture higher-level semantic
relationships between user behaviors and profiles. In this paper, we propose a
novel approach that leverages large language models (LLMs) to construct
personalized reasoning graphs. These graphs link a user's profile and
behavioral sequences through causal and logical inferences, representing the
user's interests in an interpretable way. Our approach, LLM reasoning graphs
(LLMRG), has four components: chained graph reasoning, divergent extension,
self-verification and scoring, and knowledge base self-improvement. The
resulting reasoning graph is encoded with graph neural networks, and the
encoding serves as an additional input to conventional recommender systems,
requiring no extra user or item information. Our approach demonstrates how LLMs
can enable more logical and interpretable recommender systems through
personalized reasoning graphs. LLMRG allows recommendations to benefit from
both engineered recommendation systems and LLM-derived reasoning graphs. We
demonstrate the effectiveness of LLMRG on benchmarks and in real-world
scenarios, showing that it enhances base recommendation models.

Comment: 12 pages, 6 figures
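The fusion idea in the abstract, encoding a per-user reasoning graph and feeding it to a conventional recommender as an extra input, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node features, the single mean-aggregation message-passing step, and the concatenation with a base user embedding are all assumptions standing in for the actual GNN encoder and recommender.

```python
# Hypothetical sketch: encode a toy reasoning graph with one round of
# mean-aggregation message passing, mean-pool it into a graph embedding,
# and concatenate that with a base recommender's user embedding.
# All names, features, and dimensions are illustrative.

def message_pass(features, edges):
    """One round of mean-aggregation message passing.

    features: {node: [float, ...]} node feature vectors
    edges: list of (src, dst) directed edges in the reasoning graph
    """
    dim = len(next(iter(features.values())))
    updated = {}
    for node, feat in features.items():
        neighbors = [features[s] for s, d in edges if d == node]
        if not neighbors:
            updated[node] = list(feat)
            continue
        mean = [sum(v[i] for v in neighbors) / len(neighbors)
                for i in range(dim)]
        # Average self features with aggregated neighbor features.
        updated[node] = [(f + m) / 2 for f, m in zip(feat, mean)]
    return updated

def graph_embedding(features):
    """Mean-pool node features into a single graph-level vector."""
    dim = len(next(iter(features.values())))
    n = len(features)
    return [sum(f[i] for f in features.values()) / n for i in range(dim)]

# Toy reasoning graph: profile -> inferred interest -> behavior
feats = {
    "profile":  [1.0, 0.0],
    "interest": [0.0, 1.0],
    "behavior": [1.0, 1.0],
}
edges = [("profile", "interest"), ("interest", "behavior")]

g_emb = graph_embedding(message_pass(feats, edges))
base_user_emb = [0.3, 0.7]      # from a conventional recommender
fused = base_user_emb + g_emb   # extra input; no new user/item data
```

In this sketch the reasoning graph contributes only an extra embedding, which matches the abstract's claim that the base recommender needs no additional user or item information.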