Learning Collaborative Agents with Rule Guidance for Knowledge Graph Reasoning
Walk-based models have shown their advantages in knowledge graph (KG)
reasoning by achieving decent performance while providing interpretable
decisions. However, the sparse reward signals offered by the KG during
traversal are often insufficient to guide a sophisticated walk-based
reinforcement learning (RL) model. An alternative approach is to use traditional
symbolic methods (e.g., rule induction), which achieve good performance but can
be hard to generalize due to the limitations of symbolic representation. In this
paper, we propose RuleGuider, which leverages high-quality rules generated by
symbolic methods to provide reward supervision for walk-based agents.
Experiments on benchmark datasets show that RuleGuider improves the performance
of walk-based models without losing interpretability.

Comment: EMNLP 202
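The core idea of the abstract, using mined rules to densify the sparse reward seen by a walk-based agent, can be sketched in a few lines. This is an illustrative reward-shaping sketch, not the paper's actual formulation: the function name, the rule representation (relation paths mapped to confidence scores), and the bonus weight are all assumptions made for the example.

```python
# Hypothetical sketch of rule-guided reward shaping for a walk-based KG agent.
# The agent's sparse terminal "hit" reward is augmented with a bonus whenever
# the relation path it walked matches a high-confidence mined rule.
# All names, structures, and weights here are illustrative assumptions.

def shaped_reward(path_relations, reached_target, rules, rule_bonus=0.5):
    """Combine the KG's sparse terminal reward with rule-based supervision.

    path_relations: tuple of relations the agent traversed
    reached_target: whether the walk ended at the correct answer entity
    rules: dict mapping rule-body relation paths to confidence scores
    """
    hit_reward = 1.0 if reached_target else 0.0
    # Even when the sparse KG signal gives no feedback (a missed target),
    # following a mined rule still yields a shaped reward.
    rule_reward = rules.get(tuple(path_relations), 0.0)
    return hit_reward + rule_bonus * rule_reward

# Example: a mined rule born_in(x, y) <- lives_in(x, z), located_in(z, y)
# with confidence 0.9 rewards the agent for walking that relation path.
rules = {("lives_in", "located_in"): 0.9}
print(shaped_reward(("lives_in", "located_in"), True, rules))
print(shaped_reward(("lives_in", "located_in"), False, rules))
```

The point of the shaping term is exactly the tension the abstract describes: the rule bonus supplies dense guidance where the KG's own reward is sparse, while the final hit reward keeps the agent anchored to reaching correct answers.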