Do as I can, not as I get: Topology-aware multi-hop reasoning on multi-modal knowledge graphs
A multi-modal knowledge graph (MKG) contains triplets of entities and
relations together with multi-modal auxiliary data. In recent years, multi-hop
multi-modal knowledge graph reasoning (MMKGR) based on reinforcement learning
(RL) has received extensive attention because it addresses the intrinsic
incompleteness of MKG in an interpretable manner. However, its performance is
limited by empirically designed rewards and sparse relations. In addition, this
method has been designed for the transductive setting where test entities have
been seen during training, and it works poorly in the inductive setting where
test entities do not appear in the training set. To overcome these issues, we
propose TMR (Topology-aware Multi-hop Reasoning), which can conduct MKG
reasoning under inductive and transductive settings. Specifically, TMR mainly
consists of two components. (1) The topology-aware inductive representation
captures information from the directed relations of unseen entities, and
aggregates query-related topology features in an attentive manner to generate
fine-grained entity-independent features. (2) After multi-modal
feature fusion, the relation-augmented adaptive RL component conducts multi-hop
reasoning by eliminating manual rewards and dynamically adding actions. Finally, we
construct new MKG datasets with different scales for inductive reasoning
evaluation. Experimental results demonstrate that TMR outperforms
state-of-the-art MKGR methods under both inductive and transductive settings.
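To illustrate the multi-hop reasoning setting these RL methods operate in, here is a minimal sketch: an agent walks a toy knowledge graph for a fixed number of hops. The graph, entity names, and the uniform-random "policy" are purely illustrative, not TMR's learned agent.

```python
import random

# Toy knowledge graph: entity -> list of (relation, target_entity) edges.
KG = {
    "Paris":  [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "EU"), ("capital", "Paris")],
    "EU":     [("headquartered_in", "Brussels")],
    "Europe": [("contains", "France")],
}

def multi_hop_walk(start, hops, rng):
    """Roll out one reasoning path of up to `hops` steps from `start`.

    A trained policy would score candidate actions with a neural
    network; here we sample uniformly just to show the environment
    interface (state = current entity, action = outgoing edge).
    """
    path, entity = [], start
    for _ in range(hops):
        actions = KG.get(entity, [])
        if not actions:          # dead end: no outgoing relations
            break
        relation, entity = rng.choice(actions)
        path.append((relation, entity))
    return path

path = multi_hop_walk("Paris", hops=3, rng=random.Random(0))
```

In a real system the reward depends on whether the final entity answers the query; the "sparse relations" problem the abstract mentions shows up here as entities with few or no outgoing actions.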
DREAM: Adaptive Reinforcement Learning based on Attention Mechanism for Temporal Knowledge Graph Reasoning
Temporal knowledge graphs (TKGs) model the temporal evolution of events and
have recently attracted increasing attention. Since TKGs are intrinsically
incomplete, it is necessary to reason out missing elements. Although existing
TKG reasoning methods have the ability to predict missing future events, they
fail to generate explicit reasoning paths and lack explainability. As
reinforcement learning (RL) for multi-hop reasoning on traditional knowledge
graphs starts showing superior explainability and performance in recent
advances, it has opened up opportunities for exploring RL techniques on TKG
reasoning. However, the performance of RL-based TKG reasoning methods is
limited due to: (1) lack of ability to capture temporal evolution and semantic
dependence jointly; (2) excessive reliance on manually designed rewards. To
overcome these challenges, we propose an adaptive reinforcement learning model
based on attention mechanism (DREAM) to predict missing elements in the future.
Specifically, the model contains two components: (1) a multi-faceted attention
representation learning method that captures semantic dependence and temporal
evolution jointly; (2) an adaptive RL framework that conducts multi-hop
reasoning by adaptively learning the reward functions. Experimental results
demonstrate that DREAM outperforms state-of-the-art models on public datasets.
Comment: 11 pages
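The attention-based representation at the heart of such models can be sketched as plain scaled dot-product attention over historical event embeddings. This is a didactic NumPy sketch only; DREAM's multi-faceted attention jointly covers semantic and temporal facets and is more involved.

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention: score each historical event
    embedding against the query, softmax the scores, and return the
    attention-weighted sum of the value vectors."""
    scores = keys @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights = weights / weights.sum()
    return weights @ values, weights

# Four historical event embeddings (keys == values here for simplicity).
events = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
context, weights = attention(np.array([1.0, 0.0]), events, events)
```

Events whose embeddings align with the query receive larger weights, so the resulting context vector emphasizes the historically relevant facts.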
Natural Language based Context Modeling and Reasoning with LLMs: A Tutorial
Large language models (LLMs) have surged in popularity since 2018, two
decades after context-awareness was first introduced into computing systems.
By taking into account the situations of ubiquitous devices, users, and
societies, context-aware computing has enabled a wide spectrum of innovative
applications, such as assisted living and location-based social network
services. To recognize contexts and make decisions for actions accordingly,
various artificial intelligence technologies, such as Ontology and OWL, have
been adopted as representations for context modeling and reasoning. Recently,
with the rise of LLMs and their improved natural language understanding and
reasoning capabilities, it has become feasible to model contexts using natural
language and perform context reasoning by interacting with LLMs such as ChatGPT
and GPT-4. In this tutorial, we demonstrate the use of texts, prompts, and
autonomous agents (AutoAgents) that enable LLMs to perform context modeling and
reasoning without requiring fine-tuning of the model. We organize and introduce
works in the related field, and name this computing paradigm LLM-driven
Context-aware Computing (LCaC). In the LCaC paradigm, users' requests, sensor
readings, and commands to actuators are all represented as text. Given the
text of a user's request and the sensor data, the AutoAgent models the
context in a prompt and sends it to the LLM for context reasoning. The LLM
generates an action plan and responds to the AutoAgent, which then follows
the plan to achieve context-awareness. To prove the concepts, we use two
showcases--(1) operating a mobile z-arm in an apartment for assisted living,
and (2) planning a trip and scheduling the itinerary in a context-aware and
personalized manner.
Comment: Under review
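The first half of the LCaC loop, serializing context as natural language, can be sketched as simple prompt assembly. The role text, sensor names, and field layout below are illustrative assumptions, not the tutorial's actual prompts, and the LLM call itself is omitted.

```python
def build_context_prompt(user_request, sensor_readings):
    """Serialize the current context and the user's request as plain
    text, the way an AutoAgent might assemble a prompt before sending
    it to an LLM for context reasoning."""
    lines = ["You are a context-aware assistant for a smart apartment.",
             "Current sensor readings:"]
    lines += [f"- {name}: {value}" for name, value in sensor_readings.items()]
    lines += [f"User request: {user_request}",
              "Reply with a numbered action plan for the actuators."]
    return "\n".join(lines)

prompt = build_context_prompt(
    "Please bring me a glass of water",
    {"user_location": "sofa", "kitchen_tap": "off", "z-arm": "docked"},
)
```

The LLM's reply (a numbered action plan, also text) would then be parsed by the AutoAgent and executed step by step against the actuators.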
CAFE: Coarse-to-Fine Neural Symbolic Reasoning for Explainable Recommendation
Recent research explores incorporating knowledge graphs (KG) into e-commerce
recommender systems, not only to achieve better recommendation performance, but
more importantly to generate explanations of why particular decisions are made.
This can be achieved by explicit KG reasoning, where a model starts from a user
node, sequentially determines the next step, and walks towards an item node of
potential interest to the user. However, this is challenging due to the huge
search space, unknown destination, and sparse signals over the KG, so
informative and effective guidance is needed to achieve a satisfactory
recommendation quality. To this end, we propose a CoArse-to-FinE neural
symbolic reasoning approach (CAFE). It first generates user profiles as coarse
sketches of user behaviors, which subsequently guide a path-finding process to
derive reasoning paths for recommendations as fine-grained predictions. User
profiles can capture prominent user behaviors from the history, and provide
valuable signals about which kinds of path patterns are more likely to lead to
potential items of interest for the user. To better exploit the user profiles,
an improved path-finding algorithm called Profile-guided Path Reasoning (PPR)
is also developed, which leverages an inventory of neural symbolic reasoning
modules to effectively and efficiently find a batch of paths over a large-scale
KG. We extensively experiment on four real-world benchmarks and observe
substantial gains in the recommendation performance compared with
state-of-the-art methods.
Comment: Accepted in CIKM 202
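The coarse-to-fine idea (pick the user's dominant path pattern first, then enumerate concrete KG paths that instantiate it) can be sketched as follows. The toy graph, the profile weights, and the exact pattern matching are illustrative stand-ins for the paper's learned neural symbolic modules.

```python
# Toy KG: (head_entity, relation) -> list of tail entities.
KG = {
    ("u1", "purchased"): ["item_a"],
    ("item_a", "also_purchased"): ["item_b", "item_c"],
    ("u1", "viewed"): ["item_d"],
    ("item_d", "similar_to"): ["item_b"],
}

# Coarse user profile: relation pattern -> importance weight.
profile = {("purchased", "also_purchased"): 0.7,
           ("viewed", "similar_to"): 0.3}

def paths_for_pattern(start, pattern, kg):
    """Enumerate every KG path from `start` whose relation sequence
    matches `pattern` exactly (the fine-grained path-finding stage)."""
    frontier = [[start]]
    for relation in pattern:
        frontier = [path + [nxt]
                    for path in frontier
                    for nxt in kg.get((path[-1], relation), [])]
    return frontier

def recommend(user, profile, kg):
    """Follow the highest-weight pattern in the coarse profile; the
    endpoints of the matched paths are the recommended items, and the
    paths themselves serve as explanations."""
    best = max(profile, key=profile.get)
    return [p[-1] for p in paths_for_pattern(user, best, kg)]
```

Restricting the walk to profile-approved patterns is what tames the huge search space the abstract mentions: only a handful of relation sequences ever get expanded.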
On the Opportunities and Challenges of Offline Reinforcement Learning for Recommender Systems
Reinforcement learning serves as a potent tool for modeling dynamic user
interests within recommender systems, garnering increasing research attention
of late. However, a significant drawback persists: its poor data efficiency,
stemming from its interactive nature. The training of reinforcement
learning-based recommender systems demands expensive online interactions to
amass adequate trajectories, essential for agents to learn user preferences.
This inefficiency renders reinforcement learning-based recommender systems a
formidable undertaking, necessitating the exploration of potential solutions.
Recent strides in offline reinforcement learning present a new perspective.
Offline reinforcement learning empowers agents to glean insights from offline
datasets and deploy learned policies in online settings. Given that recommender
systems possess extensive offline datasets, the framework of offline
reinforcement learning aligns seamlessly. Despite being a burgeoning field,
works centered on recommender systems utilizing offline reinforcement learning
remain limited. This survey aims to introduce and delve into offline
reinforcement learning within recommender systems, offering an inclusive review
of existing literature in this domain. Furthermore, we strive to underscore
prevalent challenges, opportunities, and future pathways, poised to propel
research in this evolving field.
Comment: under review
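The basic shape of offline RL, learning a policy from logged interactions with no further environment access, can be sketched with fitted tabular Q-learning over a fixed dataset. This is a didactic stand-in: the logged session below is invented, and recommender-scale methods use function approximation plus pessimism about unlogged actions.

```python
from collections import defaultdict

def offline_q_learning(dataset, gamma=0.9, alpha=0.5, epochs=50):
    """Tabular Q-learning over a fixed log of (s, a, r, s') tuples:
    repeated sweeps over the dataset, never querying the environment."""
    Q = defaultdict(float)
    actions = {a for _, a, _, _ in dataset}
    for _ in range(epochs):
        for s, a, r, s2 in dataset:
            target = r + gamma * max((Q[(s2, b)] for b in actions),
                                     default=0.0)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

# Invented logged session: on the home page, recommending item A earned
# a click (reward 1.0) while item B did not.
log = [("home", "show_A", 1.0, "end"),
       ("home", "show_B", 0.0, "end")]
Q = offline_q_learning(log)
```

After training, the greedy policy derived from `Q` prefers `show_A` in state `home`, entirely from the logged trajectories, which is exactly the property that makes offline RL attractive for recommender systems with large interaction logs.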
Aligning Recommendation and Conversation via Dual Imitation
Human recommendation conversations naturally involve shifts of interest,
which can align the recommendation actions with the conversation process
to make accurate recommendations with rich explanations. However, existing
conversational recommendation systems (CRS) ignore the advantage of user
interest shift in connecting recommendation and conversation, which leads to an
ineffective loose coupling structure of CRS. To address this issue, by modeling
the recommendation actions as recommendation paths in a knowledge graph (KG),
we propose DICR (Dual Imitation for Conversational Recommendation), which
designs a dual imitation to explicitly align the recommendation paths and user
interest shift paths in a recommendation module and a conversation module,
respectively. By exchanging alignment signals, DICR achieves bidirectional
promotion between recommendation and conversation modules and generates
high-quality responses with accurate recommendations and coherent explanations.
Experiments demonstrate that DICR outperforms the state-of-the-art models on
recommendation and conversation performance with automatic, human, and novel
explainability metrics.
Comment: EMNLP 202
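The alignment signal exchanged between the two modules can be illustrated with a simple overlap score between a recommendation path in the KG and the interest-shift path expressed in dialogue. The Jaccard measure and the example paths below are illustrative assumptions, not DICR's actual dual-imitation objective.

```python
def path_alignment_score(rec_path, interest_path):
    """Jaccard overlap between the entities on a recommendation path
    and those on the conversational interest-shift path: a crude
    stand-in for a signal telling each module how well it tracks the
    other."""
    rec, conv = set(rec_path), set(interest_path)
    return len(rec & conv) / max(len(rec | conv), 1)

score = path_alignment_score(
    ["user", "sci-fi", "Interstellar"],          # path walked in the KG
    ["space movies", "sci-fi", "Interstellar"],  # interests in dialogue
)
```

A high score means the recommendation path retraces the interests the user actually voiced, which is what lets the system justify its recommendation coherently in the response.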