Explainable Reasoning over Knowledge Graphs for Recommendation
Incorporating knowledge graph into recommender systems has attracted
increasing attention in recent years. By exploring the interlinks within a
knowledge graph, the connectivity between users and items can be discovered as
paths, which provide rich and complementary information to user-item
interactions. Such connectivity not only reveals the semantics of entities and
relations, but also helps to comprehend a user's interest. However, existing
efforts have not fully explored this connectivity to infer user preferences,
especially in terms of modeling the sequential dependencies within a path and
its holistic semantics. In this paper, we contribute a new model named
Knowledge-aware Path Recurrent Network (KPRN) to exploit the knowledge graph for
recommendation. KPRN can generate path representations by composing the
semantics of both entities and relations. By leveraging the sequential
dependencies within a path, we allow effective reasoning on paths to infer the
underlying rationale of a user-item interaction. Furthermore, we design a new
weighted pooling operation to discriminate the strengths of different paths in
connecting a user with an item, endowing our model with a certain level of
explainability. We conduct extensive experiments on two datasets, on movies
and music, demonstrating significant improvements over state-of-the-art
solutions, Collaborative Knowledge Base Embedding and Neural Factorization
Machine. Comment: 8 pages, 5 figures, AAAI-2019
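The KPRN pipeline sketched in the abstract (compose entity and relation embeddings, encode each path sequentially, then pool per-path scores) can be illustrated roughly as below. This is a toy sketch, not the authors' implementation: the embeddings are random, a plain RNN cell stands in for the paper's LSTM, and all entity/relation names are invented. The log-sum-exp pooling mirrors the weighted-pooling idea of letting stronger paths dominate the final score.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, HID_DIM = 8, 16

# Toy lookup tables for entity and relation embeddings (random for the sketch).
entity_emb = {e: rng.normal(size=EMB_DIM) for e in ["u1", "m1", "m2", "actor_a"]}
relation_emb = {r: rng.normal(size=EMB_DIM) for r in ["watched", "acted_in", "self"]}

# Simple RNN parameters standing in for the paper's LSTM.
W_x = rng.normal(scale=0.1, size=(HID_DIM, EMB_DIM))
W_h = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))
w_out = rng.normal(scale=0.1, size=HID_DIM)

def path_score(path):
    """Compose entity+relation embeddings step by step; return a scalar score."""
    h = np.zeros(HID_DIM)
    for entity, relation in path:
        x = entity_emb[entity] + relation_emb[relation]  # fuse node and edge semantics
        h = np.tanh(W_x @ x + W_h @ h)                   # sequential dependency
    return float(w_out @ h)

def weighted_pool(scores, gamma=1.0):
    """Log-sum-exp pooling: differentiably emphasises the strongest paths."""
    s = np.array(scores) / gamma
    m = np.max(s)
    return gamma * (m + np.log(np.sum(np.exp(s - m))))

# Two paths connecting user u1 to movie m2, ending with a 'self' relation.
paths = [
    [("u1", "watched"), ("m1", "acted_in"), ("actor_a", "acted_in"), ("m2", "self")],
    [("u1", "watched"), ("m2", "self")],
]
scores = [path_score(p) for p in paths]
print(weighted_pool(scores))  # final user-item interaction score
```

The pooled score is always at least the best single-path score, which is what lets the model both aggregate evidence and point back at the most influential path for explanation.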
Model-enhanced Contrastive Reinforcement Learning for Sequential Recommendation
Reinforcement learning (RL) has been widely applied in recommendation systems
due to its potential in optimizing the long-term engagement of users. From the
perspective of RL, recommendation can be formulated as a Markov decision
process (MDP), where the recommendation system (agent) interacts with users
(environment) and acquires feedback (reward signals). However, it is impractical
to conduct online interactions, given concerns about user experience and
implementation complexity, so we can only train RL recommenders with offline
datasets containing limited reward signals and state transitions. The data
sparsity of reward signals and state transitions is therefore severe, yet it
has long been overlooked by existing RL recommenders. Worse still, RL
methods learn through trial and error, but negative feedback cannot be
obtained in implicit-feedback recommendation tasks, which aggravates the
overestimation problem of offline RL recommenders. To address these challenges,
we propose a novel RL recommender named model-enhanced contrastive
reinforcement learning (MCRL). On the one hand, we learn a value function to
estimate the long-term engagement of users, together with a conservative value
learning mechanism to alleviate the overestimation problem. On the other hand,
we construct positive and negative state-action pairs to model the reward
function and state-transition function with contrastive learning, exploiting the
internal structural information of the MDP. Experiments demonstrate that the
proposed method significantly outperforms existing offline RL and
self-supervised RL methods with different representative backbone networks on
two real-world datasets. Comment: 11 pages, 7 figures
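As a rough illustration of the conservative value-learning half of the approach (the contrastive reward/transition modelling is omitted here), the tabular sketch below runs TD updates on a toy offline log and adds a CQL-style penalty that pushes Q-values down on actions the log never took. All states, actions, and hyperparameters are invented for the sketch; they are not from the paper.

```python
import numpy as np

# Toy offline log of (state, action, reward, next_state) tuples.
dataset = [(0, 1, 1.0, 1), (1, 0, 0.0, 2), (2, 1, 1.0, 0)]
N_STATES, N_ACTIONS = 3, 2
GAMMA, LR, ALPHA = 0.9, 0.1, 0.5   # ALPHA scales the conservative penalty

Q = np.zeros((N_STATES, N_ACTIONS))

for _ in range(200):
    for s, a, r, s_next in dataset:
        # Ordinary TD update from the logged transition.
        target = r + GAMMA * Q[s_next].max()
        Q[s, a] += LR * (target - Q[s, a])
        # Conservative term: push every action down in proportion to its
        # softmax weight, then push the logged action back up, so unseen
        # (out-of-distribution) actions are not overestimated.
        Q[s] -= LR * ALPHA * np.exp(Q[s]) / np.exp(Q[s]).sum()
        Q[s, a] += LR * ALPHA

print(Q)  # logged actions dominate; unlogged actions stay below zero
```

The point of the penalty is visible in the learned table: actions absent from the offline log end up with negative values instead of the optimistic estimates a plain offline Q-learner would produce.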
Reformulating CTR Prediction: Learning Invariant Feature Interactions for Recommendation
Click-Through Rate (CTR) prediction plays a core role in recommender systems,
serving as the final-stage filter to rank items for a user. The key to
addressing the CTR task is learning feature interactions that are useful for
prediction, which is typically achieved by fitting historical click data with
the Empirical Risk Minimization (ERM) paradigm. Representative methods include
Factorization Machines and Deep Interest Network, which have achieved wide
success in industrial applications. However, such a manner inevitably learns
unstable feature interactions, i.e., the ones that exhibit strong correlations
in historical data but generalize poorly for future serving. In this work, we
reformulate the CTR task -- instead of pursuing ERM on historical data, we
split the historical data chronologically into several periods (a.k.a.
environments), aiming to learn feature interactions that are stable across
periods. Such feature interactions are supposed to generalize better to predict
future behavior data. Nevertheless, a technical challenge is that existing
invariant learning solutions like Invariant Risk Minimization are not
applicable, since the click data entangles both environment-invariant and
environment-specific correlations. To address this dilemma, we propose
Disentangled Invariant Learning (DIL) which disentangles feature embeddings to
capture the two types of correlations separately. To improve the modeling
efficiency, we further design LightDIL which performs the disentanglement at
the higher level of the feature field. Extensive experiments demonstrate the
effectiveness of DIL in learning stable feature interactions for CTR. We
release the code at https://github.com/zyang1580/DIL. Comment: 11 pages, 6 Postscript figures, to be published in SIGIR 2023
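The environment-splitting idea can be illustrated without any model at all: chronologically partition a toy click log into periods and compare how stable each feature interaction's correlation with clicks is across them. The synthetic features below are assumptions made purely for the sketch; DIL itself performs the disentanglement with learned embeddings, not raw correlations.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 600

f_stable = rng.integers(0, 2, T)                            # invariant interaction
clicks = (f_stable ^ (rng.random(T) < 0.1).astype(int))     # stable signal + 10% noise
f_spurious = clicks.copy()
f_spurious[T // 2:] = 1 - clicks[T // 2:]                   # correlation flips mid-stream

def env_correlations(feature, labels, n_envs=3):
    """Split the log chronologically into environments (periods) and
    compute the feature-click correlation inside each one."""
    return [float(np.corrcoef(f, y)[0, 1])
            for f, y in zip(np.array_split(feature, n_envs),
                            np.array_split(labels, n_envs))]

stable_corrs = env_correlations(f_stable, clicks)
spurious_corrs = env_correlations(f_spurious, clicks)

# An invariant interaction keeps a similar correlation in every period;
# a spurious one drifts, so its across-period spread is much larger.
print(np.std(stable_corrs), np.std(spurious_corrs))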
Workflow temporal verification for monitoring parallel business processes
Workflow temporal verification is conducted to guarantee on-time completion, one of the most important QoS (Quality of Service) dimensions for business processes running in the cloud. However, as today's business systems often need to handle a large number of concurrent customer requests, conventional response-time-based process monitoring strategies, conducted in a one-by-one fashion, cannot be applied efficiently to a large batch of parallel processes because of significant time overhead. Similar situations also exist in software companies where multiple software projects are carried out at the same time by software developers. To address this problem, based on a novel runtime throughput consistency model, this paper proposes a QoS-aware throughput-based checkpoint selection strategy, which can dynamically select a small number of checkpoints along the system timeline to facilitate the temporal verification of throughput constraints and achieve the target on-time completion rate. Experimental results demonstrate that our strategy achieves the best efficiency and effectiveness compared with the state-of-the-art and other representative response-time-based checkpoint selection strategies.
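The checkpoint-selection idea can be shown with a deliberately simplified stand-in: assume cumulative completions should track a linear schedule toward the deadline, and flag a checkpoint only at time points where the batch falls behind. The function name and the linear schedule are illustrative assumptions, not the paper's throughput consistency model.

```python
def select_checkpoints(completed_by_t, total, deadline, target_rate):
    """completed_by_t[t-1] = cumulative number of finished processes at time t.

    A time point becomes a checkpoint when the batch's throughput falls
    behind the schedule needed to finish target_rate * total processes
    by the deadline; on-schedule points are skipped, avoiding the
    one-by-one verification overhead the abstract describes.
    """
    required_rate = (target_rate * total) / deadline
    checkpoints = []
    for t, done in enumerate(completed_by_t, start=1):
        if done < required_rate * t:        # throughput consistency violated
            checkpoints.append(t)
    return checkpoints

# 10 parallel processes, deadline at t=5, aiming for a 100% on-time rate.
progress = [2, 4, 5, 8, 10]   # cumulative completions at t = 1..5
print(select_checkpoints(progress, total=10, deadline=5, target_rate=1.0))
# -> [3]  (only t=3 lags the required 2-per-step pace)
```

Only the lagging time point triggers verification, which is the efficiency argument: most of the timeline needs no checkpoint at all when the batch is on schedule.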
A novel deadline assignment strategy for a large batch of parallel tasks with soft deadlines in the cloud
Deadline assignment gives each subtask of a distributed task a local deadline such that the global deadline can be met. Today's real-time systems often need to handle hundreds or even thousands of concurrent customer (or service) requests, so deadline assignment is becoming an increasingly challenging issue with a large number of parallel and distributed subtasks. However, most conventional strategies are designed to deal with a single independent task rather than a batch of many parallel tasks in a shared resource environment such as cloud computing. To address this issue, in this paper, instead of assigning a local deadline to each subtask, we propose a novel strategy which can efficiently assign local throughput constraints for a batch of parallel tasks at any time point along the system timeline. The basis of this strategy is a novel throughput consistency model which can measure the probability of on-time completion at any given time point. The experimental results demonstrate that our strategy achieves significant time reduction in deadline assignment and the most 'consistency' between global and local deadlines compared with other representative strategies.
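A minimal sketch of the shift from per-subtask deadlines to batch-level throughput constraints: given a global deadline, assign the minimum cumulative number of completions required at a few monitoring points. The linear schedule and the function name are illustrative assumptions, not the paper's probabilistic consistency model, which would weight each point by the measured probability of on-time completion.

```python
import math

def throughput_constraints(n_tasks, global_deadline, n_points):
    """Assign a local throughput constraint (minimum cumulative completions)
    at evenly spaced time points instead of a deadline per subtask.
    One check per time point replaces n_tasks per-subtask checks."""
    step = global_deadline / n_points
    return [(round(step * k, 2), math.ceil(n_tasks * k / n_points))
            for k in range(1, n_points + 1)]

# 100 parallel tasks, global deadline of 10 time units, 4 monitoring points.
print(throughput_constraints(100, 10.0, 4))
# -> [(2.5, 25), (5.0, 50), (7.5, 75), (10.0, 100)]
```

The time saving claimed in the abstract comes from exactly this change of granularity: the number of constraints scales with the number of monitoring points, not with the number of subtasks in the batch.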
BiRank: Towards Ranking on Bipartite Graphs
DOI: 10.1109/TKDE.2016.2611584. IEEE Transactions on Knowledge and Data Engineering, 29(1): 57-71
Explainable Reasoning over Knowledge Graph Paths for Recommendation
DOI: 10.1609/aaai.v33i01.33015329. AAAI 2019: 5329-5336