Leveraging Large Language Models for Sequential Recommendation
Sequential recommendation problems have received increasing attention in
research during the past few years, leading to the inception of a large variety
of algorithmic approaches. In this work, we explore how large language models
(LLMs), which are currently having a disruptive effect on many AI-based
applications, can be used to build or improve sequential recommendation
approaches. Specifically, we devise and evaluate three approaches to leverage
the power of LLMs in different ways. Our results from experiments on two
datasets show that initializing the state-of-the-art sequential recommendation
model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20%
compared to the vanilla BERT4Rec model. Furthermore, we find that a simple
approach that uses LLM embeddings to produce recommendations can provide
competitive performance by highlighting semantically related items. We
publicly share the code and data of our experiments to ensure reproducibility.
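The abstract gives no implementation details, but the embedding-initialization idea could look roughly like the sketch below. Everything here is an assumption for illustration: the item_texts catalog, the llm_embed stand-in for a real LLM encoder, and the 768-to-64 projection are hypothetical, not the authors' code.

```python
import zlib
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical item catalog: item id -> textual description.
item_texts = {0: "red running shoes", 1: "blue trail sneakers", 2: "wool winter coat"}

def llm_embed(text: str, dim: int = 768) -> np.ndarray:
    """Stand-in for a real LLM text encoder; returns a deterministic
    pseudo-random vector so the sketch stays self-contained."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    return rng.standard_normal(dim).astype(np.float32)

llm_dim, model_dim = 768, 64
llm_matrix = torch.from_numpy(
    np.stack([llm_embed(item_texts[i], llm_dim) for i in sorted(item_texts)])
)

# Project the LLM vectors down to the recommender's hidden size and copy
# them into the item-embedding table before training starts; this table
# would stand in for BERT4Rec's randomly initialized item embeddings.
projection = nn.Linear(llm_dim, model_dim, bias=False)
item_embedding = nn.Embedding(len(item_texts), model_dim)
with torch.no_grad():
    item_embedding.weight.copy_(projection(llm_matrix))

# The simpler approach the abstract mentions: recommend items whose LLM
# embeddings are most similar (cosine) to the last item in the session.
vecs = F.normalize(llm_matrix, dim=-1)
last_item = 0
scores = vecs @ vecs[last_item]
print(scores.topk(2).indices)  # nearest items (includes the item itself)
```

In the paper's setup, the initialized table would presumably be fine-tuned along with the rest of the model rather than kept frozen, but the abstract does not say.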
STAR: A Session-Based Time-Aware Recommender System
Session-Based Recommenders (SBRs) aim to predict users' next preferences
based on their previous interactions within a session, without any historical
information about the users. Modern SBRs utilize deep neural networks to map
users' current interest(s) during an ongoing session to a latent space so that
their next preference can be predicted. Although state-of-the-art SBR models achieve
satisfactory results, most focus on studying the sequence of events inside
sessions while ignoring temporal details of those events. In this paper, we
examine the potential of session temporal information in enhancing the
performance of SBRs, as it may reflect the momentary interests of anonymous
users or shifts in their mindset during a session. We propose the STAR
framework, which utilizes the time intervals between events within sessions to
construct more informative representations for items and sessions. Our
mechanism refines the session representation by embedding time intervals
without discretization. Empirical results on the Yoochoose and Diginetica
datasets show that the suggested method outperforms state-of-the-art baseline
models on the Recall and MRR metrics.
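The abstract states that time intervals are embedded without discretization but gives no architectural details, so the sketch below is only one plausible reading: each inter-event gap is log-scaled and passed through a small learned projection whose output is added to the item embedding. The module, its dimensions, and the log1p scaling are illustrative assumptions, not the actual STAR mechanism.

```python
import torch
import torch.nn as nn

class TimeIntervalEmbedding(nn.Module):
    """Map raw inter-event gaps (seconds) to dense vectors without
    bucketing, via a learned projection of the log-scaled gap."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(1, dim), nn.Tanh())

    def forward(self, timestamps: torch.Tensor) -> torch.Tensor:
        # timestamps: (session_len,) seconds, ascending; the first gap is 0.
        gaps = torch.diff(timestamps, prepend=timestamps[:1]).float()
        return self.proj(torch.log1p(gaps).unsqueeze(-1))

dim = 64
item_emb = nn.Embedding(1000, dim)       # toy item vocabulary
time_emb = TimeIntervalEmbedding(dim)

items = torch.tensor([12, 7, 89, 431])   # one anonymous session
times = torch.tensor([0, 30, 45, 900])   # seconds since session start
session_repr = item_emb(items) + time_emb(times)  # time-aware item vectors
```

Feeding such time-aware vectors into whatever sequence encoder the SBR uses would, in principle, let it distinguish a rapid burst of clicks from a session with long pauses, which is the kind of signal the paper argues is lost when only event order is modeled.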