Memory Augmented Neural Model for Incremental Session-based Recommendation
Increasing privacy concerns have stimulated interest in Session-based
Recommendation (SR), which uses no personal data beyond what is observed in
the current browser session. Existing methods are evaluated in static
settings, which rarely occur in real-world applications. To better address
the dynamic
nature of SR tasks, we study an incremental SR scenario, where new items and
preferences appear continuously. We show that existing neural recommenders can
be used in incremental SR scenarios with small incremental updates to alleviate
computational overhead and catastrophic forgetting. More importantly, we propose
a general framework called Memory Augmented Neural model (MAN). MAN augments a
base neural recommender with a continuously queried and updated nonparametric
memory, and the predictions from the neural and the memory components are
combined through a lightweight gating network. We empirically show that
MAN is well-suited for the incremental SR task, and it consistently outperforms
state-of-the-art neural and nonparametric methods. We analyze the results and
demonstrate that it is particularly good at incrementally learning preferences
for new and infrequent items.
Comment: Accepted as a full paper at IJCAI 2020
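
To make the described architecture concrete, here is a minimal sketch in Python/PyTorch. It is an illustration of the idea, not the authors' implementation: the class name, the FIFO write policy, the softmax-weighted memory readout, and all parameter names are assumptions; the abstract only specifies a base neural recommender, a continuously queried and updated nonparametric memory, and a lightweight gating network that mixes the two prediction distributions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedRecommender(nn.Module):
    """Hypothetical sketch: base neural model + nonparametric memory + gate."""

    def __init__(self, base_model: nn.Module, hidden_dim: int, num_items: int,
                 mem_size: int = 10000):
        super().__init__()
        self.base = base_model                       # any neural session encoder
        self.out = nn.Linear(hidden_dim, num_items)  # item-scoring head
        self.gate = nn.Linear(hidden_dim, 1)         # lightweight gating network
        # Nonparametric memory: session representations (keys) -> next items (values).
        self.register_buffer("mem_keys", torch.zeros(mem_size, hidden_dim))
        self.register_buffer("mem_vals", torch.zeros(mem_size, dtype=torch.long))
        self.mem_ptr, self.mem_size = 0, mem_size

    def write(self, session_repr: torch.Tensor, next_item: int) -> None:
        # Continuous memory update; a FIFO ring buffer is one simple choice.
        self.mem_keys[self.mem_ptr] = session_repr.detach()
        self.mem_vals[self.mem_ptr] = next_item
        self.mem_ptr = (self.mem_ptr + 1) % self.mem_size

    def forward(self, session: torch.Tensor) -> torch.Tensor:
        h = self.base(session)                       # assumed shape: (batch, hidden_dim)
        p_neural = F.softmax(self.out(h), dim=-1)
        # Query the memory: similarity-weighted vote over stored next items.
        # (A real implementation would mask unused slots; omitted for brevity.)
        sims = F.softmax(h @ self.mem_keys.T, dim=-1)          # (batch, mem_size)
        p_mem = torch.zeros_like(p_neural).scatter_add_(
            1, self.mem_vals.expand(h.size(0), -1), sims)
        g = torch.sigmoid(self.gate(h))              # per-session mixing weight
        return g * p_neural + (1 - g) * p_mem

Because the memory component is nonparametric, a new item can influence predictions immediately after a single write, with no gradient update to the base model, which is what makes this kind of design a natural fit for the incremental setting.
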
ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation
Session-based recommendation has received growing attention recently due to
increasing privacy concerns. Despite the recent success of neural
session-based recommenders, they are typically developed in an offline manner
using a static dataset. However, real-life recommendation requires continual
adaptation to account for new and obsolete items and users, i.e., "continual
learning". In this case, the recommender is updated
continually and periodically with new data that arrives in each update cycle,
and the updated model needs to provide recommendations for user activities
before the next model update. A major challenge for continual learning with
neural models is catastrophic forgetting, in which a continually trained model
forgets user preference patterns it has learned before. To deal with this
challenge, we propose a method called Adaptively Distilled Exemplar Replay
(ADER) by periodically replaying previous training samples (i.e., exemplars) to
the current model with an adaptive distillation loss. Experiments are conducted
based on the state-of-the-art SASRec model using two widely used datasets to
benchmark ADER against several well-known continual learning techniques. We
empirically demonstrate that ADER consistently outperforms other baselines, and
it even outperforms the method that uses all historical data at every update cycle.
This result reveals that ADER is a promising solution to mitigate the
catastrophic forgetting issue towards building more realistic and scalable
session-based recommenders.
Comment: Accepted at RecSys 2020
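
As one way to see how the training objective described above fits together, the following Python/PyTorch sketch renders a single update cycle. It is an assumption-laden illustration rather than the released ADER code: the function name, the zipped data loaders, and passing the adaptive distillation weight lam in as an argument are all simplifications; the actual method additionally selects which exemplars to store and adapts the distillation weight across update cycles.

import copy
import torch
import torch.nn.functional as F

def ader_update_cycle(model, optimizer, new_loader, exemplar_loader, lam):
    # Freeze a snapshot of the previous-cycle model to act as the teacher.
    teacher = copy.deepcopy(model).eval()
    for (x_new, y_new), (x_ex, _) in zip(new_loader, exemplar_loader):
        # Standard next-item loss on sessions from the current update cycle.
        loss = F.cross_entropy(model(x_new), y_new)

        # Distillation on replayed exemplars: keep the current model's
        # predictions close to the previous model's soft targets.
        logits_ex = model(x_ex)
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x_ex), dim=-1)
        distill = -(soft_targets * F.log_softmax(logits_ex, dim=-1)).sum(-1).mean()

        loss = loss + lam * distill   # lam: adaptive weight (a plain hyperparameter here)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Replaying a small exemplar set with a distillation term, instead of retraining on all historical data, is what keeps each update cycle cheap while counteracting catastrophic forgetting.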