Many modern sequential recommender systems use deep neural networks, which
can effectively estimate the relevance of items but require a lot of time to
train. Slow training increases expenses, lengthens product development cycles,
and prevents the model from being regularly updated to adapt to changing user
preferences. Training such sequential models involves appropriately sampling
past user interactions to create a realistic training objective. The existing
training objectives have limitations. For instance, next item prediction never
uses the beginning of the sequence as a learning target, thereby potentially
discarding valuable data. On the other hand, the item masking used by BERT4Rec
is only weakly related to the goal of sequential recommendation; therefore,
it requires much more training time to obtain an effective model. Hence, we propose a
novel Recency-based Sampling of Sequences training objective that addresses
both limitations. We apply our method to various recent and state-of-the-art
model architectures, such as GRU4Rec, Caser, and SASRec. We show that the
models enhanced with our method can achieve performance exceeding or very
close to that of the state-of-the-art BERT4Rec, but with much less training time.

Comment: This full research paper is accepted at the 16th ACM Conference on Recommender Systems (ACM RecSys).
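The core idea of recency-based sampling can be illustrated with a minimal sketch: training targets are drawn from any position in the interaction sequence, but more recent items are more likely to be selected. The exponential weighting, the `alpha` parameter, and the function name below are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import random

def sample_targets(sequence, num_targets, alpha=0.8):
    """Recency-biased sampling of training targets.

    Unlike next-item prediction, every position in the sequence can
    serve as a target, so items near the beginning are not discarded;
    the weights simply favour more recent interactions.
    (Exponential decay is an illustrative choice of recency function.)
    """
    n = len(sequence)
    # Most recent item gets weight 1, earlier items decay by alpha per step.
    weights = [alpha ** (n - 1 - k) for k in range(n)]
    return random.choices(sequence, weights=weights, k=num_targets)

# Example: draw 3 targets from a 10-interaction history.
history = ["item%d" % i for i in range(10)]
targets = sample_targets(history, num_targets=3)
```

With `alpha` close to 1 the sampling approaches uniform over the sequence; with small `alpha` it degenerates toward next-item prediction, which only ever targets the most recent interactions.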