Sequential Recommendation (SR) has received increasing attention due to its
ability to capture users' dynamic preferences. Recently, Contrastive Learning
(CL) has provided an effective approach for sequential recommendation by
learning invariance across different views of an input. However, most existing
data- or model-augmentation methods may destroy the semantic characteristics of
sequential interactions, and they often rely on hand-crafted contrastive
view-generation strategies. In this paper, we propose Meta-optimized Seq2Seq
Generator and Contrastive Learning (Meta-SGCL) for sequential recommendation,
which applies a meta-optimized two-step training strategy to adaptively
generate contrastive views. Specifically, Meta-SGCL first
introduces a simple yet effective augmentation method called the
Sequence-to-Sequence (Seq2Seq) generator, which treats a Variational
AutoEncoder (VAE) as the view generator and can construct contrastive views
while preserving the original sequence's semantics. Next, the model employs a
meta-optimized two-step training strategy that adaptively generates contrastive
views without relying on manually designed view-generation techniques. Finally,
we evaluate Meta-SGCL on three public real-world datasets. Experimental results
demonstrate the effectiveness of our model compared with state-of-the-art
methods, and the code is available.
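The VAE-based view generation described above can be illustrated with a minimal sketch: encode a sequence embedding into a latent distribution, sample twice via the reparameterization trick, and decode two semantically close positive views. The dimensions and the random weight matrices here are illustrative stand-ins, not the trained parameters of Meta-SGCL.

```python
# Sketch of a VAE-style Seq2Seq view generator for contrastive learning.
# Assumptions: toy sizes D_IN/D_LAT and random "trained" weights.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LAT = 8, 4  # item-embedding and latent sizes (illustrative)

# Encoder heads (mean and log-variance) and decoder, as random stand-ins.
W_mu = rng.normal(size=(D_IN, D_LAT))
W_logvar = rng.normal(size=(D_IN, D_LAT))
W_dec = rng.normal(size=(D_LAT, D_IN))

def generate_view(seq_emb: np.ndarray) -> np.ndarray:
    """Encode a sequence embedding, sample a latent with the
    reparameterization trick, and decode a contrastive view."""
    mu = seq_emb @ W_mu
    logvar = seq_emb @ W_logvar
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps  # stochastic latent sample
    return z @ W_dec                      # decoded view of the sequence

seq_emb = rng.normal(size=(5, D_IN))      # a length-5 interaction sequence
view_a = generate_view(seq_emb)
view_b = generate_view(seq_emb)
# Two stochastic decodes of the same input yield distinct positive views
# that share the original sequence's latent semantics.
```

Because both views are decoded from latents centered on the same mean, they stay close to the input sequence's semantics while differing through the sampled noise, which is what makes them usable as a positive pair in a contrastive objective.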