Session-based recommendation, which aims to predict the next item of interest given a user's existing sequence of item interactions, has increasingly adopted Contrastive Learning (CL) to improve user and item representations. However, these contrastive objectives: (1) serve a role similar to that of the cross-entropy loss while neglecting optimisation of the item representation space; and (2) commonly require complicated modelling, including complex positive/negative sample construction and extra data augmentation. In this
work, we introduce Self-Contrastive Learning (SCL), which simplifies the
application of CL and enhances the performance of state-of-the-art CL-based
recommendation techniques. Specifically, SCL is formulated as an objective
function that directly promotes a uniform distribution among item
representations and efficiently replaces all the existing contrastive objective
components of state-of-the-art models. Unlike previous works, SCL eliminates
the need for any positive/negative sample construction or data augmentation,
leading to enhanced interpretability of the item representation space and
facilitating its extensibility to existing recommender systems. Through
experiments on three benchmark datasets, we demonstrate that SCL consistently
improves the performance of state-of-the-art models with statistical
significance. Notably, our experiments show that SCL improves the performance
of the two best-performing models by 8.2% and 9.5% in P@10 (Precision) and 9.9% and
11.2% in MRR@10 (Mean Reciprocal Rank) on average across different benchmarks.
Additionally, our analysis elucidates the improvement in terms of the alignment and uniformity of representations, as well as the effectiveness of SCL at a low computational cost.