Representation modeling based on user behavior sequences is an important
direction in user cognition. In this study, we propose a novel framework called
Multi-Interest User Representation Model. Specifically, the model consists of
two sub-modules. The first sub-module encodes user behaviors over an arbitrary
time period into a very high-dimensional sparse vector. The second sub-module
is a self-supervised network that maps the sparse vectors from the first module
to low-dimensional dense user representations via contrastive learning. With
the help of a novel attention module that captures a user's multiple interests,
the second sub-module achieves nearly lossless dimensionality reduction.
Experiments on several
benchmark datasets show that our approach works well and outperforms
state-of-the-art unsupervised representation methods in different downstream
tasks.
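The abstract's second stage, mapping sparse behavior vectors to dense representations with a contrastive objective, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the linear projection, the dropout-style second view, and the InfoNCE-style loss are all illustrative assumptions standing in for the unspecified self-supervised network and attention module.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(sparse_batch, W):
    # Illustrative stand-in for the paper's second sub-module: a linear map
    # from the sparse behavior space to a dense embedding, L2-normalized.
    dense = sparse_batch @ W
    norms = np.linalg.norm(dense, axis=1, keepdims=True)
    return dense / np.clip(norms, 1e-8, None)

def info_nce_loss(z1, z2, temperature=0.1):
    # InfoNCE-style contrastive loss (an assumption; the paper does not name
    # its loss): matching rows of z1/z2 are positives, all others negatives.
    logits = (z1 @ z2.T) / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy data: 4 users, 1000-dim sparse behavior vectors, 16-dim dense embedding.
sparse = (rng.random((4, 1000)) < 0.01).astype(np.float64)
W = rng.normal(scale=0.05, size=(1000, 16))

# Two "views" of the same users (here, random masking of behaviors) form the
# positive pairs for contrastive learning.
view1 = project(sparse, W)
view2 = project(sparse * (rng.random(sparse.shape) < 0.9), W)
loss = info_nce_loss(view1, view2)
print(float(loss))
```

Minimizing such a loss pulls the two views of each user together while pushing different users apart, which is the mechanism by which the dense representations can preserve the information in the sparse vectors.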