
    Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation

    Users' purchase behaviors are mainly driven by their intentions (e.g., buying clothes for decoration or buying brushes for painting). Modeling a user's latent intention can significantly improve recommendation performance. Previous works model users' intentions either by relying on predefined labels from auxiliary information or by introducing stochastic data augmentation to learn intentions in the latent space. However, auxiliary information is sparse and not always available to recommender systems, and stochastic data augmentation may introduce noise and thus distort the intentions hidden in the sequence. Leveraging user intentions for sequential recommendation (SR) is therefore challenging, because they frequently vary and are unobserved. In this paper, Intent Contrastive Learning with Cross Subsequences for Sequential Recommendation (ICSRec) is proposed to model users' latent intentions. Specifically, ICSRec first segments a user's sequential behaviors into multiple subsequences using a dynamic sliding operation and feeds these subsequences into the encoder to generate representations of the user's intentions. To tackle the absence of explicit intention labels, ICSRec assumes that different subsequences with the same target item may represent the same intention, and proposes coarse-grain intent contrastive learning to pull these subsequences closer together. Fine-grain intent contrastive learning is then proposed to capture the fine-grain intentions within sequential behaviors. Extensive experiments on four real-world datasets demonstrate the superior performance of the proposed ICSRec model over baseline methods.
    Comment: 10 pages, 5 figures, WSDM 2024. arXiv admin note: text overlap with arXiv:2304.0776
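    To make the coarse-grain objective concrete, here is a minimal sketch of the two steps the abstract describes: segmenting a behavior sequence into target-labeled subsequences via a sliding operation, and an InfoNCE-style loss that treats subsequences sharing a target item as positives. The function names and exact loss form are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def segment_subsequences(seq, min_len=2):
    """Sliding segmentation (assumed form): each prefix seq[:t] forms a
    subsequence whose target item is seq[t]."""
    return [(seq[:t], seq[t]) for t in range(min_len, len(seq))]

def coarse_grain_intent_loss(reps, targets, temperature=0.1):
    """Supervised-contrastive (InfoNCE-style) loss: subsequence
    representations with the same target item are pulled together."""
    reps = F.normalize(reps, dim=-1)                       # (B, d)
    sim = reps @ reps.t() / temperature                    # (B, B)
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    pos_mask = (targets.unsqueeze(0) == targets.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))        # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    denom = pos_mask.sum(1).clamp(min=1)                   # avoid div by 0
    return -(pos_log_prob.sum(1) / denom).mean()
```

    In practice, `reps` would come from a shared sequence encoder (e.g., a Transformer) applied to each subsequence in the batch.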

    Quaternion-Based Graph Convolution Network for Recommendation

    Graph Convolution Network (GCN) has been widely applied in recommender systems for its representation learning capability on user and item embeddings. However, due to its recursive message propagation mechanism, GCN is vulnerable to noisy and incomplete graphs, which are common in the real world. In the literature, some works propose removing the feature transformation during message propagation, but this leaves the model unable to effectively capture graph structural features. Moreover, they model users and items in Euclidean space, which has been shown to suffer high distortion when modeling complex graphs, further degrading the ability to capture graph structural features and leading to sub-optimal performance. To this end, we propose a simple yet effective Quaternion-based Graph Convolution Network (QGCN) recommendation model. In the proposed model, we utilize the hyper-complex Quaternion space to learn user and item representations and perform feature transformation, improving both performance and robustness. Specifically, we first embed all users and items into the Quaternion space. Then, we introduce quaternion embedding propagation layers with quaternion feature transformation to perform message propagation. Finally, we combine the embeddings generated at each layer with a mean pooling strategy to obtain the final embeddings for recommendation. Extensive experiments on three public benchmark datasets demonstrate that the proposed QGCN model outperforms baseline methods by a large margin.
    Comment: 13 pages, 7 figures, 6 tables. Submitted to ICDE 202
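    The core building block the abstract names is a quaternion feature transformation applied during message propagation. The sketch below shows the Hamilton product that distinguishes a quaternion linear map from an ordinary dense one; the class and parameter names are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """Quaternion feature transformation: the input is split into four
    d-dim components (r, i, j, k) and multiplied by a quaternion weight
    via the Hamilton product, which ties the four weight blocks together
    and uses 4x fewer free parameters than a full dense map."""
    def __init__(self, d):
        super().__init__()
        self.Wr, self.Wi, self.Wj, self.Wk = (
            nn.Parameter(torch.randn(d, d) * 0.01) for _ in range(4))

    def forward(self, x):
        r, i, j, k = x.chunk(4, dim=-1)  # split into quaternion parts
        # Hamilton product (r + i·i + j·j + k·k) ⊗ (Wr + Wi·i + Wj·j + Wk·k)
        return torch.cat([
            r @ self.Wr - i @ self.Wi - j @ self.Wj - k @ self.Wk,  # real
            r @ self.Wi + i @ self.Wr + j @ self.Wk - k @ self.Wj,  # i
            r @ self.Wj - i @ self.Wk + j @ self.Wr + k @ self.Wi,  # j
            r @ self.Wk + i @ self.Wj - j @ self.Wi + k @ self.Wr,  # k
        ], dim=-1)
```

    A QGCN-style layer would wrap this transformation inside neighbor aggregation (e.g., multiplication by a normalized adjacency matrix), and per the abstract the final embedding mean-pools the outputs of all layers.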

    Meta-optimized Joint Generative and Contrastive Learning for Sequential Recommendation

    Sequential Recommendation (SR) has received increasing attention due to its ability to capture users' dynamic preferences. Recently, Contrastive Learning (CL) has provided an effective approach to sequential recommendation by learning invariance across different views of an input. However, most existing data or model augmentation methods may destroy the semantic characteristics of sequential interactions, and they often rely on hand-crafted contrastive view-generation strategies. In this paper, we propose a Meta-optimized Seq2Seq Generator and Contrastive Learning (Meta-SGCL) framework for sequential recommendation, which applies a meta-optimized two-step training strategy to adaptively generate contrastive views. Specifically, Meta-SGCL first introduces a simple yet effective augmentation method called the Sequence-to-Sequence (Seq2Seq) generator, which treats a Variational AutoEncoder (VAE) as the view generator and can construct contrastive views while preserving the original sequence's semantics. Next, the model employs a meta-optimized two-step training strategy that adaptively generates contrastive views without relying on manually designed view-generation techniques. Finally, we evaluate the proposed Meta-SGCL on three public real-world datasets. Experimental results demonstrate the effectiveness of our model compared with state-of-the-art methods, and the code is available.
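    As a rough illustration of the Seq2Seq generator idea, the sketch below attaches a small VAE head to a sequence representation and draws two samples from the same latent posterior as contrastive views. The architecture and names are assumptions, and the meta-optimized two-step training schedule is omitted.

```python
import torch
import torch.nn as nn

class VAEViewGenerator(nn.Module):
    """Treats a VAE as the view generator: two stochastic decodings of
    the same latent posterior act as semantics-preserving views."""
    def __init__(self, d, z=64):
        super().__init__()
        self.enc = nn.Linear(d, 2 * z)   # predicts mean and log-variance
        self.dec = nn.Linear(z, d)       # maps latents back to repr. space

    def forward(self, h):
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        std = (0.5 * logvar).exp()
        v1 = self.dec(mu + std * torch.randn_like(std))  # view 1
        v2 = self.dec(mu + std * torch.randn_like(std))  # view 2
        # KL term keeps the posterior close to the prior, limiting how
        # far the generated views can drift from the input's semantics
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return v1, v2, kl
```

    The returned views would feed a standard contrastive loss, with the KL term added to the training objective as a regularizer.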

    Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization

    Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data. However, training Large Language Models (LLMs) generally requires updating a significant number of parameters, which limits the applicability of FL techniques to LLMs in real scenarios. Prompt tuning can significantly reduce the number of parameters to update, but it incurs either performance degradation or low training efficiency. The straightforward use of prompt tuning in FL often raises non-trivial communication costs and dramatically degrades performance. In addition, decentralized data is generally non-Independent and Identically Distributed (non-IID), which brings client drift problems and thus poor performance. This paper proposes a Parameter-efficient prompt Tuning approach with Adaptive Optimization, i.e., FedPepTAO, to enable efficient and effective FL of LLMs. First, an efficient partial prompt tuning approach is proposed to improve performance and efficiency simultaneously. Second, a novel adaptive optimization method is developed to address client drift on both the device and server sides, further enhancing performance. Extensive experiments on 10 datasets demonstrate the superb performance (up to 60.8% in terms of accuracy) and efficiency (up to 97.59% in terms of training time) of FedPepTAO compared with 9 baseline approaches. Our code is available at https://github.com/llm-eff/FedPepTAO.
    Comment: 18 pages, accepted by EMNLP 202
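    To ground the two ingredients named in the abstract, here is a minimal sketch of one server round: clients return locally tuned soft-prompt tensors, and the server applies an Adam-style adaptive step to the averaged pseudo-gradient, a common way to dampen client drift under non-IID data. All names and the exact optimizer are assumptions; FedPepTAO's partial-layer prompt selection is not reproduced here.

```python
import torch

def server_round(global_prompt, client_prompts, state,
                 lr=1e-2, betas=(0.9, 0.999), eps=1e-8):
    """One FL round over soft-prompt parameters only (the LLM backbone
    stays frozen, which keeps communication cheap)."""
    # pseudo-gradient: mean client movement relative to the global prompt
    delta = torch.stack([p - global_prompt for p in client_prompts]).mean(0)
    state["t"] += 1
    state["m"] = betas[0] * state["m"] + (1 - betas[0]) * delta
    state["v"] = betas[1] * state["v"] + (1 - betas[1]) * delta.pow(2)
    m_hat = state["m"] / (1 - betas[0] ** state["t"])
    v_hat = state["v"] / (1 - betas[1] ** state["t"])
    # Adam-style server step smooths noisy, drift-prone client updates
    return global_prompt + lr * m_hat / (v_hat.sqrt() + eps), state

# usage: state = {"m": torch.zeros_like(prompt),
#                 "v": torch.zeros_like(prompt), "t": 0}
```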