124 research outputs found

    Contrastive Graph Prompt-tuning for Cross-domain Recommendation

    Recommender systems are frequently challenged by the data sparsity problem. One approach to mitigating this issue is cross-domain recommendation. In a cross-domain context, sharing knowledge between domains can enhance effectiveness in the target domain. Recent cross-domain methods have employed a pre-training approach, but we argue that these methods often result in suboptimal fine-tuning, especially with large neural models. Modern language models utilize prompts for efficient model tuning: such prompts act as a tunable latent vector, allowing the main model parameters to be frozen. In our research, we introduce the Personalised Graph Prompt-based Recommendation (PGPRec) framework, which leverages the advantages of prompt-tuning. Within this framework, we formulate personalised graph prompts item-wise, rooted in items that a user has previously engaged with. Specifically, we employ Contrastive Learning (CL) to produce pre-trained embeddings that offer greater generalisability in the pre-training phase, ensuring robust training during the tuning phase. Our evaluation of PGPRec in cross-domain scenarios involves comprehensive testing on the top-k recommendation task and a cold-start analysis. Our empirical findings, based on four Amazon Review datasets, reveal that the PGPRec framework can decrease the tuned parameters by as much as 74% while maintaining competitive performance. Remarkably, there is an 11.41% improvement in performance over the strongest baseline in cold-start situations.
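
    The core prompt-tuning idea in the abstract above can be sketched in a few lines: freeze the pre-trained embeddings and update only a small per-user prompt vector. This is a minimal illustrative sketch, not the paper's implementation; all sizes, names, and the toy loss are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained "backbone": fixed embedding tables (hypothetical sizes).
item_emb = rng.normal(size=(1000, 64))   # frozen during tuning
user_emb = rng.normal(size=(500, 64))    # frozen during tuning

# Tunable per-user prompt: one small latent vector per user.
prompt = np.zeros((500, 64))             # the only parameters we update

def score(u, i):
    # The prompt is added to the frozen user embedding before the dot product.
    return (user_emb[u] + prompt[u]) @ item_emb[i]

# One illustrative gradient step on the prompt only (squared-error toy loss).
u, i, target = 3, 42, 1.0
err = score(u, i) - target
prompt[u] -= 0.01 * err * item_emb[i]    # the backbone stays untouched

frozen = item_emb.size + user_emb.size
tuned = prompt.size
print(f"tuned fraction: {tuned / (tuned + frozen):.2%}")  # → 25.00% here
```

    Only the prompt table receives gradients, which is what makes the tuned-parameter count a small fraction of the full model.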

    AlphaVC: A Reinforcement Learning-based Venture Capital Investment Strategy

    Venture capital investments play a powerful role in fueling the emergence and growth of early-stage startups. However, only a small fraction of venture-backed startups survive and exit successfully. Prior data-driven prediction-based or recommendation-based solutions are incapable of providing effective and actionable strategies on proper investment timing and amounts for startups across different investment rounds. In this paper, we develop a novel reinforcement learning-based method, AlphaVC, to facilitate venture capitalists’ decision-making. Our policy-based reinforcement learning agents can dynamically identify the best candidates and sequentially place the optimal investment amounts at the proper rounds to maximize financial returns for a given portfolio. We retrieve company demographics and investment activity data from Crunchbase. Our methodology demonstrates its efficacy and superiority in ranking and portfolio-based performance metrics in comparison with various state-of-the-art baseline methods. Through sensitivity and ablation analyses, our research highlights the significance of factoring in the distal outcome and acknowledging the learning effect when making decisions at different time points. Additionally, we observe that AlphaVC concentrates on a select number of high-potential companies but distributes investments evenly across various stages of the investment process.
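
    The policy-based sequential decision setup described above can be illustrated with a minimal REINFORCE-style sketch; the state features, action set, and payoff model below are hypothetical stand-ins, not AlphaVC's actual design or Crunchbase data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: at each round the agent picks an investment amount
# (an action index) from a softmax policy over a small state feature vector.
amounts = np.array([0.0, 1.0, 5.0])          # invest nothing, a little, a lot
theta = np.zeros((4, len(amounts)))          # policy parameters (state dim 4)

def policy(state):
    logits = state @ theta
    p = np.exp(logits - logits.max())
    return p / p.sum()

def episode(state):
    """One investment round; the reward is a noisy toy stand-in for returns."""
    p = policy(state)
    a = rng.choice(len(amounts), p=p)
    reward = amounts[a] * rng.normal(loc=0.3)
    return a, p, reward

# REINFORCE update: raise the log-probability of actions with high return.
# grad of log softmax w.r.t. theta is state ⊗ (onehot(a) - p).
state = rng.normal(size=4)
for _ in range(200):
    a, p, r = episode(state)
    grad = -np.outer(state, p)
    grad[:, a] += state
    theta += 0.05 * r * grad
```

    A real agent would of course condition on round-by-round company features and use the eventual exit outcome as the distal reward the abstract refers to.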

    Hypersonic Vehicles Profile-Following Based on LQR Design Using Time-Varying Weighting Matrices

    In the process of applying a linear quadratic regulator (LQR) to aerial vehicle reentry reference trajectory guidance, the parameters of the aerial vehicle system can be used to calculate the weighting matrices according to the Bryson principle in order to obtain better profile-following performance. However, the traditional method is not applicable to the various disturbances affecting hypersonic vehicles (HSVs), which have particular dynamic characteristics. By calculating the weighting matrices constructed from the Bryson principle using time-varying parameters, a novel time-varying LQR design method is proposed to deal with the various disturbances in HSV reentry profile-following. Different from previous approaches, the current states of the flight system are employed to calculate the parameters of the weighting matrices. Simulation results demonstrate that, using the approach proposed in this chapter, the profile-following performance of HSVs can be improved significantly, and stronger robustness against different disturbances can be obtained.
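
    The Bryson-rule construction with time-varying, state-derived weights can be sketched on a toy double-integrator model standing in for the HSV tracking dynamics; the dynamics, bounds, and floor value below are illustrative assumptions, not the chapter's vehicle model.

```python
import numpy as np

# Double-integrator toy model standing in for the HSV tracking dynamics.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])

def bryson_weights(x_max, u_max):
    """Bryson's rule: weight each channel by the inverse square of its
    allowed maximum. Time-varying x_max/u_max yield time-varying Q, R."""
    Q = np.diag(1.0 / np.square(x_max))
    R = np.diag(1.0 / np.square(u_max))
    return Q, R

def lqr_gain(A, B, Q, R, iters=500):
    """Solve the discrete Riccati equation by fixed-point (value) iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Recompute the gain each step from the current state envelope (hypothetical);
# the floor keeps the weights bounded as the tracking error shrinks.
x = np.array([1.0, 0.0])
for _ in range(100):
    x_max = np.maximum(np.abs(x), 0.2)
    Q, R = bryson_weights(x_max, np.array([2.0]))
    K = lqr_gain(A, B, Q, R)
    u = -K @ x
    x = A @ x + B @ u

print(np.linalg.norm(x))  # tracking error after 10 s of simulated following
```

    Each recomputed gain stabilizes the nominal plant; the time variation enters only through the Bryson weights, mirroring the chapter's use of current flight states in the weighting matrices.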

    mc-BEiT: Multi-choice Discretization for Image BERT Pre-training

    Image BERT pre-training with masked image modeling (MIM) has become a popular practice for self-supervised representation learning. A seminal work, BEiT, casts MIM as a classification task with a visual vocabulary, tokenizing the continuous visual signals into discrete vision tokens using a pre-learned dVAE. Despite being a feasible solution, the improper discretization hinders further improvements of image pre-training. Since image discretization has no ground-truth answers, we believe that the masked patch should not be assigned a unique token id even if a better tokenizer can be obtained. In this work, we introduce an improved BERT-style image pre-training method, namely mc-BEiT, which performs MIM proxy tasks towards eased and refined multi-choice training objectives. Specifically, the multi-choice supervision for the masked image patches is formed by the soft probability vectors of the discrete token ids, which are predicted by the off-the-shelf image tokenizer and further refined by high-level inter-patch perceptions, resorting to the observation that similar patches should share their choices. Extensive experiments on classification, segmentation, and detection tasks demonstrate the superiority of our method, e.g., the pre-trained ViT-B achieves 84.1% top-1 fine-tuning accuracy on ImageNet-1K classification, 50.8% mIoU on ADE20K semantic segmentation, and 51.2% AP^b and 44.3% AP^m for object detection and instance segmentation on COCO, outperforming the competitive counterparts.
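
    The multi-choice objective described above amounts to a cross-entropy against a soft probability vector over token ids rather than a one-hot target. A minimal sketch, assuming a hypothetical 8-token vocabulary and two masked patches:

```python
import numpy as np

def soft_ce(logits, soft_targets):
    """Cross-entropy against a soft target distribution instead of a
    one-hot token id, as in a multi-choice MIM objective."""
    logp = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
    return -np.sum(soft_targets * logp, axis=-1).mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 8))          # model predictions for 2 patches

# Tokenizer output softened over several plausible token ids ("multi-choice").
soft = np.array([[0.6, 0.3, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 0.5, 0.5, 0.0, 0.0, 0.0]])

# One-hot BEiT-style target for comparison: a single token id per patch.
hard = np.zeros_like(soft)
hard[np.arange(2), soft.argmax(-1)] = 1.0

print(soft_ce(logits, soft), soft_ce(logits, hard))
```

    The soft target stops the loss from punishing the model for choosing any of the equally plausible token ids for an ambiguous patch.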

    Contrastive graph prompt-tuning for cross-domain recommendation

    Recommender systems commonly suffer from the long-standing data sparsity problem, where insufficient user-item interaction data limits the systems’ ability to make accurate recommendations. This problem can be alleviated using cross-domain recommendation techniques. In particular, in a cross-domain setting, knowledge sharing between domains permits improved effectiveness on the target domain. While recent cross-domain recommendation techniques used a pre-training configuration, we argue that such techniques lead to a low fine-tuning efficiency, especially when using large neural models. In recent language models, prompts have been used for parameter-efficient and time-efficient tuning of the models on downstream tasks; these prompts represent a tunable latent vector that permits freezing the rest of the language model’s parameters. To address the cross-domain recommendation task in an efficient manner, we propose a novel Personalised Graph Prompt-based Recommendation (PGPRec) framework, which leverages the efficiency benefits of prompt-tuning. In this framework, we develop personalised and item-wise graph prompts based on items relevant to those the user has interacted with. In particular, we apply Contrastive Learning (CL) to generate the pre-trained embeddings, to allow an increased generalisability in the pre-training stage and to ensure an effective prompt-tuning stage. To evaluate the effectiveness of our PGPRec framework in a cross-domain setting, we conduct an extensive evaluation with the top-k recommendation task and perform a cold-start analysis. The obtained empirical results on four Amazon Review datasets show that our proposed PGPRec framework can reduce the tuned parameters by up to 74% with a competitive performance, and achieves an 11.41% improved performance compared to the strongest baseline in a cold-start scenario.

    Identify the radiotherapy-induced abnormal changes in the patients with nasopharyngeal carcinoma

    Radiotherapy (RT) is the standard treatment for nasopharyngeal carcinoma (NPC), but it often causes inevitable brain injury in the course of treatment. The majority of patients show no abnormal signal or density changes on conventional magnetic resonance imaging (MRI) and computed tomography (CT) examinations during long-term follow-up after radiation therapy. However, by the time visible changes appear on CT and conventional MR imaging, the damage is often already severe and lacks effective treatments, seriously influencing patients' prognosis. Therefore, the present study aimed to investigate the abnormal changes in NPC patients after RT. We exploited a machine learning framework containing two parts, feature extraction and classification, to automatically detect the brain injury. Our results showed that the method could effectively identify the abnormal regions induced by radiotherapy. The highest classification accuracy among the abnormal brain regions was 82.5%. The parahippocampal gyrus was the region with the highest accuracy, suggesting that it may be the most sensitive to radiotherapy and involved in the pathogenesis of radiotherapy-induced brain injury in NPC patients.
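
    The two-part framework (feature extraction followed by classification) can be sketched generically; the synthetic data, the statistics-based feature extractor, and the nearest-centroid classifier below are illustrative stand-ins, not the study's actual imaging pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(volume):
    """Stand-in feature extractor: simple regional statistics per scan."""
    return np.array([volume.mean(), volume.std(), np.abs(volume).max()])

# Hypothetical synthetic "scans" for an RT-treated group vs. controls.
patients = [rng.normal(loc=0.5, size=64) for _ in range(20)]
controls = [rng.normal(loc=0.0, size=64) for _ in range(20)]
X = np.array([extract_features(v) for v in patients + controls])
y = np.array([1] * 20 + [0] * 20)

# Stage two: a nearest-centroid classifier over the extracted features.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=-1), axis=1)
print(f"training accuracy: {(pred == y).mean():.1%}")
```

    A region-wise version of this two-stage scheme, run separately per brain region, is what yields per-region accuracies like the 82.5% reported for the parahippocampal gyrus.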

    Multi-modal Graph Contrastive Learning for Micro-video Recommendation

    Recently, micro-videos have become more popular on social media platforms such as TikTok and Instagram. Engagement on these platforms is facilitated by multi-modal recommendation systems. Indeed, such multimedia content can involve diverse modalities, often represented as visual, acoustic, and textual features to the recommender model. Existing works in micro-video recommendation tend to unify the multi-modal channels, thereby treating each modality with equal importance. However, we argue that these approaches are not sufficient to encode item representations with multiple modalities, since the used methods cannot fully disentangle the users' tastes on different modalities. To tackle this problem, we propose a novel learning method named Multi-Modal Graph Contrastive Learning (MMGCL), which aims to explicitly enhance multi-modal representation learning in a self-supervised manner. In particular, we devise two augmentation techniques to generate multiple views of a user/item: modality edge dropout and modality masking. Furthermore, we introduce a novel negative sampling technique that allows the model to learn the correlation between modalities and ensures the effective contribution of each modality. Extensive experiments conducted on two micro-video datasets demonstrate the superiority of our proposed MMGCL method over existing state-of-the-art approaches in terms of both recommendation performance and training convergence speed.
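
    The two augmentations named above, modality masking and modality edge dropout, can be sketched directly; the feature sizes and the edge list below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item features per modality (visual / acoustic / textual).
item = {"visual": rng.normal(size=8),
        "acoustic": rng.normal(size=8),
        "textual": rng.normal(size=8)}

def modality_masking(item, rng):
    """Zero out one randomly chosen modality to form an augmented view."""
    masked = dict(item)
    drop = rng.choice(list(item))
    masked[drop] = np.zeros_like(item[drop])
    return masked, drop

def modality_edge_dropout(edges, p, rng):
    """Drop each user-item modality edge with probability p."""
    keep = rng.random(len(edges)) > p
    return [e for e, k in zip(edges, keep) if k]

view, dropped = modality_masking(item, rng)
edges = [(0, 1, "visual"), (0, 2, "acoustic"), (1, 2, "textual")]
print(dropped, len(modality_edge_dropout(edges, 0.5, rng)))
```

    Contrasting the resulting views forces the encoder to keep each modality's information useful on its own rather than relying on one dominant channel.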