    Enhancing Transformers without Self-supervised Learning: A Loss Landscape Perspective in Sequential Recommendation

    Transformer and its variants are a powerful class of architectures for sequential recommendation, owing to their ability to capture a user's dynamic interests from past interactions. Despite their success, Transformer-based models often require optimizing a large number of parameters, making them difficult to train from the sparse data typical of sequential recommendation. To address data sparsity, previous studies have used self-supervised learning to enhance Transformers, for example by pre-training embeddings from item attributes or applying contrastive data augmentations. However, these approaches encounter several training issues, including initialization sensitivity, manual data augmentations, and large batch-size memory bottlenecks. In this work, we investigate Transformers from the perspective of loss geometry, aiming to improve the models' data efficiency and generalization in sequential recommendation. We observe that Transformers (e.g., SASRec) can converge to extremely sharp local minima if not adequately regularized. Inspired by the recent Sharpness-Aware Minimization (SAM), we propose SAMRec, which significantly improves the accuracy and robustness of sequential recommendation. SAMRec performs comparably to state-of-the-art self-supervised Transformers, such as S^3Rec and CL4SRec, without the need for pre-training or strong data augmentations.
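
    The sharpness-aware update that SAMRec builds on can be sketched as follows; this is a minimal, generic SAM step assuming a PyTorch-style model and optimizer, not the authors' released code, and the rho value is illustrative.

        import torch

        def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
            # 1) Gradients at the current weights.
            loss = loss_fn(model, batch)
            loss.backward()

            # 2) Perturb each parameter by epsilon = rho * g / ||g||,
            #    i.e. climb toward the locally sharpest direction.
            grads = [p.grad for p in model.parameters() if p.grad is not None]
            grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
            eps = []
            with torch.no_grad():
                for p in model.parameters():
                    if p.grad is None:
                        eps.append(None)
                        continue
                    e = rho * p.grad / grad_norm
                    p.add_(e)
                    eps.append(e)
            model.zero_grad()

            # 3) Gradients at the perturbed weights, then restore the weights.
            loss_fn(model, batch).backward()
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    if e is not None:
                        p.sub_(e)

            # 4) Apply the perturbed-point gradients to the original weights.
            base_optimizer.step()
            base_optimizer.zero_grad()
            return loss.item()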

    TinyKG: Memory-Efficient Training Framework for Knowledge Graph Neural Recommender Systems

    There has been an explosion of interest in designing Knowledge Graph Neural Networks (KGNNs), which achieve state-of-the-art performance and provide strong explainability for recommendation. This promising performance mainly results from their ability to capture high-order proximity messages over knowledge graphs. However, training KGNNs at scale is challenging due to high memory usage: automatic differentiation engines (e.g., TensorFlow/PyTorch) generally cache all intermediate activation maps from the forward pass in order to compute gradients in the backward pass, which leads to a large GPU memory footprint. Existing work sidesteps this problem with multi-GPU distributed frameworks, which is impractical when deploying KGNNs in memory-constrained environments, especially for industry-scale graphs. Here we present TinyKG, a memory-efficient GPU-based training framework for KGNNs for recommendation tasks. Specifically, TinyKG uses exact activations in the forward pass while storing a quantized version of the activations in GPU buffers. During the backward pass, these low-precision activations are dequantized back to full-precision tensors to compute gradients. To reduce quantization errors, TinyKG applies a simple yet effective quantization algorithm that is unbiased with low variance. As a result, the training memory footprint of KGNNs is largely reduced with negligible accuracy loss. To evaluate TinyKG, we conduct comprehensive experiments on real-world datasets and find that TinyKG with INT2 quantization reduces the memory footprint of activation maps by 7x, with only a 2% loss in accuracy, allowing us to deploy KGNNs on memory-constrained devices.
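
    A minimal sketch of the kind of unbiased stochastic activation quantization TinyKG describes; the bit-width, min-max scaling scheme, and function names here are illustrative assumptions, not the paper's exact algorithm.

        import torch

        def quantize_activation(x, num_bits=2):
            # Map x onto 2**num_bits - 1 integer levels with stochastic rounding,
            # so the dequantized value equals x in expectation (unbiased).
            levels = 2 ** num_bits - 1
            x_min = x.min()
            scale = (x.max() - x_min) / levels + 1e-12
            normalized = (x - x_min) / scale
            floor = normalized.floor()
            q = (floor + torch.bernoulli(normalized - floor)).to(torch.uint8)
            return q, x_min, scale

        def dequantize_activation(q, x_min, scale):
            return q.float() * scale + x_min

        # Forward pass stores only q (e.g. INT2) in the GPU buffer; the backward
        # pass dequantizes it back to full precision before computing gradients.
        x = torch.randn(1024, 64)
        q, x_min, scale = quantize_activation(x, num_bits=2)
        x_hat = dequantize_activation(q, x_min, scale)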

    Functional pathway mapping analysis for hypoxia-inducible factors

    Background: Hypoxia-inducible factors (HIFs) are transcription factors that play a crucial role in the response to hypoxic stress in living organisms. The HIF pathway is activated by changes in cellular oxygen levels and has significant impacts on the regulation of gene expression patterns in cancer cells. Identifying functional conservation across species and discovering conserved regulatory motifs can facilitate the selection of reference species for empirical tests. This paper describes a cross-species functional pathway mapping strategy, based on evidence of homologous relationships, that employs matrix-based searching techniques to identify transcription factor-binding sites on all retrieved HIF target genes. Results: HIF-related orthologous and paralogous genes were mapped onto the conserved pathways to indicate functional conservation across species. Quantitatively measured HIF pathways are depicted to illustrate the extent of functional conservation. The results show that, in spite of speciation, distantly related species may exhibit functional conservation owing to conserved pathways. The novel terms OrthRate and ParaRate are proposed to quantitatively indicate the flexibility of a homologous pathway and reveal the alternative regulation of functional genes. Conclusion: The developed functional pathway mapping strategy provides a bioinformatics approach for constructing biological pathways by highlighting the homologous relationships between various model species. The mapped HIF pathways were quantitatively illustrated and evaluated by statistically analyzing their conserved transcription factor-binding elements.
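
    The matrix-based search mentioned above can be illustrated with a short position weight matrix (PWM) scan; the toy matrix, threshold, and sequence below are purely illustrative and are not taken from the paper.

        # Slide a position weight matrix along a promoter sequence and report
        # candidate transcription factor-binding sites scoring above a threshold.
        PWM = {  # toy 4-column matrix; real matrices would come from TF databases
            "A": [0.9, 0.1, 0.0, 0.1],
            "C": [0.0, 0.8, 0.1, 0.0],
            "G": [0.1, 0.1, 0.8, 0.0],
            "T": [0.0, 0.0, 0.1, 0.9],
        }

        def scan(sequence, pwm, threshold=0.5):
            width = len(next(iter(pwm.values())))
            hits = []
            for i in range(len(sequence) - width + 1):
                window = sequence[i:i + width]
                score = 1.0
                for j, base in enumerate(window):
                    score *= pwm.get(base, [0.0] * width)[j]
                if score >= threshold:
                    hits.append((i, window, score))
            return hits

        print(scan("TTACGTAGACGT", PWM))  # -> two candidate sites at offsets 2 and 8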

    Hessian-aware Quantized Node Embeddings for Recommendation

    Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, searching and ranking over a large item corpus usually incurs high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into a binary embedding space to reduce space requirements and accelerate inference, using the Straight-Through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes a gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete user/item representations that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder that learns continuous node embeddings and a quantization module that compresses the full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference compared to vanilla GNNs. To address the gradient mismatch problem in the STE, we further consider the quantization errors and their second-order derivatives for better stability. Experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
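
    A minimal sketch of the straight-through estimator (STE) that HQ-GNN improves on, quantizing an embedding in the forward pass while letting gradients pass through the rounding untouched; the Hessian-aware correction itself is not reproduced here, and the bit-width is illustrative.

        import torch

        class STEQuantize(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x, num_bits):
                # Uniform quantization to 2**num_bits levels over the tensor's range.
                levels = 2 ** num_bits - 1
                x_min = x.min()
                scale = (x.max() - x_min) / levels + 1e-12
                return torch.round((x - x_min) / scale) * scale + x_min

            @staticmethod
            def backward(ctx, grad_output):
                # Straight-through: the non-differentiable rounding is ignored,
                # which is exactly the gradient mismatch HQ-GNN tries to correct.
                return grad_output, None

        emb = torch.randn(5, 16, requires_grad=True)
        q_emb = STEQuantize.apply(emb, 4)
        q_emb.sum().backward()  # emb.grad is all ones despite the rounding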

    Sharpness-Aware Graph Collaborative Filtering

    Graph Neural Networks (GNNs) have achieved impressive performance in collaborative filtering. However, GNNs tend to perform poorly when the training and test distributions are not well aligned. Moreover, training GNNs requires optimizing non-convex neural networks with an abundance of local and global minima that may differ widely in their performance at test time, so the minima must be chosen carefully. Here we propose an effective training schema, called gSAM, under the principle that flatter minima generalize better than sharper ones. To achieve this goal, gSAM regularizes the flatness of the weight loss landscape by forming a bi-level optimization: the outer problem conducts the standard model training while the inner problem helps the model jump out of sharp minima. Experimental results show the superiority of gSAM.
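
    In generic sharpness-aware terms, the bi-level schema above can be written as the min-max problem below, where L is the collaborative-filtering loss, w the GNN weights, and rho the perturbation radius; this is a standard formulation, not necessarily the paper's exact notation.

        \min_{w} \; \max_{\|\epsilon\|_2 \le \rho} \; L(w + \epsilon)

    The inner maximization plays the role of the inner problem that pushes the model out of sharp minima, while the outer minimization is the standard training step evaluated at the perturbed weights.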

    Puncture resistance and mechanical properties of graphene oxide reinforced natural rubber latex

    Natural rubber (NR) latex gloves are widely used as a critical barrier for healthcare workers. However, they can still be easily perforated by sharp devices and instruments. The aim of this study was to investigate the effect of adding graphene oxide (GO) to low-ammonia NR latex on its puncture resistance, mechanical properties and thermal stability. GO was synthesized using the modified Hummers' method. The produced GO was mixed into the NR latex solution at various loadings (0.01-1.0 wt.%), followed by a coagulant dipping process using ceramic plates to produce film samples. Puncture resistance was enhanced by 12% with 1.0 wt.% GO/NR. The incorporation of GO also improved the stress at 300% and 500% elongation, the modulus at 300% and 500% elongation, and the tear strength of low-ammonia NR latex films.

    Interaction between Red Yeast Rice and CYP450 Enzymes/P-Glycoprotein and Its Implication for the Clinical Pharmacokinetics of Lovastatin

    Red yeast rice (RYR) can reduce cholesterol through its active component, lovastatin. This study investigated the pharmacokinetic properties of lovastatin in RYR products and potential RYR-drug interactions. Extracts of three registered RYR products (LipoCol Forte, Cholestin, and Xuezhikang) were more effective than pure lovastatin in inhibiting the activities of cytochrome P450 (CYP450) enzymes and P-glycoprotein (P-gp). Among CYP450 enzymes, RYR showed the strongest inhibition of CYP1A2 and CYP2C19, with inhibitory potencies comparable to those of the corresponding typical inhibitors. In healthy volunteers taking the RYR product LipoCol Forte, the pharmacokinetics of lovastatin and lovastatin acid were linear over a single-dose range of 1 to 4 capsules, and no significant accumulation was observed after multiple dosing. Concomitant use of one LipoCol Forte capsule with nifedipine did not change the pharmacokinetics of nifedipine. However, concomitant use of gemfibrozil with LipoCol Forte resulted in a significant increase in the plasma concentration of lovastatin acid. These findings suggest that RYR products may not affect the pharmacokinetics of concomitant medications despite their inhibition of CYP450 enzymes and P-gp, whereas gemfibrozil affects the pharmacokinetics of lovastatin acid when used concomitantly with RYR products.