
    Soft BPR Loss for Dynamic Hard Negative Sampling in Recommender Systems

    Full text link
    In recommender systems, leveraging Graph Neural Networks (GNNs) to model the bipartite relation between users and items is a promising approach. However, negative sampling methods adapted to GNN-based recommenders still require considerable effort. One critical gap is that it is difficult to distinguish true negatives from the mass of unobserved items during hard negative sampling. To address this problem, this paper develops a novel hard negative sampling method for GNN-based recommender systems by simply reformulating the loss function. We conduct extensive experiments on three datasets, demonstrating that the proposed method outperforms a set of state-of-the-art baselines. Comment: 9 pages, 16 figures
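    The abstract does not spell out the reformulated loss, so the sketch below only illustrates the idea: the standard BPR loss alongside a hypothetical "softened" variant that down-weights suspiciously high-scoring sampled negatives (likely false negatives). The weighting scheme and the temperature tau are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # Standard Bayesian Personalized Ranking loss: -log sigmoid(s_pos - s_neg).
    # pos_scores: (B,) positive-item scores, neg_scores: (B, K) sampled negatives.
    return -F.logsigmoid(pos_scores.unsqueeze(-1) - neg_scores).mean()

def soft_bpr_loss(pos_scores, neg_scores, tau=1.0):
    # Hypothetical "soft" variant (illustration only): weight each sampled
    # negative by a softmax over the *negated* scores, so very high-scoring
    # negatives (possible unobserved positives) contribute less to the loss.
    with torch.no_grad():
        w = torch.softmax(-neg_scores / tau, dim=-1)                 # (B, K), rows sum to 1
    pairwise = F.logsigmoid(pos_scores.unsqueeze(-1) - neg_scores)   # (B, K)
    return -(w * pairwise).sum(dim=-1).mean()
```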

    Data Upcycling Knowledge Distillation for Image Super-Resolution

    Full text link
    Knowledge distillation (KD) has emerged as a challenging yet promising technique for compressing deep learning models, characterized by the transfer of extensive learned representations from proficient, computationally intensive teacher models to compact student models. However, only a handful of studies have attempted to compress models for single image super-resolution (SISR) through KD, and their effect on student model enhancement remains marginal. In this paper, we put forth an approach from the perspective of efficient data utilization, namely Data Upcycling Knowledge Distillation (DUKD), which transfers the teacher's prior knowledge to the student via upcycled in-domain data derived from the training inputs. This upcycling process is realized through two efficient image zooming operations and invertible data augmentations, which introduce label-consistency regularization to KD for SISR and substantially boost the student model's generalization. Owing to its versatility, DUKD can be applied across a broad spectrum of teacher-student architectures. Comprehensive experiments across diverse benchmarks demonstrate that the proposed DUKD method significantly outperforms prior art, exemplified by a gain of up to 0.5 dB in PSNR over baseline methods, and by an RCAN student model with 67% fewer parameters performing on par with the RCAN teacher model.
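    As a rough illustration of the data-upcycling idea (a zooming operation plus an invertible augmentation used as a label-consistency regularizer), here is a minimal PyTorch training-step sketch. The specific zooming operation (bicubic downscaling of the HR target), the L1 losses, and the weight alpha are assumptions; the paper's exact operations and loss weighting may differ.

```python
import torch
import torch.nn.functional as F

def aug(x, k, flip):
    # Rotations by multiples of 90 degrees and horizontal flips are invertible
    # and commute with super-resolution, which is what makes the
    # label-consistency regularization possible.
    x = torch.rot90(x, k, dims=(-2, -1))
    return torch.flip(x, dims=(-1,)) if flip else x

def dukd_step(teacher, student, lr, hr, scale=4, alpha=0.5):
    # 1) Usual reconstruction loss plus distillation on the original LR input.
    with torch.no_grad():
        sr_t = teacher(lr)
    sr_s = student(lr)
    loss = F.l1_loss(sr_s, hr) + alpha * F.l1_loss(sr_s, sr_t)

    # 2) Upcycled in-domain data: here, a new LR image obtained by bicubically
    #    downscaling the HR target (one possible "zooming" operation), passed
    #    through a random invertible augmentation; the student must match the
    #    correspondingly transformed teacher output.
    lr_up = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic",
                          align_corners=False)
    k, flip = int(torch.randint(0, 4, ())), bool(torch.rand(()) < 0.5)
    with torch.no_grad():
        target = aug(teacher(lr_up), k, flip)
    loss = loss + alpha * F.l1_loss(student(aug(lr_up, k, flip)), target)
    return loss
```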

    On the bootstrap saddlepoint approximations

    No full text
    We compare saddlepoint approximations to the exact distributions of a studentized mean and of its bootstrap approximation. We show that, on bounded sets, these empirical saddlepoint approximations achieve second-order relative errors uniformly. We also consider the relative errors for larger deviations. It follows that the studentized-t bootstrap p-value and the coverage of the bootstrap confidence interval have second-order relative errors.
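    For context, the classical Lugannani-Rice saddlepoint approximation to the tail probability of a sample mean is shown below; the empirical version discussed in the abstract replaces the cumulant generating function by its empirical counterpart and handles studentization, which is omitted here.

```latex
% Lugannani-Rice tail approximation for the mean of n i.i.d. observations
% with per-observation cumulant generating function K(t).
\[
  P\!\left(\bar{X}_n \ge x\right) \;\approx\;
  1 - \Phi(w) + \phi(w)\left(\frac{1}{u} - \frac{1}{w}\right),
\]
\[
  \text{where } K'(\hat{t}) = x, \qquad
  w = \operatorname{sgn}(\hat{t})\sqrt{2n\,\{\hat{t}\,x - K(\hat{t})\}}, \qquad
  u = \hat{t}\,\sqrt{n\,K''(\hat{t})}.
\]
```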

    Joint Semantic Intelligent Detection of Vehicle Color under Rainy Conditions

    No full text
    Color is an important feature of vehicles, and it plays a key role in intelligent traffic management and criminal investigation. Existing algorithms for vehicle color recognition are typically trained on data collected under good weather conditions and show poor robustness in outdoor visual tasks. Fine-grained vehicle color recognition under rainy conditions remains a challenging problem. In this paper, an algorithm for joint deraining and vehicle color recognition (JADAR) is proposed, in which a three-layer UNet is embedded into RetinaNet-50 to obtain joint semantic fusion information. More precisely, the UNet subnet is used for deraining, and the feature maps of the recovered clean image and of the input image are cascaded into the Feature Pyramid Network (FPN) module to achieve joint semantic learning. The joint feature maps are then fed into the class and box subnets to classify and locate objects. The RainVehicleColor-24 dataset is used to train JADAR for vehicle color recognition under rainy conditions, and extensive experiments are conducted. Since the deraining and detection modules share the feature extraction layers, our algorithm maintains the inference time of RetinaNet-50 while improving its robustness. On self-built and public real-world datasets, the mean average precision (mAP) of vehicle color recognition reaches 72.07%, outperforming both state-of-the-art vehicle color recognition algorithms and popular object detection algorithms.
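    A minimal structural sketch of the pipeline described above, using placeholder modules: the derain subnet recovers a clean image, backbone features from the rainy input and from the recovered image are concatenated ("cascaded"), and the fused maps go through an FPN into the class/box heads. The module shapes, channel counts, and the residual rain-removal formulation are assumptions; the paper's actual RetinaNet-50 integration is more detailed.

```python
import torch
import torch.nn as nn

class TinyDerainUNet(nn.Module):
    # Placeholder standing in for the paper's three-layer UNet deraining subnet.
    def __init__(self, ch=3, base=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, ch, 3, padding=1),
        )

    def forward(self, x):
        # Predict the rain layer and subtract it from the input.
        return x - self.body(x)

class JointDerainDetector(nn.Module):
    # Conceptual wiring of the joint pipeline: `backbone` returns a list of
    # multi-scale feature maps, `fpn` fuses them, and the class/box heads are
    # applied to every pyramid level.
    def __init__(self, backbone, fpn, cls_head, box_head):
        super().__init__()
        self.derain = TinyDerainUNet()
        self.backbone, self.fpn = backbone, fpn
        self.cls_head, self.box_head = cls_head, box_head

    def forward(self, rainy):
        clean = self.derain(rainy)
        rainy_feats = self.backbone(rainy)        # shared feature extractor
        clean_feats = self.backbone(clean)
        fused = [torch.cat([fr, fc], dim=1)       # "cascade" the two streams
                 for fr, fc in zip(rainy_feats, clean_feats)]
        pyramid = self.fpn(fused)
        return clean, [(self.cls_head(p), self.box_head(p)) for p in pyramid]
```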

    Saddlepoint approximations to the trimmed mean

    No full text
    Saddlepoint approximations for the trimmed mean and the studentized trimmed mean are established. Some numerical evidence on the quality of our saddlepoint approximations is also included.
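    For reference, the statistic being approximated is the trimmed mean of the ordered sample; one standard definition (the paper's trimming convention may differ) is:

```latex
% The alpha-trimmed mean, with r = floor(n*alpha) observations removed from
% each tail of the ordered sample X_(1) <= ... <= X_(n).
\[
  \bar{X}_{n,\alpha} \;=\; \frac{1}{n - 2r}\sum_{i=r+1}^{n-r} X_{(i)},
  \qquad r = \lfloor n\alpha \rfloor .
\]
```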

    Stochastic regression and its application to hedging in finance

    No full text
    In this paper we investigate how to employ stochastic regression to hedge risks in finance, where the risk of a security is measured by its quadratic variation process. Mykland and Zhang used this technique to demonstrate how to reduce the risk of a given security by introducing another security. Here, we investigate how to further reduce the remaining unhedgeable risk by adding more hedging securities. Some practical guidelines on how to choose these hedging securities in practice are also given.
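    In the quadratic-variation framework the abstract refers to, the hedged residual and its remaining risk can be written as below; the single-instrument coefficient is the standard risk-minimizing slope, while the multi-instrument extension studied in the paper is the analogous least-squares solution.

```latex
% Hedging a security S with instruments S^{(1)},...,S^{(k)}: the residual
% after taking (possibly time-varying) positions beta_1,...,beta_k is
\[
  R_t \;=\; S_t \;-\; \sum_{j=1}^{k} \int_0^t \beta_j(s)\, dS^{(j)}_s ,
\]
% and the remaining (unhedgeable) risk is measured by the quadratic variation
% <R,R>_T. With a single hedging instrument, minimizing d<R,R>_t pointwise
% gives the "stochastic regression" slope
\[
  \beta(t) \;=\; \frac{d\langle S, S^{(1)}\rangle_t}{d\langle S^{(1)}, S^{(1)}\rangle_t}.
\]
```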