251 research outputs found

    Motivation To Play Esports: Case of League of Legends

    The number of people playing electronic sports has grown recently, and the most popular title is League of Legends (LoL). As a multiplayer online battle arena video game, it is not only a game but also a competitive electronic sport. The purpose of this study was to assess the motivations for playing League of Legends and to compare them across gender, age, and play-frequency groups. The final sample comprised 111 LoL players. The study grouped 12 items into three factors: achievement, socialization, and immersion. Results indicated that achievement factors were stronger motives for men than for women, that socialization factors did not differ significantly across age groups, and that immersion factors did not differ much among players who spent different amounts of time on LoL.
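
    The gender comparison described above is, in effect, an independent-samples test on factor scores. A minimal sketch with fabricated data (the study's items, scales, and scores are not reproduced here) might look like:

    ```python
    # Hypothetical sketch of the abstract's gender comparison: a two-sample
    # t-test on "achievement" factor scores for men vs. women. All scores are
    # fabricated for illustration; the study's actual data are not shown here.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    achievement_men = rng.normal(loc=4.1, scale=0.6, size=60)    # fabricated scores
    achievement_women = rng.normal(loc=3.6, scale=0.6, size=51)  # fabricated scores

    t, p = stats.ttest_ind(achievement_men, achievement_women)
    print(f"t = {t:.2f}, p = {p:.4f}")  # a small p would mirror the reported gender difference
    ```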

    A Reliable Web Services Selection Method for Concurrent Requests

    Current QoS-based service selection methods usually handle a single service request at a time, or make users wait in a queue when the same functional Web service receives more than one request, and then choose the Web service with the best QoS for the current request according to that user's needs. In practice, however, there are multiple concurrent requests for the same functional Web service, and the service's load means the best service cannot be assigned to every user every time. This paper addresses Web service selection for concurrent requests and develops a globally optimal selection method for multiple similar service requesters that optimizes system resources. It proposes an improved social cognitive optimization (ISCO) algorithm, which uses a genetic algorithm for observational learning and a deviating degree to evaluate solutions. Furthermore, an elite strategy is used to enhance the efficiency of ISCO. We evaluate the performance of the ISCO algorithm and the selection method through simulations. The simulation results demonstrate that ISCO is valid for optimization problems with discrete data and more effective than ACO and GA.
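
    A minimal sketch of the flavor of this approach: a population-based search over discrete request-to-service assignments, GA-style recombination standing in for observational learning, plus an elite strategy. All QoS values, the load penalty, and parameters below are invented; the paper's actual operators and deviating-degree measure are not reproduced.

    ```python
    # Toy global selection for concurrent requests: assign N requests to
    # candidate services so total QoS cost is low without overloading anyone.
    import random

    random.seed(0)
    N_REQUESTS, N_SERVICES, CAPACITY = 12, 4, 4
    QOS_COST = [1.0, 1.4, 2.0, 2.5]  # fabricated per-service cost; lower is better

    def cost(assign):
        base = sum(QOS_COST[s] for s in assign)
        # penalize assigning more requests to a service than it can carry
        overload = sum(max(0, assign.count(s) - CAPACITY) for s in range(N_SERVICES))
        return base + 10.0 * overload

    def crossover(a, b):  # GA-style "observational learning" between solutions
        cut = random.randrange(1, N_REQUESTS)
        return a[:cut] + b[cut:]

    def mutate(a, rate=0.1):
        return [random.randrange(N_SERVICES) if random.random() < rate else s for s in a]

    pop = [[random.randrange(N_SERVICES) for _ in range(N_REQUESTS)] for _ in range(30)]
    for _ in range(100):
        pop.sort(key=cost)
        elites = pop[:5]  # elite strategy: best solutions survive unchanged
        children = [mutate(crossover(random.choice(elites), random.choice(pop[:15])))
                    for _ in range(len(pop) - len(elites))]
        pop = elites + children

    best = min(pop, key=cost)
    print(best, cost(best))
    ```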

    Wakening Past Concepts without Past Data: Class-Incremental Learning from Online Placebos

    Not forgetting old-class knowledge is a key challenge for class-incremental learning (CIL), where the model continuously adapts to new classes. A common technique to address this is knowledge distillation (KD), which penalizes prediction inconsistencies between the old and new models. Such predictions are made almost entirely with new-class data, as old-class data is extremely scarce due to the strict memory limitation in CIL. In this paper, we take a deep dive into KD losses and find that "using new class data for KD" not only hinders model adaptation (for learning new classes) but is also inefficient for preserving old-class knowledge. We address this by "using placebos of old classes for KD", where the placebos are chosen from a free image stream, such as Google Images, in an automatic and economical fashion. To this end, we train an online placebo-selection policy to quickly evaluate the quality of streaming images (good or bad placebos) and use only good ones for a one-time feed-forward computation of KD. We formulate the policy training process as an online Markov decision process (MDP) and introduce an online learning algorithm that solves this MDP without incurring much computation cost. In experiments, we show that our method 1) is surprisingly effective even when there is no class overlap between the placebos and the original old-class data, 2) does not require any additional supervision or memory budget, and 3) significantly outperforms a number of top-performing CIL methods, in particular when using lower memory budgets for old-class exemplars, e.g., five exemplars per class.
    Comment: Accepted to WACV 2024. Code: https://github.com/yaoyao-liu/online-placebo
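
    A minimal PyTorch sketch of the distillation step on placebo images: the frozen old model provides soft targets, and KD is computed on the placebo batch rather than on new-class data. The model handles and temperature are assumptions; the learned MDP selection policy is omitted.

    ```python
    # Standard KD loss (Hinton-style) evaluated on a batch of placebo images.
    import torch
    import torch.nn.functional as F

    def kd_loss(old_logits, new_logits, T=2.0):
        # soften both distributions with temperature T and match them
        p_old = F.softmax(old_logits / T, dim=1)
        log_p_new = F.log_softmax(new_logits / T, dim=1)
        return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

    def distill_on_placebos(old_model, new_model, placebo_batch):
        with torch.no_grad():                  # old model is frozen
            old_logits = old_model(placebo_batch)
        new_logits = new_model(placebo_batch)  # one-time feed-forward on good placebos
        return kd_loss(old_logits, new_logits)
    ```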

    Class-Incremental Exemplar Compression for Class-Incremental Learning

    Exemplar-based class-incremental learning (CIL) finetunes the model with all samples of the new classes but only few-shot exemplars of the old classes in each incremental phase, where "few-shot" abides by the limited memory budget. In this paper, we break this "few-shot" limit based on a simple yet surprisingly effective idea: compressing exemplars by downsampling non-discriminative pixels and saving "many-shot" compressed exemplars in memory. Without needing any manual annotation, we achieve this compression by generating 0-1 masks on discriminative pixels from class activation maps (CAM). We propose an adaptive mask generation model called class-incremental masking (CIM) to explicitly resolve two difficulties of using CAM: 1) transforming the heatmaps of CAM into 0-1 masks with an arbitrary threshold leads to a trade-off between the coverage of discriminative pixels and the quantity of exemplars, as the total memory is fixed; and 2) optimal thresholds vary for different object classes, which is particularly pronounced in the dynamic environment of CIL. We optimize the CIM model alternately with the conventional CIL model through a bilevel optimization problem. We conduct extensive experiments on high-resolution CIL benchmarks including Food-101, ImageNet-100, and ImageNet-1000, and show that using the compressed exemplars from CIM achieves a new state-of-the-art CIL accuracy, e.g., 4.8 percentage points higher than FOSTER on 10-phase ImageNet-1000. Our code is available at https://github.com/xfflzl/CIM-CIL.
    Comment: Accepted to CVPR 2023
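
    A sketch of the masking-and-downsampling idea: binarize a CAM heatmap at a threshold, keep discriminative pixels at full resolution, and replace the rest with a downsampled version. The fixed threshold and factor here are for illustration only; CIM learns class-adaptive thresholds inside the bilevel optimization.

    ```python
    # Mixed-resolution exemplar: full detail where the CAM is high, low-res elsewhere.
    import torch
    import torch.nn.functional as F

    def compress_exemplar(image, cam, threshold=0.5, factor=4):
        # image: (3, H, W) float tensor; cam: (H, W) heatmap normalized to [0, 1]
        mask = (cam >= threshold).float()  # 0-1 mask on discriminative pixels
        small = F.interpolate(image[None], scale_factor=1 / factor,
                              mode="bilinear", align_corners=False)
        blurred = F.interpolate(small, size=image.shape[1:],
                                mode="bilinear", align_corners=False)[0]
        return mask * image + (1 - mask) * blurred  # mixed-resolution exemplar
    ```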

    Mnemonics training: Multi-class incremental learning without forgetting

    Multi-Class Incremental Learning (MCIL) aims to learn new concepts by incrementally updating a model trained on previous concepts. However, there is an inherent trade-off in effectively learning new concepts without catastrophically forgetting previous ones. To alleviate this issue, it has been proposed to keep a few examples of the previous concepts, but the effectiveness of this approach heavily depends on how representative these examples are. This paper proposes a novel and automatic framework we call mnemonics, in which we parameterize exemplars and make them optimizable in an end-to-end manner. We train the framework through bilevel optimization, i.e., at the model level and the exemplar level. We conduct extensive experiments on three MCIL benchmarks, CIFAR-100, ImageNet-Subset, and ImageNet, and show that using mnemonics exemplars can surpass the state-of-the-art by a large margin. Interestingly, the mnemonics exemplars tend to lie on the boundaries between different classes.
    Comment: Experiment results updated (different from the conference version). Code is available at https://github.com/yaoyao-liu/mnemonics-training
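
    A toy bilevel sketch of the idea with a linear classifier: the exemplars are themselves trainable tensors, the inner (model-level) step is kept differentiable, and the outer (exemplar-level) step updates exemplar pixels so the adapted model generalizes. All shapes, data, and learning rates are fabricated; the paper alternates full model-level and exemplar-level phases on real CNNs.

    ```python
    # "Mnemonics" in miniature: exemplars are parameters optimized through a
    # differentiable model update (one-step bilevel optimization).
    import torch
    import torch.nn.functional as F

    exemplars = torch.randn(20, 3 * 32 * 32, requires_grad=True)  # fabricated exemplars
    labels = torch.randint(0, 10, (20,))
    val_x, val_y = torch.randn(64, 3 * 32 * 32), torch.randint(0, 10, (64,))
    W = torch.zeros(10, 3 * 32 * 32, requires_grad=True)          # toy linear classifier
    ex_opt = torch.optim.SGD([exemplars], lr=0.1)

    for _ in range(50):
        # model-level step, kept differentiable with respect to the exemplars
        inner_loss = F.cross_entropy(exemplars @ W.t(), labels)
        (grad_W,) = torch.autograd.grad(inner_loss, W, create_graph=True)
        W_adapted = W - 0.1 * grad_W
        # exemplar-level step: exemplars should make the adapted model
        # perform well on held-out validation data
        outer_loss = F.cross_entropy(val_x @ W_adapted.t(), val_y)
        ex_opt.zero_grad()
        outer_loss.backward()
        ex_opt.step()
        W = W_adapted.detach().requires_grad_(True)  # commit the model update
    ```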

    Meta-transfer learning through hard tasks


    Gear Health Monitoring and RUL Prediction Based on MSB Analysis


    Surgical treatment of the osteoporotic spine with bone cement-injectable cannulated pedicle screw fixation: technical description and preliminary application in 43 patients

    OBJECTIVES: To describe a new approach to polymethylmethacrylate augmentation using bone cement-injectable cannulated pedicle screws. METHODS: Between June 2010 and February 2013, 43 patients with degenerative spinal disease and osteoporosis (T-score …

    Meta-transfer learning for few-shot learning

    CVPR 2019, pp. 403-412. DOI: 10.1109/CVPR.2019.00049