
    Measuring the Eccentricity of Items

    The long-tail phenomenon tells us that there are many items in the tail. However, not all tail items are the same: each item attracts different kinds of users. Some items are loved by the general public, while others are consumed by eccentric fans. In this paper, we propose a novel metric, item eccentricity, to capture this difference among the consumers of items. Eccentric items are defined as items that are consumed by eccentric users. We used this metric to analyze two real-world datasets of music and movies and observed the characteristics of items in terms of eccentricity. The results showed that the eccentricity of an item does not change much over time, and that eccentric and non-eccentric items exhibit significantly distinct characteristics. The proposed metric effectively separates the eccentric and non-eccentric items mixed together in the tail, which previous measures could not do because they consider only item popularity.
    Comment: Accepted at IEEE International Conference on Systems, Man, and Cybernetics (SMC) 201
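
    The abstract does not spell out the formula, so the sketch below is a hypothetical instantiation: user eccentricity as the distance of a user's consumption profile from the mainstream (centroid) profile, and item eccentricity as the mean eccentricity of the users who consumed the item, mirroring the definition quoted above.

```python
import numpy as np

def item_eccentricity(interactions: np.ndarray) -> np.ndarray:
    """interactions: (n_users, n_items) binary consumption matrix.

    Hypothetical instantiation of the metric; the paper's exact
    definition may differ.
    """
    # Mainstream taste = the average consumption profile.
    mainstream = interactions.mean(axis=0)
    # User eccentricity = distance of the user's profile from the mainstream.
    user_ecc = np.linalg.norm(interactions - mainstream, axis=1)
    # Item eccentricity = mean eccentricity of the users who consumed it.
    counts = np.maximum(interactions.sum(axis=0), 1)  # avoid division by zero
    return (interactions * user_ecc[:, None]).sum(axis=0) / counts
```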

    Interpretable Prototype-based Graph Information Bottleneck

    The success of Graph Neural Networks (GNNs) has led to a need for understanding their decision-making process and for providing explanations of their predictions, which has given rise to explainable AI (XAI) that offers transparent explanations for black-box models. Recently, prototypes have successfully improved the explainability of models: learned prototypes indicate the training graphs that affect a prediction. However, these approaches tend to provide prototypes with excessive information from the entire graph, leading to the exclusion of key substructures or the inclusion of irrelevant ones, which can limit both the interpretability and the performance of the model on downstream tasks. In this work, we propose a novel framework of explainable GNNs, called interpretable Prototype-based Graph Information Bottleneck (PGIB), that incorporates prototype learning within the information bottleneck framework so that prototypes are provided with the key subgraph of the input graph that is important for the model's prediction. This is the first work to incorporate prototype learning into the process of identifying the key subgraphs that have a critical impact on prediction performance. Extensive experiments, including qualitative analysis, demonstrate that PGIB outperforms state-of-the-art methods in terms of both prediction performance and explainability.
    Comment: NeurIPS 202
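
    As a rough loss-level sketch of this combination (an assumed structure, not the authors' implementation), the objective would pair a classification term on prototype-based predictions with an information-bottleneck compression penalty on the extracted subgraph and a term pulling each subgraph embedding toward its nearest prototype; mi_upper_bound stands in for a variational estimate of the mutual information between the graph and its subgraph.

```python
import torch
import torch.nn.functional as F

def pgib_style_loss(subgraph_emb, prototypes, logits, labels,
                    mi_upper_bound, beta=0.1):
    # Classification term: predictions are made from prototype similarities.
    cls = F.cross_entropy(logits, labels)
    # Prototype term: pull each key-subgraph embedding toward its
    # nearest learned prototype so prototypes stay interpretable.
    proto = torch.cdist(subgraph_emb, prototypes).min(dim=1).values.mean()
    # IB compression term: penalize subgraphs that keep irrelevant structure.
    return cls + proto + beta * mi_upper_bound
```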

    Review: Electro-Kinetic Decontamination of Radioactive Concrete Waste from Nuclear Power Plants

    Electro-kinetic decontamination has been studied for the radioactive concrete of nuclear power plants because it effectively removes contaminants from deep inside concrete. Although many experiments have been conducted, systematic comparisons have rarely been made. Through a thorough review, this study reveals how different electro-kinetic decontamination conditions change the decontamination ratio and rate of Cs and Co. The examined conditions include cell configurations (i.e., geometry of the concrete waste, electrode materials, and volume of solutions) and operating conditions (i.e., types and concentrations of solutions, electric field, and test duration). The analysis suggests the important roles of the pH of the electrolytic solution, the electric field, and pre-treatment. We also discuss the chemical conditions under which the decontamination of Cs and Co was optimized in the presence of an applied voltage. In addition, we critically review the conditions of the simulated concrete samples used in previous experiments in comparison with actual nuclear plant data.
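
    When comparing such experiments, the decontamination ratio is conventionally the fraction of the initial activity that was removed; a minimal illustration (standard definition, assumed here rather than taken from the review):

```python
def decontamination_ratio(initial_activity: float, residual_activity: float) -> float:
    """Percentage of the initial radioactivity removed from the specimen."""
    return 100.0 * (initial_activity - residual_activity) / initial_activity

# e.g., a specimen whose Cs activity drops from 1200 to 180 Bq/g:
print(decontamination_ratio(1200.0, 180.0))  # 85.0
```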

    Click-aware purchase prediction with push at the top

    Eliciting user preferences from purchase records for purchase prediction is challenging because negative feedback is not explicitly observed, and because treating all non-purchased items equally as negative feedback is unrealistic. Therefore, in this study, we present a framework that leverages users' past click records to compensate for the missing user-item interactions of purchase records, i.e., non-purchased items. We begin by formulating various model assumptions, each assuming a different order of user preference among purchased, clicked-but-not-purchased, and non-clicked items, to study the usefulness of leveraging click records. We implement these assumptions using the Bayesian personalized ranking (BPR) model, which maximizes the area under the curve for bipartite ranking. However, we argue that using click records for bipartite ranking requires a carefully designed model, because click records are less reliable than purchase records. Therefore, we ultimately propose a novel learning-to-rank method, called P3Stop, for purchase prediction. The proposed model is designed to be robust to relatively unreliable click records by focusing on the accuracy of top-ranked items. Experimental results on two real-world e-commerce datasets demonstrate that P3Stop considerably outperforms state-of-the-art implicit-feedback-based recommendation methods, especially for top-ranked items.
    Comment: For the final published journal version, see https://doi.org/10.1016/j.ins.2020.02.06
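
    A minimal sketch of the pairwise BPR building block under one of the assumed preference orders (purchased > clicked-but-not-purchased > non-clicked); P3Stop's top-focused weighting is more elaborate than this plain BPR loss.

```python
import torch

def bpr_loss(user_emb, item_emb, u, i_pos, i_neg):
    """u, i_pos, i_neg: index tensors of sampled (user, preferred item,
    less-preferred item) triples under the assumed preference order."""
    x_pos = (user_emb[u] * item_emb[i_pos]).sum(dim=1)  # preferred score
    x_neg = (user_emb[u] * item_emb[i_neg]).sum(dim=1)  # less-preferred score
    # Maximize P(pos ranked above neg) = sigmoid(x_pos - x_neg).
    return -torch.nn.functional.logsigmoid(x_pos - x_neg).mean()
```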

    S-Mixup: Structural Mixup for Graph Neural Networks

    Existing studies applying the mixup technique to graphs mainly focus on graph classification tasks, while research on node classification remains under-explored. In this paper, we propose a novel mixup augmentation for node classification called Structural Mixup (S-Mixup). The core idea is to take the structural information into account while mixing nodes. Specifically, S-Mixup obtains pseudo-labels for unlabeled nodes in a graph, along with their prediction confidence, via a Graph Neural Network (GNN) classifier. These serve as the criteria for composing the mixup pool for both inter- and intra-class mixup. Furthermore, we utilize the edge gradient obtained from GNN training and propose a gradient-based edge selection strategy for selecting the edges to be attached to the nodes generated by mixup. Through extensive experiments on real-world benchmark datasets, we demonstrate the effectiveness of S-Mixup on the node classification task. We observe that S-Mixup enhances the robustness and generalization performance of GNNs, especially in heterophilous settings. The source code of S-Mixup is available at https://github.com/SukwonYun/S-Mixup.
    Comment: CIKM 2023 (Short Paper)
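
    A schematic of the two ingredients named above, with assumed names and a fixed mixing coefficient; the gradient-based edge-attachment step is omitted for brevity.

```python
import torch

def mixup_pool(logits, threshold=0.9):
    # Pseudo-labels and confidences from the GNN classifier decide which
    # unlabeled nodes may join the mixup pool.
    prob = torch.softmax(logits, dim=1)
    conf, pseudo_label = prob.max(dim=1)
    eligible = (conf > threshold).nonzero(as_tuple=True)[0]
    return eligible, pseudo_label

def mix_nodes(x_i, y_i, x_j, y_j, lam=0.7):
    # Convex combination of two node feature vectors and their soft labels.
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j
```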

    Task Relation-aware Continual User Representation Learning

    User modeling, which learns to represent users in a low-dimensional representation space based on their past behaviors, has received a surge of interest from industry for providing personalized services to users. Previous efforts in user modeling mainly focus on learning a task-specific user representation designed for a single task. However, since learning a task-specific user representation for every task is infeasible, recent studies introduce the concept of a universal user representation, a more generalized representation of a user that is relevant to a variety of tasks. Despite their effectiveness, existing approaches for learning universal user representations are impractical in real-world applications due to their data requirements, catastrophic forgetting, and limited learning capability for continually added tasks. In this paper, we propose a novel continual user representation learning method, called TERACON, whose learning capability is not limited as the number of learned tasks increases, while capturing the relationships between tasks. The main idea is to introduce an embedding for each task, i.e., a task embedding, which is utilized to generate task-specific soft masks that not only allow the entire model parameters to be updated until the end of the training sequence, but also facilitate capturing the relationships between tasks. Moreover, we introduce a novel knowledge retention module with a pseudo-labeling strategy that successfully alleviates the long-standing problem of continual learning, i.e., catastrophic forgetting. Extensive experiments on public and proprietary real-world datasets demonstrate the superiority and practicality of TERACON. Our code is available at https://github.com/Sein-Kim/TERACON.
    Comment: KDD 202
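
    A rough sketch of the task-embedding-to-soft-mask idea; the scaled-sigmoid gate and the layer layout are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TaskMaskedLayer(nn.Module):
    """One shared linear layer gated by a per-task soft mask."""

    def __init__(self, dim, n_tasks, scale=10.0):
        super().__init__()
        self.shared = nn.Linear(dim, dim, bias=False)  # updated by all tasks
        self.task_emb = nn.Embedding(n_tasks, dim)     # one embedding per task
        self.scale = scale  # sharpness of the (assumed) sigmoid gate

    def forward(self, h, task_id):
        # The soft mask routes the shared parameters per task while
        # leaving every parameter trainable throughout the task sequence.
        mask = torch.sigmoid(self.scale * self.task_emb(task_id))
        return self.shared(h) * mask
```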

    Workload-Aware Scheduling using Markov Decision Process for Infrastructure-Assisted Learning-Based Multi-UAV Surveillance Networks

    In modern networking research, infrastructure-assisted unmanned aerial vehicles (UAVs) are actively considered for real-time learning-based surveillance and aerial data delivery under unexpected 3D free mobility and coordination. In this system model, it is essential to consider both the power limitations of UAVs and the deep learning performance of autonomous object recognition (for abnormal behavior detection) in the infrastructure/towers. To overcome the power limitations of UAVs, this paper proposes a novel aerial scheduling algorithm between multiple UAVs and multiple towers in which the towers conduct wireless power transfer toward the UAVs. In addition, to support high-performance training of the learning models in the towers, we propose a data delivery scheme in which UAVs deliver training data to the towers fairly, preventing problems due to data imbalance (e.g., huge computation overhead caused by delivering too much data, or overfitting from delivering too little). The paper therefore proposes a novel workload-aware scheduling algorithm between multiple towers and multiple UAVs for joint power-charging from towers to their associated UAVs and training-data delivery from UAVs to their associated towers. To compute workload-aware optimal scheduling decisions at each unit time, our solution approach is designed based on a Markov decision process (MDP) to achieve (i) time-varying low-complexity computation and (ii) pseudo-polynomial optimality. As the performance evaluation results show, our proposed algorithm ensures (i) sufficient time for resource exchanges between towers and UAVs, (ii) the most even and uniform data collection among the compared algorithms, and (iii) convergence of the performance of all towers to optimal levels.
    Comment: 15 pages, 10 figures
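
    A toy value-iteration routine for a generic finite MDP scheduler, included only to make the solution concept concrete; the states, actions, and rewards are placeholders rather than the paper's joint charging/data-delivery formulation.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P: (A, S, S) transition probabilities; R: (A, S) expected rewards.
    Returns the optimal state values and a greedy scheduling policy."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * P @ V          # (A, S): value of each action per state
        V_new = Q.max(axis=0)          # best schedulable action per state
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```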