
    Analyzing the impact of storage shortage on data availability in decentralized online social networks

    Maintaining data availability is one of the biggest challenges in decentralized online social networks (DOSNs). Existing work often assumes that a user's friends can always contribute sufficient storage capacity to store all of the user's data. However, this assumption does not always hold in today's online social networks (OSNs), because users now commonly access OSNs from smart mobile devices, whose limited storage capacity may jeopardize data availability. It is therefore important to understand the relation between the storage capacity contributed by OSN users and the level of data availability that the OSNs can achieve. This paper addresses this issue: it establishes a model of data availability over storage capacity and further proposes a novel method to predict data availability on the fly. Extensive simulation experiments have been conducted to evaluate the effectiveness of the data availability model and the on-the-fly prediction method.
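    As a rough illustration of the relation studied here, the sketch below estimates data availability by Monte Carlo simulation: a user's data counts as available when the friends who happen to be online jointly contribute enough storage. The function name, the availability definition, and the independent online-probability model are illustrative assumptions, not the paper's actual model.

    ```python
    import random

    def simulate_availability(data_size, friend_capacities, online_prob,
                              n_trials=10_000, seed=0):
        """Estimate the probability that the online friends jointly
        provide enough storage to host all of a user's data.
        Hypothetical model for illustration only."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            # Each friend is independently online with probability online_prob.
            online_capacity = sum(cap for cap in friend_capacities
                                  if rng.random() < online_prob)
            if online_capacity >= data_size:
                hits += 1
        return hits / n_trials

    # Example: 5 GB of data, ten friends with mixed (mobile-limited)
    # capacities in GB, each online 60% of the time.
    caps = [0.5, 1.0, 1.0, 2.0, 0.5, 4.0, 1.0, 0.5, 2.0, 1.0]
    print(simulate_availability(5.0, caps, 0.6))
    ```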

    In-situ structural identification of Zr3Al2 type metastable phase during crystallization of a Zr-based MG

    A metastable phase was detected by high-energy synchrotron radiation when a Zr-based metallic glass (MG) was annealed under vacuum in a Linkam hot stage at 848 K. The formation and transformation processes of the metastable phase were recorded by the synchrotron radiation method. According to powder diffraction and TEM analysis, the metastable phase formed during crystallization was identified as being of the Zr3Al2 structure type. The structure of this Zr3Al2-type metastable crystalline phase (MCP) was experimentally evidenced by 3D diffraction patterns and described mathematically. The identification of the Zr3Al2 MCP may help in understanding the cluster structure of MGs.

    One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization

    As pre-trained models automate many code intelligence tasks, a widely used paradigm is to fine-tune a model on the task dataset for each programming language. A recent study reported that multilingual fine-tuning benefits a range of tasks and models. However, we find that multilingual fine-tuning leads to performance degradation on the recent models UniXcoder and CodeT5. To alleviate the potential catastrophic forgetting issue in multilingual models, we fix all pre-trained model parameters, insert a parameter-efficient adapter structure, and fine-tune only the adapter. Although it updates only 0.6% of the overall parameters compared to full-model fine-tuning for each programming language, adapter tuning yields consistent improvements on code search and summarization tasks, achieving state-of-the-art results. In addition, we experimentally show its effectiveness in cross-lingual and low-resource scenarios. On code summarization, multilingual fine-tuning with 200 samples per programming language approaches the results of fine-tuning on the entire dataset. Our experiments on three probing tasks show that adapter tuning significantly outperforms full-model fine-tuning and effectively overcomes catastrophic forgetting.

    Comment: Accepted to the 45th International Conference on Software Engineering (ICSE 2023).
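    To make the adapter-tuning setup concrete, the sketch below shows the widely used bottleneck adapter design (down-projection, non-linearity, up-projection, residual connection) together with freezing of the backbone. The class names, bottleneck size, and name-based freezing heuristic are assumptions for illustration and may differ from the paper's exact configuration.

    ```python
    import torch
    import torch.nn as nn

    class Adapter(nn.Module):
        """Bottleneck adapter inserted into a frozen pre-trained model.
        Only these few parameters are trained per language/task."""
        def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
            super().__init__()
            self.down = nn.Linear(hidden_dim, bottleneck_dim)
            self.up = nn.Linear(bottleneck_dim, hidden_dim)
            self.act = nn.GELU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Residual connection keeps the frozen representation intact.
            return x + self.up(self.act(self.down(x)))

    def freeze_backbone(model: nn.Module) -> None:
        """Freeze all pre-trained weights; leave only parameters whose
        name marks them as adapters trainable (naming is an assumption)."""
        for name, param in model.named_parameters():
            param.requires_grad = "adapter" in name
    ```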

    Learning to Branch in Combinatorial Optimization with Graph Pointer Networks

    Branch-and-bound is a standard approach to solving combinatorial optimization problems. This paper proposes a graph pointer network model for learning the variable selection policy in branch-and-bound. We extract graph features, global features, and historical features to represent the solver state. The proposed model, which combines a graph neural network with a pointer mechanism, can effectively map the solver state to branching variable decisions. The model is trained to imitate the classic strong branching expert rule via a purpose-designed top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms widely used expert-designed branching rules. Our approach also outperforms state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all test instances. In addition, the model generalizes to unseen instances and scales to larger instances.
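    The imitation objective can be pictured as follows: the expert's strong-branching scores over candidate variables are turned into a distribution over its top-k candidates, and the model's predicted distribution over those same candidates is pulled toward it by a KL divergence. The sketch below is a hedged reconstruction; the paper's exact loss formulation may differ.

    ```python
    import torch
    import torch.nn.functional as F

    def topk_kl_loss(pred_logits: torch.Tensor,
                     expert_scores: torch.Tensor,
                     k: int = 10) -> torch.Tensor:
        """KL divergence between the expert's strong-branching score
        distribution and the model's prediction, restricted to the
        expert's top-k candidates. Shapes: (batch, n_candidates);
        requires k <= n_candidates."""
        topk_idx = expert_scores.topk(k, dim=-1).indices
        target = F.softmax(expert_scores.gather(-1, topk_idx), dim=-1)
        log_pred = F.log_softmax(pred_logits.gather(-1, topk_idx), dim=-1)
        # KL(target || pred), averaged over the batch.
        return F.kl_div(log_pred, target, reduction="batchmean")
    ```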

    Review of Knowledge-Enhanced Pre-trained Language Models

    Knowledge-enhanced pre-trained language models attempt to use the structured knowledge stored in knowledge graphs to strengthen pre-trained language models, so that the models learn not only general semantic knowledge from free text but also the factual entity knowledge behind the text. Enhanced in this way, the models can effectively solve downstream knowledge-driven tasks. Although this is a promising research direction, current work is still at an exploratory stage and lacks a comprehensive summary and systematic organization. This paper aims to fill that gap. On the basis of collecting and organizing a large body of relevant work, it first explains the background from three aspects: the reasons for, the advantages of, and the difficulties in introducing knowledge, and it summarizes the basic concepts involved in knowledge-enhanced pre-trained language models. It then discusses three types of knowledge enhancement methods: using knowledge to expand input features, using knowledge to modify the model architecture, and using knowledge to constrain training tasks. Finally, it compiles the scores of various knowledge-enhanced pre-trained language models on several evaluation tasks, and analyzes their performance, current challenges, and possible future directions.
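    As a minimal illustration of the first enhancement type (using knowledge to expand input features), the sketch below adds knowledge-graph entity embeddings to token embeddings at the positions where entities are mentioned. The class and its interface are assumptions for illustration; actual models of this type (e.g., ERNIE-style fusion) are considerably more elaborate.

    ```python
    import torch
    import torch.nn as nn

    class KnowledgeFusedEmbedding(nn.Module):
        """Token embeddings enriched with knowledge-graph entity
        embeddings at entity-mention positions (id 0 = no entity)."""
        def __init__(self, vocab_size: int, n_entities: int, dim: int):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, dim)
            self.ent = nn.Embedding(n_entities + 1, dim, padding_idx=0)

        def forward(self, token_ids: torch.Tensor,
                    entity_ids: torch.Tensor) -> torch.Tensor:
            # entity_ids is 0 wherever no entity is linked to the token.
            return self.tok(token_ids) + self.ent(entity_ids)
    ```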