
    Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code

    Pre-trained code representation models such as CodeBERT have demonstrated superior performance on a variety of software engineering tasks, yet they are computationally heavy, with cost that grows quadratically with the length of the input sequence. Our empirical analysis of CodeBERT's attention reveals that CodeBERT pays more attention to certain types of tokens and statements, such as keywords and data-relevant statements. Based on these findings, we propose DietCode, which aims at lightweight leverage of large pre-trained models for source code. DietCode simplifies the input program of CodeBERT with three strategies, namely word dropout, frequency filtering, and an attention-based strategy that selects the statements and tokens receiving the most attention weights during pre-training. This yields a substantial reduction in computational cost without hampering model performance. Experimental results on two downstream tasks show that DietCodeBERT provides results comparable to CodeBERT with 40% less computational cost in fine-tuning and testing. Comment: Accepted to be published in ESEC/FSE 202
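
    As a rough illustration of the attention-based strategy, the sketch below prunes an input program by keeping the tokens that receive the most attention from the [CLS] position of a CodeBERT encoder. This is an assumption-laden sketch, not DietCode's algorithm: it uses the public microsoft/codebert-base checkpoint via Hugging Face transformers, averages attention over all layers and heads, and the 60% keep ratio is purely illustrative.

    # Minimal sketch of attention-based token pruning (illustrative, not DietCode's exact method).
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
    model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

    def prune_by_attention(code: str, keep_ratio: float = 0.6) -> str:
        """Keep the tokens that receive the most attention from the [CLS] position."""
        inputs = tokenizer(code, return_tensors="pt", truncation=True)
        with torch.no_grad():
            outputs = model(**inputs)
        # outputs.attentions: one (batch, heads, seq, seq) tensor per layer.
        # Average over layers and heads, then take the attention paid by [CLS] (row 0).
        attn = torch.stack(outputs.attentions).mean(dim=(0, 2))[0, 0]   # shape: (seq_len,)
        ids = inputs["input_ids"][0]
        k = max(1, int(keep_ratio * len(ids)))
        keep = torch.topk(attn, k).indices.sort().values                # preserve original token order
        return tokenizer.decode(ids[keep], skip_special_tokens=True)

    # Illustrative call on a toy snippet.
    print(prune_by_attention("def add(a, b):\n    return a + b"))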

    Endoplasmic Reticulum Stress-Mediated Apoptosis Involved in Indirect Recognition Pathway Blockade Induces Long-Term Heart Allograft Survival

    Implementation of dendritic cell- (DC-) based therapies in organ transplantation can reduce dependency on nonspecific immunosuppression. Despite extensive research, the mechanisms by which such engineered DCs induce transplant tolerance remain incompletely understood. Here, we applied RNA interference to inhibit CD80 and CD86 expression in host bone marrow-derived DCs. This approach specifically and effectively knocked down CD80 and CD86 expression. T cells primed by these DCs inhibited allogeneic responses. Administration of recipient DCs loaded with alloantigen after CD80 and CD86 blockade prolonged cardiac allograft survival. We also found a higher percentage of apoptotic T cells in lymph tissues and grafts than in the control group. In addition, these T cells expressed higher levels of GRP78 than controls, indicating activation of the unfolded protein response. Upregulation of CHOP expression in these cells suggested that the endoplasmic reticulum stress (ERS) response had switched to a proapoptotic response. Our results indicate that ERS-induced apoptosis may be involved in allogeneic T-cell apoptosis, and that the ERS-mediated apoptosis pathway may be a novel target in the clinical prevention and treatment of allograft rejection.

    InfeRE: Step-by-Step Regex Generation via Chain of Inference

    Automatically generating regular expressions (abbrev. regexes) from natural language descriptions (NL2RE) has been an emerging research area. Prior studies treat a regex as a linear sequence of tokens and generate the final expression autoregressively in a single pass, without taking into account the step-by-step internal text-matching processes behind the final result. This significantly hinders the efficacy and interpretability of regex generation by neural language models. In this paper, we propose a new paradigm called InfeRE, which decomposes the generation of regexes into chains of step-by-step inference. To enhance robustness, we introduce a self-consistency decoding mechanism that ensembles multiple outputs sampled from different models. We evaluate InfeRE on two publicly available datasets, NL-RX-Turk and KB13, and compare the results with state-of-the-art approaches and the popular tree-based generation approach TRANX. Experimental results show that InfeRE substantially outperforms previous baselines, yielding 16.3% and 14.7% improvements in DFA@5 accuracy on the two datasets, respectively. In particular, InfeRE outperforms the popular tree-based generation approach by 18.1% and 11.3% in DFA@5 accuracy on the two datasets, respectively. Comment: This paper has been accepted by ASE'2
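
    The sketch below illustrates the general idea of self-consistency decoding under simplifying assumptions: it samples several candidate regexes from a single placeholder seq2seq checkpoint (t5-small) rather than from the different models InfeRE ensembles, and it approximates semantic equivalence by behaviour on a small probe set instead of the DFA equivalence underlying the DFA@5 metric.

    # Minimal sketch of self-consistency decoding over sampled regex candidates (not the authors' implementation).
    import re
    from collections import Counter
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("t5-small")        # placeholder checkpoint
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")    # placeholder checkpoint

    PROBES = ["abc", "a1b2", "123", "", "hello world", "xxxxxxxxxx"]  # illustrative probe strings

    def behaviour(pattern: str):
        """Fingerprint a regex by which probe strings it fully matches."""
        try:
            return tuple(bool(re.fullmatch(pattern, s)) for s in PROBES)
        except re.error:
            return ("invalid",)

    def self_consistent_regex(description: str, n_samples: int = 10) -> str:
        inputs = tokenizer(description, return_tensors="pt")
        outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                                 num_return_sequences=n_samples, max_new_tokens=64)
        candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
        # Vote over behavioural equivalence classes and return a representative
        # of the most frequent class.
        groups = Counter(behaviour(c) for c in candidates)
        best_class, _ = groups.most_common(1)[0]
        return next(c for c in candidates if behaviour(c) == best_class)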

    DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances

    Recent advances in pre-trained language models have significantly improved neural response generation. However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. Such token-level encoding hinders the exploration of discourse-level coherence among utterances. This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. DialogBERT employs a hierarchical Transformer architecture. To efficiently capture the discourse-level coherence among utterances, we propose two training objectives, masked utterance regression and distributed utterance order ranking, in analogy to the original BERT training. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms the baselines, such as BART and DialoGPT, in terms of quantitative evaluation. Human evaluation suggests that DialogBERT generates more coherent, informative, and human-like responses than the baselines by significant margins. Comment: Published as a conference paper at AAAI 202
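
    The sketch below shows the shape of a hierarchical Transformer encoder in the spirit described above: a token-level encoder produces one vector per utterance, and a context encoder then attends over those utterance vectors. It is illustrative only; all dimensions are placeholders, and the actual DialogBERT builds on pre-trained BERT and adds the masked utterance regression and utterance order ranking objectives.

    # Minimal sketch of a hierarchical dialogue encoder (illustrative, not DialogBERT itself).
    import torch
    import torch.nn as nn

    class HierarchicalDialogueEncoder(nn.Module):
        def __init__(self, vocab_size=30522, d_model=256, nhead=4, num_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            utter_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            ctx_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.utterance_encoder = nn.TransformerEncoder(utter_layer, num_layers)
            self.context_encoder = nn.TransformerEncoder(ctx_layer, num_layers)

        def forward(self, dialog_ids: torch.Tensor) -> torch.Tensor:
            # dialog_ids: (batch, n_utterances, n_tokens)
            b, u, t = dialog_ids.shape
            tok = self.embed(dialog_ids.view(b * u, t))          # token embeddings per utterance
            utt = self.utterance_encoder(tok).mean(dim=1)        # one vector per utterance (mean pooling)
            ctx = self.context_encoder(utt.view(b, u, -1))       # discourse-level encoding over utterances
            return ctx                                           # (batch, n_utterances, d_model)

    # Example: a batch of 2 dialogues, each with 3 utterances of 10 tokens.
    ids = torch.randint(0, 30522, (2, 3, 10))
    print(HierarchicalDialogueEncoder()(ids).shape)              # torch.Size([2, 3, 256])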