    Methane as an effective hydrogen source for single-layer graphene synthesis on Cu foil by plasma enhanced chemical vapor deposition

    Full text link
    Single-layer graphene is synthesized on Cu foil without an H2 flow by plasma-enhanced chemical vapor deposition (PECVD). In lieu of an explicit H2 flow, hydrogen species are produced during the plasma-assisted decomposition of methane into its active species (CHx, x < 4). Notably, the early stage of growth depends strongly on the plasma power: because the hydrogen partial pressure is effectively tuned simply by controlling the plasma power, the resulting grain size (nucleation density) reaches a maximum (minimum) at 50 W and saturates when the plasma power exceeds 120 W. Raman spectroscopy and transport measurements show that decomposed methane alone provides a sufficient amount of hydrogen species for high-quality graphene synthesis by PECVD. Comment: 22 pages, 6 figures

    Who Speaks Like a Style of Vitamin: Towards Syntax-Aware Dialogue Summarization Using Multi-Task Learning

    No full text
    Abstractive dialogue summarization is a challenging task for several reasons. First, most of the important pieces of information in a conversation are scattered across utterances through multi-party interactions with different textual styles. Second, dialogues often have informal structures in which different individuals express personal perspectives, unlike text summarization tasks, which usually target formal documents such as news articles. To address these issues, we focused on the association between utterances from individual speakers and their unique syntactic structures. Speakers have unique textual styles that can carry linguistic information, much like a voiceprint. Therefore, we constructed a syntax-aware model by leveraging linguistic information (i.e., POS tagging), which alleviates the above issues by inherently distinguishing the sentences uttered by individual speakers. We employed multi-task learning of both syntax-aware information and dialogue summarization. To the best of our knowledge, our approach is the first to apply multi-task learning to the dialogue summarization task. Experiments on the SAMSum corpus (a large-scale dialogue summarization corpus) demonstrated that our method improved upon the vanilla model. We further analyze the costs and benefits of our approach relative to baseline models. Comment: This work has been accepted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
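    A minimal sketch of the multi-task idea described above, assuming a PyTorch/HuggingFace setup: a shared encoder-decoder summarizer with an auxiliary token-level POS-tagging head on the encoder states, trained on a weighted sum of the two losses. The class name, the choice of BART as the backbone, the tag-set size, and the weighting factor aux_weight are illustrative assumptions, not details taken from the paper.

        # Hypothetical multi-task setup: summarization loss + auxiliary POS-tagging loss.
        import torch.nn as nn
        from transformers import BartForConditionalGeneration

        class SyntaxAwareSummarizer(nn.Module):
            def __init__(self, model_name="facebook/bart-base", num_pos_tags=18, aux_weight=0.5):
                super().__init__()
                self.seq2seq = BartForConditionalGeneration.from_pretrained(model_name)
                hidden = self.seq2seq.config.d_model
                # Token-level classifier over encoder states for the auxiliary POS task (assumed design).
                self.pos_head = nn.Linear(hidden, num_pos_tags)
                self.aux_weight = aux_weight

            def forward(self, input_ids, attention_mask, labels, pos_labels):
                # Summarization branch: standard cross-entropy over decoder outputs.
                out = self.seq2seq(input_ids=input_ids,
                                   attention_mask=attention_mask,
                                   labels=labels)
                summ_loss = out.loss
                # Auxiliary branch: predict a POS tag for every encoder token.
                pos_logits = self.pos_head(out.encoder_last_hidden_state)
                pos_loss = nn.functional.cross_entropy(
                    pos_logits.view(-1, pos_logits.size(-1)),
                    pos_labels.view(-1),
                    ignore_index=-100)  # ignore padding positions
                # Multi-task objective: weighted sum of the two losses.
                return summ_loss + self.aux_weight * pos_loss

    In such a setup, pos_labels would presumably come from an off-the-shelf POS tagger run over the dialogue and aligned to the subword tokens, with padding positions masked out via the ignore index.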

    Exploring the Data Efficiency of Cross-Lingual Post-Training in Pretrained Language Models

    No full text
    Language model pretraining is an effective method for improving the performance of downstream natural language processing tasks. Even though language modeling is unsupervised, and collecting data for it is therefore relatively inexpensive, it remains a challenging process for languages with limited resources. This results in a great technological disparity between high- and low-resource languages across numerous downstream natural language processing tasks. In this paper, we aim to make this technology more accessible by enabling data-efficient training of pretrained language models. This is achieved by formulating language modeling of low-resource languages as a domain adaptation task using transformer-based language models pretrained on corpora of high-resource languages. Our novel cross-lingual post-training approach selectively reuses parameters of the language model trained on a high-resource language and post-trains them while learning language-specific parameters in the low-resource language. We also propose implicit translation layers that can learn linguistic differences between languages at the sequence level. To evaluate our method, we post-train a RoBERTa model pretrained on English and conduct a case study for the Korean language. Quantitative results from intrinsic and extrinsic evaluations show that our method outperforms several massively multilingual and monolingual pretrained language models in most settings and improves data efficiency by a factor of up to 32 compared to monolingual training.
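    A minimal sketch of the selective-reuse idea, under stated assumptions: the English-pretrained RoBERTa body is frozen, the embeddings are resized and relearned for the target-language vocabulary, and a small trainable projection stands in for an implicit translation layer applied to the embedding output. The vocabulary size, the hook-based wiring, and the single-layer form of the translation layer are illustrative guesses, not the paper's actual configuration.

        # Hypothetical cross-lingual post-training setup built on HuggingFace RoBERTa.
        import torch.nn as nn
        from transformers import RobertaForMaskedLM

        model = RobertaForMaskedLM.from_pretrained("roberta-base")

        # 1) Freeze the Transformer body reused from the high-resource (English) model.
        for param in model.roberta.encoder.parameters():
            param.requires_grad = False

        # 2) Resize the embedding table for the low-resource (Korean) tokenizer and
        #    train these language-specific parameters (reinitialization details omitted).
        korean_vocab_size = 32000  # assumed tokenizer size
        model.resize_token_embeddings(korean_vocab_size)
        for param in model.roberta.embeddings.parameters():
            param.requires_grad = True

        # 3) A stand-in "implicit translation" layer: a trainable projection applied to
        #    the embedding output before it enters the frozen encoder.
        hidden = model.config.hidden_size
        translation_in = nn.Linear(hidden, hidden)

        def _apply_translation(module, inputs, output):
            # Forward hook: replace the embedding output with its projection.
            return translation_in(output)

        model.roberta.embeddings.register_forward_hook(_apply_translation)

    In practice the projection would be registered as a submodule of the model so that it is optimized and moved to the right device together with the embeddings; the hook is only a compact way to show where such a layer would sit.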

    High Wave Predictive Numerical Simulation for the Wave Alarm System

    No full text

    A Novel T Cell-Engaging Bispecific Antibody for Treating Mesothelin-Positive Solid Tumors

    No full text
    As mesothelin is overexpressed in various types of cancer, it is an attractive target for therapeutic antibodies. T-cell bispecific antibodies bind to target cells and engage T cells via binding to CD3, resulting in target cell killing by T-cell activation. However, the affinity of the CD3-binding arm may influence CD3-mediated plasma clearance or antibody trapping in T-cell-containing tissues. This may then affect the biodistribution of bispecific antibodies. In this study, we used scFab and knob-into-hole technologies to construct novel IgG-based 1 + 1 MG1122-A and 2 + 1 MG1122-B bispecific antibodies against mesothelin and CD3ε. MG1122-B was designed to be bivalent to mesothelin and monovalent to CD3ε, using a 2 + 1 head-to-tail format. Activities of the two antibodies were evaluated in mesothelin-positive tumor cells in vitro and xenograft models in vivo. Although both antibodies exhibited target cell killing efficacy and produced regression of xenograft tumors with CD8+ T-cell infiltration, the antitumor efficacy of MG1122-B was significantly higher. MG1122-B may improve tumor targeting because of its bivalency for the tumor antigen. It may also reduce systemic toxicity by limiting the activation of circulating T cells. Thus, MG1122-B may be useful for treating mesothelin-positive solid tumors.