
    Revisiting Pre-Trained Models for Chinese Natural Language Processing

    Bidirectional Encoder Representations from Transformers (BERT) has brought remarkable improvements across various NLP tasks, and successive variants have been proposed to further improve the performance of pre-trained language models. In this paper, we revisit Chinese pre-trained language models to examine their effectiveness in a non-English language and release a series of Chinese pre-trained language models to the community. We also propose a simple but effective model called MacBERT, which improves upon RoBERTa in several ways, especially its masking strategy, which adopts MLM as correction (Mac). We carried out extensive experiments on eight Chinese NLP tasks, covering both the existing pre-trained language models and the proposed MacBERT. Experimental results show that MacBERT achieves state-of-the-art performance on many NLP tasks, and our ablations yield several findings that may help future research. Resources available: https://github.com/ymcui/MacBERT
    Comment: 12 pages, to appear at Findings of EMNLP 2020
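    The "MLM as correction" idea replaces selected tokens with plausible similar words rather than an artificial [MASK] symbol, so pre-training inputs resemble naturally corrupted text that the model must correct. Below is a minimal sketch of such a corruption pass; the 15% masking rate, the 80/10/10 split, and the `similar_word` lookup (e.g., a synonym or word2vec neighbor) are illustrative assumptions, not the exact MacBERT recipe.

```python
import random

def mac_corrupt(tokens, similar_word, mask_rate=0.15):
    """Sketch of an MLM-as-correction (Mac) style corruption pass."""
    corrupted, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            r = random.random()
            if r < 0.8:                       # replace with a similar word
                corrupted.append(similar_word(tok))
            elif r < 0.9:                     # replace with a random in-sequence token
                corrupted.append(random.choice(tokens))
            else:                             # keep the original token
                corrupted.append(tok)
            targets.append(tok)               # the model must recover the original
        else:
            corrupted.append(tok)
            targets.append(None)              # position not scored by the loss
    return corrupted, targets
```

    Because no [MASK] token ever appears in the input, this style of masking avoids the mismatch between pre-training and fine-tuning inputs that standard MLM suffers from.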

    A Span-Extraction Dataset for Chinese Machine Reading Comprehension

    Machine Reading Comprehension (MRC) has recently become enormously popular and attracted a great deal of attention. However, the existing reading comprehension datasets are mostly in English. In this paper, we introduce a span-extraction dataset for Chinese machine reading comprehension to add linguistic diversity to this area. The dataset is composed of nearly 20,000 real questions annotated on Wikipedia paragraphs by human experts. We also annotated a challenge set containing questions that require comprehensive understanding and multi-sentence inference throughout the context. We present several baseline systems as well as anonymous submissions to demonstrate the difficulty of the dataset. With the release of the dataset, we hosted the Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2018). We hope the release of the dataset will further accelerate research on Chinese machine reading comprehension. Resources are available: https://github.com/ymcui/cmrc2018
    Comment: 6 pages, accepted as a conference paper at EMNLP-IJCNLP 2019 (short paper)
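    Span extraction means every gold answer is a literal substring of the supporting paragraph. The sketch below iterates over a SQuAD-style file and checks that property; the field names mirror the SQuAD schema, which such datasets are commonly distributed in, and should be verified against the released files.

```python
import json

def load_examples(path):
    """Sketch: iterate over a SQuAD-style span-extraction dataset."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)["data"]
    for article in data:
        for para in article["paragraphs"]:
            context = para["context"]
            for qa in para["qas"]:
                ans = qa["answers"][0]
                start = ans["answer_start"]
                end = start + len(ans["text"])
                # the gold answer is a literal span of the context
                assert context[start:end] == ans["text"]
                yield qa["question"], context, (start, end)
```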

    Hysteresis of Electronic Transport in Graphene Transistors

    Graphene field-effect transistors commonly comprise graphene flakes lying on SiO2 surfaces. Their gate-voltage-dependent conductance shows hysteresis that depends on the gate sweeping rate and range. It is shown here that the transistors exhibit two different kinds of hysteresis in their electrical characteristics. Charge transfer causes a positive shift in the gate voltage of minimum conductance, while capacitive gating can cause a negative shift of the conductance with respect to gate voltage. The positive hysteresis decays as the number of layers in the graphene flake increases. Self-heating in a helium atmosphere effectively removes adsorbates and reduces the positive hysteresis. We also observed negative hysteresis in graphene devices at low temperature, and found that an ice layer on or under graphene has a much stronger dipole moment than a water layer does. Mobile ions in an electrolyte gate and a polarity switch in a ferroelectric gate can also cause negative hysteresis in graphene transistors. These findings improve our understanding of the electrical response of graphene to its surroundings. Graphene's unique sensitivity to its environment and the related phenomena deserve further study for nonvolatile memory, electrostatic detection, and chemically driven applications.
    Comment: 13 pages, 6 figures

    Conversational Word Embedding for Retrieval-Based Dialog System

    Human conversations contain many types of information, e.g., knowledge, common sense, and language habits. In this paper, we propose a conversational word embedding method named PR-Embedding, which utilizes conversation pairs ⟨post, reply⟩ to learn word embeddings. Unlike previous work, PR-Embedding uses vectors from two different semantic spaces to represent the words of the post and of the reply. To capture the information shared across the pair, we first introduce the word alignment model from statistical machine translation to generate a cross-sentence window, and then train the embeddings at both the word and sentence levels. We evaluate the method on single-turn and multi-turn response selection tasks for retrieval-based dialog systems. The experimental results show that PR-Embedding can improve the quality of the selected responses. Source code is available at https://github.com/wtma/PR-Embedding
    Comment: To appear at ACL 2020
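    Keeping post and reply words in two separate semantic spaces means a candidate reply is scored by cross-space word matches rather than by similarity within a single embedding table. A minimal matching sketch under that assumption follows; the two matrices `E_post` and `E_reply` and the max-then-mean aggregation are illustrative choices, not the paper's exact scorer.

```python
import numpy as np

def response_score(post_ids, reply_ids, E_post, E_reply):
    """Sketch: score a candidate reply using two semantic spaces."""
    P = E_post[post_ids]                  # (|post|, d) post-space vectors
    R = E_reply[reply_ids]                # (|reply|, d) reply-space vectors
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    sim = P @ R.T                         # cross-sentence similarity matrix
    return float(sim.max(axis=1).mean())  # best match per post word, averaged
```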