A Study of The Impact of Financial Development on the Country’s Monetization
Based on dynamic panel data from 45 countries, this article empirically analyzes the determinants of the M2/GDP ratio. It finds that indirect financing, dominated by the banking system, and direct financing, dominated by financial markets, jointly contribute to the rise of a country's M2/GDP ratio, while improvements in the efficiency of the banking industry and the securities market help reduce it. Finally, it offers suggestions for upgrading China's financial market and structure by promoting financial efficiency, innovation, and reform.
Revisiting Pre-Trained Models for Chinese Natural Language Processing
Bidirectional Encoder Representations from Transformers (BERT) has shown
marvelous improvements across various NLP tasks, and consecutive variants have
been proposed to further improve the performance of the pre-trained language
models. In this paper, we revisit Chinese pre-trained language
models to examine their effectiveness in a non-English language and release the
Chinese pre-trained language model series to the community. We also propose a
simple but effective model called MacBERT, which improves upon RoBERTa in
several ways, especially its masking strategy, which adopts MLM as correction
(Mac). We carry out extensive experiments on eight Chinese NLP tasks to
revisit the existing pre-trained language models as well as the proposed
MacBERT. Experimental results show that MacBERT achieves state-of-the-art
performance on many NLP tasks, and we also present ablation studies with several
findings that may help future research. Resources are available at
https://github.com/ymcui/MacBERT
Comment: 12 pages, to appear at Findings of EMNLP 2020
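The "MLM as correction" idea can be illustrated with a toy sketch: instead of replacing selected tokens with an artificial [MASK] symbol, the corrupted input uses similar words that the model must correct back. The synonym table below is hypothetical; MacBERT itself derives similar words from word-embedding similarity with whole-word and N-gram masking.

```python
import random

# Hypothetical synonym table standing in for a similarity-based lookup.
SYNONYMS = {"quick": "fast", "happy": "glad", "big": "large"}

def mac_mask(tokens, mask_prob=0.15, rng=None):
    """Corrupt tokens by swapping in similar words instead of [MASK].

    Returns the corrupted token list and the indices the model
    is trained to correct.
    """
    rng = rng or random.Random(0)
    corrupted, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob and tok in SYNONYMS:
            corrupted.append(SYNONYMS[tok])  # similar word, not [MASK]
            targets.append(i)
        else:
            corrupted.append(tok)
    return corrupted, targets
```

Because the corrupted input contains only real words, there is no [MASK] token at pre-training time that would be absent at fine-tuning time, which is the mismatch this strategy targets.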
A Span-Extraction Dataset for Chinese Machine Reading Comprehension
Machine Reading Comprehension (MRC) has become enormously popular and has
attracted much attention recently. However, the existing reading
comprehension datasets are mostly in English. In this paper, we introduce a
span-extraction dataset for Chinese machine reading comprehension to add
language diversity to this area. The dataset is composed of nearly 20,000 real
questions annotated on Wikipedia paragraphs by human experts. We also annotated
a challenge set containing questions that require comprehensive
understanding and multi-sentence inference throughout the context. We present
several baseline systems as well as anonymous submissions to demonstrate the
difficulty of this dataset. With the release of the dataset, we hosted the
Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC
2018). We hope the release of the dataset will further accelerate Chinese
machine reading comprehension research. Resources are available at
https://github.com/ymcui/cmrc2018
Comment: 6 pages, accepted as a conference paper at EMNLP-IJCNLP 2019 (short paper)
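Span extraction means the answer must be a contiguous substring of the passage, so an example reduces to a context, a question, and the answer's character offsets. A minimal sketch of that format and the span lookup (the sample text below is invented, not taken from the dataset):

```python
# Invented example in the span-extraction style: the answer is a
# literal substring of the context.
example = {
    "context": "哈利·波特系列小说由J.K.罗琳创作。",
    "question": "哈利·波特系列小说的作者是谁？",
    "answer": "J.K.罗琳",
}

def find_answer_span(context, answer):
    """Return (start, end) character offsets of the answer in the context."""
    start = context.find(answer)
    if start == -1:
        raise ValueError("answer is not a span of the context")
    return start, start + len(answer)
```

Baselines for this format typically predict the start and end positions directly, which is why annotation only needs the answer string and its location rather than a free-form response.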
Is Graph Structure Necessary for Multi-hop Question Answering?
Recently, modeling texts as graph structures and introducing graph
neural networks to process them has become a trend in many NLP research areas.
In this paper, we investigate whether the graph structure is necessary for
multi-hop question answering. Our analysis is centered on HotpotQA. We
construct a strong baseline model to establish that, with the proper use of
pre-trained models, graph structure may not be necessary for multi-hop question
answering. We point out that both graph structure and adjacency matrix are
task-related prior knowledge, and graph-attention can be considered as a
special case of self-attention. Experiments and visualized analysis demonstrate
that graph-attention or the entire graph structure can be replaced by
self-attention or Transformers.
Comment: 6 pages, to appear at EMNLP 2020
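The claim that graph-attention is a special case of self-attention can be made concrete: attention restricted to graph neighbors is ordinary attention whose non-edges are masked out before the softmax. A pure-Python sketch with made-up scores (not the paper's model):

```python
import math

def masked_softmax(scores, mask):
    """Softmax over scores[j] where mask[j] is True; others get weight 0."""
    exps = [math.exp(s) if m else 0.0 for s, m in zip(scores, mask)]
    total = sum(exps)
    return [e / total for e in exps]

# Full self-attention: one node attends to all three nodes.
scores = [1.0, 2.0, 3.0]
full = masked_softmax(scores, [True, True, True])

# Graph attention: the adjacency row masks out non-neighbors, so the
# same softmax distributes weight only over edges (node 2 is not a
# neighbor here).
graph = masked_softmax(scores, [True, True, False])
```

The only difference between the two calls is the mask supplied by the adjacency matrix, which is why the adjacency structure can be viewed as task-related prior knowledge injected into self-attention rather than a distinct mechanism.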
Conversational Word Embedding for Retrieval-Based Dialog System
Human conversations contain many types of information, e.g., knowledge,
common sense, and language habits. In this paper, we propose a conversational
word embedding method named PR-Embedding, which utilizes conversation pairs
to learn word embeddings. Unlike
previous works, PR-Embedding uses vectors from two different semantic
spaces to represent the words in the post and the reply. To capture the
information shared by the pair, we first introduce the word alignment model
from statistical machine translation to generate a cross-sentence window, and
then train the embedding at the word and sentence levels. We evaluate the
method on single-turn and multi-turn response selection tasks for
retrieval-based dialog systems. The experimental results show that PR-Embedding
can improve the quality of the selected response. The PR-Embedding source code
is available at https://github.com/wtma/PR-Embedding
Comment: To appear at ACL 2020
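The cross-sentence window idea can be sketched as follows: given word alignments between a post and its reply, each aligned post word's training context is extended with the reply words near its aligned position. The alignments below are hand-specified for illustration; PR-Embedding derives them with a statistical alignment model.

```python
def cross_sentence_window(post, reply, alignments, window=1):
    """Collect, for each aligned post word, the reply words within
    `window` positions of its aligned reply position.

    alignments: list of (post_index, reply_index) pairs.
    """
    contexts = {}
    for pi, ri in alignments:
        lo, hi = max(0, ri - window), min(len(reply), ri + window + 1)
        contexts.setdefault(post[pi], []).extend(reply[lo:hi])
    return contexts

post = ["do", "you", "like", "coffee"]
reply = ["yes", "i", "love", "coffee"]
# Hypothetical alignment: like <-> love, coffee <-> coffee.
windows = cross_sentence_window(post, reply, [(2, 2), (3, 3)])
```

These cross-sentence windows supply post-reply co-occurrence pairs that an ordinary within-sentence window would never see, which is what lets the two semantic spaces be trained jointly.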