A Survey on Table-and-Text HybridQA: Concepts, Methods, Challenges and Future Directions
Table-and-text hybrid question answering (HybridQA) is a widely used and
challenging NLP task, commonly applied in the financial and scientific domains.
Early research focused on migrating methods from other QA tasks to HybridQA,
but with further research, more and more HybridQA-specific methods have been
proposed. Despite the rapid development of HybridQA, a systematic survey
summarizing the main techniques and guiding further research is still missing.
We therefore present this work to summarize the current HybridQA benchmarks and
methods, and then analyze the challenges and future directions of this task.
The contributions of this paper are threefold: (1) to the best of our
knowledge, the first survey covering benchmarks, methods, and challenges for
HybridQA; (2) a systematic investigation with a reasoned comparison of existing
systems, articulating their advantages and shortcomings; (3) a detailed
analysis of challenges along four important dimensions to shed light on future
directions.
Comment: 7 pages, 4 figures
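To make the task concrete, here is a minimal, hypothetical sketch of what a single table-and-text HybridQA instance combines; every name and field label below is illustrative, not drawn from any particular benchmark:

```python
# Illustrative HybridQA instance: a question answered by jointly reading a
# table and free-text passages linked to its cells. All contents are made up.
example = {
    "question": "Where is the company with the highest 2020 revenue headquartered?",
    "table": {
        "header": ["Company", "Founded", "2020 Revenue"],
        "rows": [
            ["Acme Corp", "1998", "$1.2B"],
            ["Globex", "1996", "$0.9B"],
        ],
    },
    # Passages linked to table cells, carrying evidence the table lacks.
    "passages": {
        "Acme Corp": "Acme Corp, headquartered in Austin, Texas, is a ...",
        "Globex": "Globex, established in 1996, operates in logistics ...",
    },
    "answer": "Austin, Texas",
}
```

Answering requires first reasoning over the table (finding the highest-revenue row) and then hopping into the passage linked to that row; this multi-source reasoning is what distinguishes HybridQA from table-only or text-only QA.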
Revisiting Pre-Trained Models for Chinese Natural Language Processing
Bidirectional Encoder Representations from Transformers (BERT) has shown
marvelous improvements across various NLP tasks, and successive variants have
been proposed to further improve the performance of pre-trained language
models. In this paper, we revisit Chinese pre-trained language models to
examine their effectiveness in a non-English language and release the Chinese
pre-trained language model series to the community. We also propose a simple
but effective model called MacBERT, which improves upon RoBERTa in several
ways, especially the masking strategy that adopts MLM as correction (Mac). We
carried out extensive experiments on eight Chinese NLP tasks to revisit the
existing pre-trained language models as well as the proposed MacBERT.
Experimental results show that MacBERT can achieve state-of-the-art
performance on many NLP tasks, and we also ablate details with several
findings that may help future research. Resources are available at:
https://github.com/ymcui/MacBERT
Comment: 12 pages, to appear at Findings of EMNLP 2020
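The abstract only names the Mac strategy; as a rough, simplified sketch of the idea: masked positions are replaced with a *similar* word rather than the artificial [MASK] token, so the model learns to correct plausible-looking errors. The `similar_word` lookup below is an assumed helper (MacBERT obtains similar words from word-embedding similarity), and single-token masking is used here in place of MacBERT's whole-word and n-gram masking:

```python
import random

def mac_mask(tokens, similar_word, mask_prob=0.15):
    """Simplified MLM-as-correction (Mac) masking sketch.

    `similar_word` is an assumed callable mapping a token to a similar
    word. Follows the 80/10/10 split: similar word / random word / keep.
    """
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            r = random.random()
            if r < 0.8:                       # replace with a similar word
                inputs.append(similar_word(tok))
            elif r < 0.9:                     # replace with a random word
                inputs.append(random.choice(tokens))  # sampled from the
                                                      # sequence, a simplification
            else:                             # keep the original token
                inputs.append(tok)
            labels.append(tok)                # target: recover the original
        else:
            inputs.append(tok)
            labels.append(None)               # excluded from the MLM loss
    return inputs, labels
```

Because [MASK] never appears in the pre-training inputs under this scheme, the mismatch between pre-training and fine-tuning that plain MLM introduces is reduced, which is the stated motivation for the Mac strategy.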