Automatic Translating Between Ancient Chinese and Contemporary Chinese with Limited Aligned Corpora
The Chinese language has evolved substantially over its long history, and
native speakers now have trouble reading sentences written in
ancient Chinese. In this paper, we propose to build an end-to-end neural model
to automatically translate between ancient and contemporary Chinese. However,
the existing ancient-contemporary Chinese parallel corpora are not aligned at
the sentence level and sentence-aligned corpora are limited, which makes it
difficult to train the model. To build the sentence level parallel training
data for the model, we propose an unsupervised algorithm that constructs
sentence-aligned ancient-contemporary pairs by exploiting the fact that
aligned sentence pairs share many of their tokens. Based on the aligned corpus, we
propose an end-to-end neural model with a copying mechanism and local attention
to translate between ancient and contemporary Chinese. Experiments show that
the proposed unsupervised algorithm achieves a 99.4% F1 score for sentence
alignment, and the translation model achieves 26.95 BLEU from ancient to
contemporary and 36.34 BLEU from contemporary to ancient.
Comment: Accepted by NLPCC 201
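The alignment heuristic is easy to picture in code. Below is a minimal Python sketch of overlap-based sentence alignment: it scores ancient/contemporary candidates by character-level Jaccard similarity and greedily keeps the best-scoring pairs above a threshold. The greedy search, the 0.2 threshold, and the toy sentences are illustrative assumptions, not the paper's exact algorithm.

```python
def char_overlap(a: str, b: str) -> float:
    """Jaccard similarity over character sets: ancient and contemporary
    renderings of the same sentence tend to share many characters."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def align(ancient: list[str], contemporary: list[str], threshold: float = 0.2):
    """Greedily pair each ancient sentence with its best-scoring
    contemporary sentence, keeping pairs above the threshold.
    (Illustrative only; the paper's unsupervised algorithm differs.)"""
    pairs, used = [], set()
    for src in ancient:
        best_j, best_score = -1, 0.0
        for j, tgt in enumerate(contemporary):
            if j in used:
                continue
            score = char_overlap(src, tgt)
            if score > best_score:
                best_j, best_score = j, score
        if best_j >= 0 and best_score >= threshold:
            used.add(best_j)
            pairs.append((src, contemporary[best_j], best_score))
    return pairs

# Toy paragraph-level pair, unaligned at the sentence level.
ancient = ["学而时习之，不亦说乎", "有朋自远方来，不亦乐乎"]
contemporary = ["有朋友从远方来，不也快乐吗", "学了又按时温习，不也愉快吗"]
for a, c, s in align(ancient, contemporary):
    print(f"{s:.2f}  {a}  <->  {c}")
```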
GujiBERT and GujiGPT: Construction of Intelligent Information Processing Foundation Language Models for Ancient Texts
In the context of the rapid development of large language models, we have
meticulously trained and introduced the GujiBERT and GujiGPT language models,
which are foundational models specifically designed for intelligent information
processing of ancient texts. These models have been trained on an extensive
dataset that encompasses both simplified and traditional Chinese characters,
allowing them to effectively handle various natural language processing tasks
related to ancient books, including but not limited to automatic sentence
segmentation, punctuation, word segmentation, part-of-speech tagging, entity
recognition, and automatic translation. Notably, these models have exhibited
exceptional performance across a range of validation tasks using publicly
available datasets. Our research findings highlight the efficacy of employing
self-supervised methods to further train the models using classical text
corpora, thus enhancing their capability to tackle downstream tasks. Moreover,
it is worth emphasizing that the choice of character form (simplified or
traditional), the scale of the corpus, and
the initial model selection all exert significant influence over the ultimate
experimental outcomes. To cater to the diverse text processing preferences of
researchers in digital humanities and linguistics, we have developed three
distinct categories comprising a total of nine model variations. We believe
that by sharing these foundational language models specialized in the domain of
ancient texts, we can facilitate the intelligent processing and scholarly
exploration of ancient literary works and, consequently, contribute to the
global dissemination of China's rich and esteemed traditional culture in this
new era.
Comment: 22 pages, 0 figures
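To make the downstream usage concrete, here is a hedged Python sketch of one task the abstract lists, automatic sentence segmentation, cast as per-character token classification on top of an encoder checkpoint loaded with Hugging Face transformers. The checkpoint path is a placeholder for the released GujiBERT weights, and the two-label boundary scheme is an assumed task formulation (the head would still need fine-tuning on labeled data), not a recipe from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Placeholder path -- substitute the released GujiBERT checkpoint.
MODEL = "path/to/gujibert-checkpoint"

# Assumed formulation: label 1 = "a sentence boundary follows this
# character", label 0 = otherwise.
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=2)

def segment(text: str) -> list[str]:
    """Split an unpunctuated classical passage into sentences."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits          # (1, seq_len, 2)
    labels = logits.argmax(-1)[0].tolist()
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    sentences, current = [], ""
    for tok, lab in zip(tokens, labels):
        if tok in tokenizer.all_special_tokens:
            continue
        current += tok
        if lab == 1:                          # predicted boundary
            sentences.append(current)
            current = ""
    if current:
        sentences.append(current)
    return sentences

print(segment("學而時習之不亦說乎有朋自遠方來不亦樂乎"))
```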
Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling
Modeling discourse -- the linguistic phenomena that go beyond individual
sentences -- is a fundamental yet challenging aspect of natural language
processing (NLP). However, existing evaluation benchmarks primarily focus on
intra-sentence properties and overlook critical discourse
phenomena that cross sentences. To bridge the gap, we propose Disco-Bench, a
benchmark that can evaluate inter-sentence discourse properties across a
diverse set of NLP tasks, covering understanding, translation, and generation.
Disco-Bench consists of 9 document-level testsets in the literature domain,
which contain rich discourse phenomena (e.g. cohesion and coherence) in Chinese
and/or English. For linguistic analysis, we also design a diagnostic test suite
that can examine whether the target models learn discourse knowledge. In
total, we evaluate 20 general-domain, in-domain, and commercial models based on
the Transformer, advanced pretraining architectures, and large language models
(LLMs). Our results show (1) the challenge and necessity of our evaluation
benchmark, and (2) that fine-grained pretraining on literary document-level
training data consistently improves the modeling of discourse information. We
will release the datasets, pretrained models, and leaderboard, which we hope
can significantly facilitate research in this field:
https://github.com/longyuewangdcu/Disco-Bench.
Comment: Zhaopeng Tu is the corresponding author
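As a sketch of how a contrastive discourse diagnostic of this kind is typically scored, the snippet below ranks a cohesive continuation against a distractor by language-model loss. The diagnostic item, the GPT-2 stand-in scorer, and the pass criterion are all assumptions for illustration; Disco-Bench's actual testsets and protocol live in the linked repository.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Stand-in scorer; any causal LM under evaluation could be used here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def nll(text: str) -> float:
    """Average negative log-likelihood of a passage under the LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

def prefers_cohesive(context: str, cohesive: str, distractor: str) -> bool:
    """Contrastive item: the model passes if it assigns lower loss
    to the discourse-cohesive continuation."""
    return nll(context + " " + cohesive) < nll(context + " " + distractor)

# Invented item in the style of a cohesion diagnostic (pronoun reference).
context = "Anna lost her umbrella on the train."
cohesive = "She bought a new one the next morning."
distractor = "He repaired the engine the next morning."
print(prefers_cohesive(context, cohesive, distractor))
```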