2,379 research outputs found
Chinese named entity recognition using lexicalized HMMs
This paper presents a lexicalized HMM-based approach to Chinese named entity recognition (NER). To tackle the problem of unknown words, we unify unknown word identification and NER as a single tagging task on a sequence of known words. To do this, we first employ a known-word bigram-based model to segment a sentence into a sequence of known words, and then apply uniformly lexicalized HMMs to assign each known word a proper hybrid tag that indicates both its pattern in forming an entity and the category of the formed entity. Our system is thus able to integrate both internal formation patterns and surrounding contextual clues for NER within the HMM framework. As a result, the performance of the system can be improved without losing efficiency in training and tagging. We have tested our system on different public corpora. The results show that lexicalized HMMs can substantially improve NER performance over standard HMMs. They also indicate that character-based tagging (i.e., tagging based on pure single-character words) is comparable to, and can even outperform, the corresponding known-word-based tagging when a lexicalization technique is applied.
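The hybrid-tagging idea above can be sketched with a toy first-order HMM decoded by Viterbi, where each tag combines a position pattern with an entity category. The tag inventory, probabilities, and example sentence below are illustrative assumptions, not the paper's actual model or parameters.

```python
# Toy sketch of hybrid tagging with a first-order HMM and Viterbi decoding.
# Tags pair a position pattern (B/E/O) with an entity category, e.g.
# "B-PER" = begins a person name. All probabilities are invented toy values.

def viterbi(words, tags, start, trans, emit):
    """Return the most probable tag sequence under a first-order HMM."""
    # Probability of the best path ending in each tag at position 0.
    V = [{t: start.get(t, 1e-9) * emit.get((t, words[0]), 1e-9) for t in tags}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: V[-1][p] * trans.get((p, t), 1e-9))
            col[t] = V[-1][best_prev] * trans.get((best_prev, t), 1e-9) * emit.get((t, w), 1e-9)
            ptr[t] = best_prev
        V.append(col)
        back.append(ptr)
    # Trace back the best path from the most probable final tag.
    last = max(tags, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = ["B-PER", "E-PER", "O"]
words = ["张", "三", "说"]                     # "Zhang San said"
start = {"B-PER": 0.6, "O": 0.4}
trans = {("B-PER", "E-PER"): 0.9, ("E-PER", "O"): 0.8, ("O", "O"): 0.7}
emit  = {("B-PER", "张"): 0.5, ("E-PER", "三"): 0.5, ("O", "说"): 0.6}
print(viterbi(words, tags, start, trans, emit))  # ['B-PER', 'E-PER', 'O']
```

Decoding recovers both the entity boundary (B…E pattern) and its category (PER) in a single pass, which is the appeal of the hybrid tag scheme.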
Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling
Boundary information is critical for various Chinese language processing tasks, such as word segmentation, part-of-speech tagging, and named entity recognition. Previous studies usually resorted to a high-quality external lexicon, whose items offer explicit boundary information. However, ensuring the quality of such a lexicon always requires great human effort, a cost that has generally been ignored. In this work, we instead suggest unsupervised statistical boundary information, and propose an architecture to encode this information directly into pre-trained language models, resulting in Boundary-Aware BERT (BABERT). We apply BABERT to feature induction for Chinese sequence labeling tasks. Experimental results on ten Chinese sequence labeling benchmarks demonstrate that BABERT provides consistent improvements on all datasets. In addition, our method complements previous supervised lexicon exploration: further improvements can be achieved when it is integrated with external lexicon information.
Comment: 12 pages, 2 figures, 7 tables, EMNLP 202
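One classic unsupervised boundary statistic of the kind the abstract refers to is right branching entropy: a string is more likely to end at a word boundary when many different characters can follow it. The counts below are invented toy values for illustration, not BABERT's actual statistics.

```python
# Illustrative right branching entropy as an unsupervised boundary signal.
# High entropy over the characters that follow an n-gram suggests a word
# boundary; near-deterministic continuation suggests we are mid-word.
import math
from collections import Counter

def branching_entropy(successors: Counter) -> float:
    """Entropy (bits) of the distribution of characters following an n-gram."""
    total = sum(successors.values())
    return -sum((c / total) * math.log2(c / total) for c in successors.values())

# After a complete word, many different characters can follow (high entropy);
# mid-word, the continuation is nearly fixed (low entropy). Toy counts:
after_word = Counter({"的": 4, "在": 3, "是": 2, "和": 1})
mid_word   = Counter({"京": 9, "极": 1})
print(branching_entropy(after_word) > branching_entropy(mid_word))  # True
```

Statistics like this can be computed from raw text alone, which is why they can substitute for a hand-curated lexicon as a source of boundary information.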
Which Is Essential for Chinese Word Segmentation: Character versus Word
PACLIC 20 / Wuhan, China / 1-3 November, 200
Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization
Existing methods for CWS usually rely on a large number of labeled sentences to train word segmentation models, which are expensive and time-consuming to annotate. Fortunately, unlabeled data is usually easy to collect, and many high-quality Chinese lexicons are available off the shelf; both can provide useful information for CWS. In this paper, we propose a neural approach to Chinese word segmentation that can exploit both a lexicon and unlabeled data. Our approach is based on a variant of the posterior regularization algorithm: the unlabeled data and the lexicon are incorporated into model training as indirect supervision by regularizing the prediction space of CWS models. Extensive experiments on multiple benchmark datasets in both in-domain and cross-domain scenarios validate the effectiveness of our approach.
Comment: 7 pages, 11 figures, accepted by the 2019 World Wide Web Conference (WWW '19)
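A minimal sketch of this flavor of indirect supervision, assuming a BMES tag scheme and toy numbers: on unlabeled text, the model's tag posterior is nudged toward a constrained posterior in which positions matching a lexicon entry keep only consistent tags. The paper's actual constraint set and loss differ in detail.

```python
# Hypothetical posterior-regularization-style penalty on unlabeled text.
# A lexicon match restricts which tags are admissible at a position; the
# model posterior is projected onto that constraint and the KL divergence
# between the two serves as an extra (indirect-supervision) loss term.
import math

TAGS = ["B", "M", "E", "S"]          # word-begin / middle / end / single-char

def kl(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def constrain(posterior, allowed):
    """Zero out disallowed tags, then renormalize (the projection step)."""
    masked = [p if tag in allowed else 0.0 for tag, p in zip(TAGS, posterior)]
    z = sum(masked)
    return [m / z for m in masked]

model_post = [0.4, 0.3, 0.2, 0.1]    # model's posterior at one character
allowed = {"B", "S"}                 # lexicon says a word starts here
target = constrain(model_post, allowed)
penalty = kl(target, model_post)     # added to the training loss
print(target)  # [0.8, 0.0, 0.0, 0.2]
```

Minimizing the penalty pulls probability mass away from tags the lexicon rules out, without ever needing a gold segmentation for the sentence.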
Natural Language Processing Using Neighbour Entropy-based Segmentation
In natural language processing (NLP) of Chinese hazard text collected during hazard identification, Chinese word segmentation (CWS) is the first step toward extracting meaningful information from such semi-structured Chinese texts. This paper proposes a new neighbor entropy-based segmentation (NES) model for CWS. The model considers the segmentation benefits of neighbor entropies, adopting the concept of "neighbor" from optimization research. It is defined by the benefit ratio of text segmentation, capturing the benefits and losses of combining segmentation units with more information than other popular statistical models. In the experiments, together with the maximum-based segmentation algorithm, the NES model achieves 99.3% precision, 98.7% recall, and a 99.0% F-measure for text segmentation; these figures are higher than those of existing tools based on seven other popular statistical models. The results show that the NES model is a valid CWS model, especially for segmentation tasks that require longer character units. The text corpus used comes from the Beijing Municipal Administration of Work Safety and was recorded in the fourth quarter of 2018.
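One simple way to operationalize a neighbor-based merge benefit is shown below: two adjacent units are combined into one segmentation unit when their separate neighbor entropies are high relative to the entropy of the merged unit's neighbors. The counts are toy values and the NES paper defines its actual benefit ratio differently; this is only a sketch of the general idea.

```python
# Toy neighbor-entropy merge test for candidate segmentation units.
# Merging is favored when each unit alone has diverse neighbors but the
# merged unit's neighbor distribution is concentrated (low entropy).
import math
from collections import Counter

def entropy(neigh: Counter) -> float:
    """Entropy (bits) of a neighbor-count distribution."""
    n = sum(neigh.values())
    return -sum((c / n) * math.log2(c / n) for c in neigh.values())

def merge_benefit(left: Counter, right: Counter, merged: Counter) -> float:
    """Ratio of separate-unit entropy to merged-unit entropy (> 1 favors merging)."""
    return (entropy(left) + entropy(right)) / max(entropy(merged), 1e-9)

# E.g. "北" and "京" each see varied neighbors alone, but "北京" as a unit
# is almost always followed by the same characters, so merging wins.
left   = Counter({"a": 2, "b": 2, "c": 2})
right  = Counter({"x": 3, "y": 3})
merged = Counter({"的": 9, "市": 1})
print(merge_benefit(left, right, merged) > 1.0)  # True
```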