
    Unsupervised Word Segmentation Without Dictionary

    This prototype system demonstrates a novel method of word segmentation based on corpus statistics. Since the central technique is unsupervised training on a large corpus, we refer to this approach as unsupervised word segmentation. The approach is general in scope and can be applied to both Mandarin Chinese and Taiwanese. In this prototype, we illustrate its use in segmenting the Taiwanese Bible written in Hanzi and Romanized characters. Basically, it involves:
    - Computing the mutual information (MI) between Hanzi and Romanized characters A and B; if A and B have a relatively high MI, we lean toward treating AB as a word.
    - Using a greedy method to form words of 2 to 4 characters in the input sentences.
    - Building an N-gram model from the results of first-round word segmentation.
    - Segmenting words based on the N-gram model.
    - Iterating between the above two steps: building the N-gram model and segmenting words.
    Computing mutual information: the use of mutual information is motivated by observations in previous work by Hanks and Church (1990) and Sproat and Shih (1990). If A and B have a relatively high MI that is over a certain threshold, we prefer to identify AB as a word over thos
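    The abstract outlines the pipeline but not an implementation. The following is a minimal, plain-Python sketch of those steps under stated assumptions: sentences are strings of characters, the MI threshold of 3.0 is an arbitrary illustrative value, and the iteration uses a smoothed word unigram model as a simplified stand-in for the abstract's N-gram model. All function names (char_pair_mi, greedy_segment, resegment, iterate_segmentation) are hypothetical, not from the original system.

        import math
        from collections import Counter

        def char_pair_mi(sentences):
            """Pointwise mutual information for adjacent character pairs in the corpus."""
            unigrams, bigrams = Counter(), Counter()
            for sent in sentences:
                chars = list(sent)
                unigrams.update(chars)
                bigrams.update(zip(chars, chars[1:]))
            n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
            mi = {}
            for (a, b), count in bigrams.items():
                p_ab = count / n_bi
                p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
                mi[(a, b)] = math.log2(p_ab / (p_a * p_b))
            return mi

        def greedy_segment(sentence, mi, threshold=3.0, max_len=4):
            """Greedily merge adjacent characters whose pairwise MI exceeds the threshold,
            forming words of 2 to 4 characters; unmerged characters stay single."""
            chars, words, i = list(sentence), [], 0
            while i < len(chars):
                j = i + 1
                # extend the current word while the next adjacent pair is cohesive enough
                while (j < len(chars) and j - i < max_len
                       and mi.get((chars[j - 1], chars[j]), float("-inf")) > threshold):
                    j += 1
                words.append("".join(chars[i:j]))
                i = j
            return words

        def resegment(sentence, word_logprob, max_len=4):
            """Viterbi-style re-segmentation under a word model estimated from the
            previous round's segmentation."""
            n = len(sentence)
            best, back = [float("-inf")] * (n + 1), [0] * (n + 1)
            best[0] = 0.0
            for t in range(1, n + 1):
                for k in range(1, min(max_len, t) + 1):
                    score = best[t - k] + word_logprob(sentence[t - k:t])
                    if score > best[t]:
                        best[t], back[t] = score, t - k
            words, t = [], n
            while t > 0:
                words.append(sentence[back[t]:t])
                t = back[t]
            return list(reversed(words))

        def iterate_segmentation(sentences, rounds=5, mi_threshold=3.0, max_len=4):
            """Full loop: MI + greedy first round, then alternate between estimating
            a (here, unigram) word model and re-segmenting the corpus."""
            mi = char_pair_mi(sentences)
            seg = [greedy_segment(s, mi, mi_threshold, max_len) for s in sentences]
            for _ in range(rounds):
                counts = Counter(w for sent in seg for w in sent)
                total, vocab = sum(counts.values()), len(counts) + 1
                def word_logprob(w, counts=counts, total=total, vocab=vocab):
                    # add-one smoothing so unseen candidate words are not impossible
                    return math.log((counts.get(w, 0) + 1) / (total + vocab))
                seg = [resegment(s, word_logprob, max_len) for s in sentences]
            return seg

    The greedy output plays the role of the "first-round word segmentation", and each later round re-estimates the word model from the current segmentation before re-segmenting the corpus, as the last two steps of the abstract describe.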

    Unsupervised Neural Word Segmentation for Chinese via Segmental Language Modeling

    Previous traditional approaches to unsupervised Chinese word segmentation (CWS) can be roughly classified into discriminative and generative models. The former uses carefully designed goodness measures to score candidate segmentations, while the latter focuses on finding the segmentation with the highest generative probability. However, while there is a trivial way to extend the discriminative models into a neural version by using neural language models, extending the generative ones is non-trivial. In this paper, we propose segmental language models (SLMs) for CWS. Our approach explicitly focuses on the segmental nature of Chinese while preserving several properties of language models. In SLMs, a context encoder encodes the previous context and a segment decoder generates each segment incrementally. As far as we know, we are the first to propose a neural model for unsupervised CWS, and we achieve performance competitive with state-of-the-art statistical models on four different datasets from the SIGHAN 2005 bakeoff.
    Comment: To appear in EMNLP 2018.
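    The abstract names the architecture (context encoder plus segment decoder) but not the training computation. Below is a minimal sketch of the forward dynamic program such a segmental language model can use to marginalize over all segmentations of a sentence; the seg_log_prob callable stands in for the encoder/decoder pair, and the names and the maximum segment length are assumptions for illustration, not details from the paper.

        import math
        from typing import Callable, List

        def logsumexp(xs: List[float]) -> float:
            """Numerically stable log(sum(exp(x))) over a non-empty list."""
            m = max(xs)
            return m + math.log(sum(math.exp(x - m) for x in xs))

        def sentence_log_likelihood(chars: List[str],
                                    seg_log_prob: Callable[[List[str], List[str]], float],
                                    max_seg_len: int = 4) -> float:
            """Forward recursion over segmentations:
            alpha[t] = logsumexp_k( alpha[t-k] + log p(segment chars[t-k:t] | context chars[:t-k]) )
            seg_log_prob stands in for the context encoder + segment decoder."""
            n = len(chars)
            alpha = [float("-inf")] * (n + 1)
            alpha[0] = 0.0
            for t in range(1, n + 1):
                candidates = []
                for k in range(1, min(max_seg_len, t) + 1):
                    context, segment = chars[:t - k], chars[t - k:t]
                    candidates.append(alpha[t - k] + seg_log_prob(context, segment))
                alpha[t] = logsumexp(candidates)
            return alpha[n]

    Training would maximize this marginal likelihood over the corpus; replacing logsumexp with max in the same recursion (plus backpointers) recovers the most probable segmentation at decoding time.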