
    A neural joint model for Vietnamese word segmentation, POS tagging and dependency parsing

    We propose the first multi-task learning model for joint Vietnamese word segmentation, part-of-speech (POS) tagging and dependency parsing. In particular, our model extends the BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) with BiLSTM-CRF-based neural layers (Huang et al., 2015) for word segmentation and POS tagging. Experimental results on Vietnamese benchmark datasets show that our joint model obtains state-of-the-art or competitive performance.
    Comment: In Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association (ALTA 2019)

    An improved neural network model for joint POS tagging and dependency parsing

    We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component that produces automatically predicted POS tags for the parser. On the benchmark English Penn Treebank, our model obtains strong UAS and LAS scores of 94.51% and 92.87%, respectively, a 1.5+% absolute improvement over the BIST graph-based parser, while also obtaining a state-of-the-art POS tagging accuracy of 97.97%. Furthermore, experimental results on parsing 61 "big" Universal Dependencies treebanks from raw text show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) by 0.8% in average POS tagging score and 3.6% in average LAS score. In addition, with our model we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications. Our code is available together with all pre-trained models at: https://github.com/datquocnguyen/jPTDP
    Comment: 11 pages; In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, to appear
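The tag-then-parse pipeline described in the two abstracts above can be illustrated with a toy, non-neural sketch: a tagging component first predicts a POS tag per word, and a graph-based parsing component then picks, for each word, the highest-scoring head. The lookup tables `tag_lexicon` and `arc_scores` below are illustrative stand-ins for the BiLSTM tagger and the neural arc scorer of the actual models, not part of any paper's implementation.

```python
def tag_then_parse(words, tag_lexicon, arc_scores):
    """Schematic two-stage pipeline: (1) predict a POS tag per word,
    (2) for each word, greedily pick the highest-scoring head
    (0 = artificial ROOT) using tag-pair arc scores."""
    tags = [tag_lexicon.get(w, "NOUN") for w in words]      # stage 1: tagging
    heads = []
    for i, dep_tag in enumerate(tags):                      # stage 2: parsing
        best_head = 0
        best_score = arc_scores.get(("ROOT", dep_tag), 0.0)
        for j, head_tag in enumerate(tags):
            if j == i:
                continue
            score = arc_scores.get((head_tag, dep_tag), 0.0)
            if score > best_score:
                best_head, best_score = j + 1, score        # 1-based head index
        heads.append(best_head)
    return tags, heads

# Toy example: in "she eats fish", the verb heads both arguments.
lexicon = {"she": "PRON", "eats": "VERB", "fish": "NOUN"}
scores = {("ROOT", "VERB"): 2.0, ("VERB", "PRON"): 1.5, ("VERB", "NOUN"): 1.2}
tags, heads = tag_then_parse(["she", "eats", "fish"], lexicon, scores)
# tags  -> ["PRON", "VERB", "NOUN"]
# heads -> [2, 0, 2]  (both "she" and "fish" attach to "eats")
```

In the real models, joint training lets the parsing loss also shape the tagger's BiLSTM representations; this greedy sketch only shows the data flow between the two components.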

    The Cult of Ho Chi Minh: Commemoration and Contestation

    Ho Chi Minh, the “father of modern Viet Nam,” remains a powerful figure in contemporary Vietnamese politics and culture. Since his death in 1969, the Vietnamese Communist Party has constructed a state cult surrounding his image. The construction of the Ho Chi Minh memorial complex in Hanoi, the propagation of Ho Chi Minh’s teachings, and the state commemorative rituals for Uncle Ho contribute to his continuous presence. The state cult posits Ho Chi Minh not only as the “father figure” to whom Vietnamese people pay respect and tribute, but also as the moral compass by which the people orient themselves socially and culturally. The state cult, however, is continuously contested. On the one hand, meanings attributed to the state commemoration of Ho Chi Minh are changing temporally and regionally. On the other hand, the development of various religious cults of Uncle Ho challenges the Party’s hegemonic interpretation of the image of Ho Chi Minh. Drawing from historical research and short-term fieldwork, this paper discusses various modes of commemorative rituals dedicated to Ho Chi Minh, and explores how they contribute to the cult of Ho Chi Minh as a contested field of knowledge, where political, cultural, and personal meanings are constantly negotiated. Particular attention is paid to how Vietnamese people, both in Vietnam and abroad, perform, construct, and challenge the discourses surrounding the cult, as well as to how the Party and the state respond to these voices of discordance.

    A Mixture Model for Learning Multi-Sense Word Embeddings

    Word embeddings are now a standard technique for inducing meaning representations for words. To obtain good representations, it is important to take into account the different senses of a word. In this paper, we propose a mixture model for learning multi-sense word embeddings. Our model generalizes previous work by allowing different weights to be induced for the different senses of a word. The experimental results show that our model outperforms previous models on standard evaluation tasks.
    Comment: *SEM 201
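The mixture idea in the abstract above can be sketched minimally: each word keeps several sense vectors, and a context-dependent embedding is formed as a softmax-weighted average of those senses. The function name, vectors, and weighting scheme below are illustrative assumptions, not the paper's actual model or training procedure.

```python
import math

def sense_mixture(sense_vectors, context_vector):
    """Combine a word's sense vectors into one embedding, weighting each
    sense by its softmax-normalized dot-product similarity to the context."""
    scores = [sum(s_i * c_i for s_i, c_i in zip(s, context_vector))
              for s in sense_vectors]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(context_vector)
    mixed = [sum(w * s[d] for w, s in zip(weights, sense_vectors))
             for d in range(dim)]
    return weights, mixed

# Toy example: a word with two senses; the context lies close to sense 0,
# so sense 0 receives the larger mixture weight.
senses = [[1.0, 0.0], [0.0, 1.0]]
context = [0.9, 0.1]
weights, embedding = sense_mixture(senses, context)
```

The key point the sketch illustrates is that the sense weights are induced per context rather than fixed, which is what lets a mixture model assign different weights to different senses of the same word.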