20 research outputs found

    KOSAC: A Full-fledged Korean Sentiment Analysis Corpus

    Statistical models for case ambiguity resolution in Korean

    Overview of the SPMRL 2013 shared task: cross-framework evaluation of parsing morphologically rich languages

    This paper reports on the first shared task on statistical parsing of morphologically rich languages (MRLs). The task features data sets from nine languages, each available both in constituency and dependency annotation. We report on the preparation of the data sets, on the proposed parsing scenarios, and on the evaluation metrics for parsing MRLs given different representation types. We present and analyze parsing results obtained by the task participants, and then provide an analysis and comparison of the parsers across languages and frameworks, reported for gold input as well as more realistic parsing scenarios.
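    Results in the dependency track of such a task are typically reported as attachment scores. As a minimal illustrative sketch only (not the task's official evaluation tooling; the list-of-(head, label)-pairs format is an assumption for the example), unlabeled and labeled attachment scores (UAS/LAS) can be computed as follows:

# Minimal sketch: unlabeled and labeled attachment scores (UAS/LAS).
# Illustrative only; the SPMRL shared task used its own evaluation tools,
# and the (head, label) list format here is assumed for the example.
def attachment_scores(gold_sents, pred_sents):
    total = uas_correct = las_correct = 0
    for gold, pred in zip(gold_sents, pred_sents):
        assert len(gold) == len(pred), "token counts must match"
        for (g_head, g_label), (p_head, p_label) in zip(gold, pred):
            total += 1
            if g_head == p_head:
                uas_correct += 1
                if g_label == p_label:
                    las_correct += 1
    return uas_correct / total, las_correct / total

# One three-token sentence; heads are token indices, 0 marks the root.
gold = [[(2, "nsubj"), (0, "root"), (2, "obj")]]
pred = [[(2, "nsubj"), (0, "root"), (2, "iobj")]]
uas, las = attachment_scores(gold, pred)
print(f"UAS={uas:.2f} LAS={las:.2f}")  # UAS=1.00 LAS=0.67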

    A syntactic component for Vietnamese language processing

    Pop culturally motivated lexical borrowing: Use of Korean in an English-majority fan forum

    In this thesis, I examine the use of pop-culturally motivated loanwords from Korean in an English-language online community for fans of Korean pop music. The data are drawn from the subreddit /r/kpop and are analyzed quantitatively and qualitatively. Frequency analysis shows that the most frequently borrowed words are kinship terms, but their use in English does not correspond to their use in Korean. The borrowing takes place across two writing systems, so Korean written in hangeul is romanized to fit the Latin script, yet it follows no established romanization system. The romanized loanwords are integrated into the English-language frame and are used productively and creatively to produce linguistic innovations, both as loanwords and as hybrid constructions. The use of the loanwords corresponds only partly to their use in Korean, and in several cases this usage is unique to r/kpop as a marker of group identity. Linguistic play, satire, and sarcasm are used to distance the users from other K-pop fans outside r/kpop. The thesis is intended as an exploratory study of a contemporary phenomenon that has not previously been examined from a linguistic perspective. (Linguistics master's thesis)

    Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision

    Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text, which has been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties.

    The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and the application thereof to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, which is applied to target language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision; a small sketch of this idea follows below.

    In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared both to unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the hitherto best published results for a wide range of target languages, in the setting where no annotated training data is available in the target language.
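    The token- and type-level constraint idea mentioned above can be illustrated as follows: a tag dictionary (type-level supervision) restricts which tags each word type may take, and decoding searches only within those constraints. This is a toy sketch of the general idea under assumed data and hand-set scores, not the dissertation's actual discriminative latent-variable model.

# Toy sketch of type-constrained sequence labeling: a tag dictionary
# (type-level supervision) limits which tags each word type may receive,
# and decoding searches only tag sequences respecting those constraints.
# The tiny dictionary and hand-set scores are assumptions for the example.
import math
from itertools import product

TAGS = ["NOUN", "VERB", "DET"]
TAG_DICT = {"the": {"DET"}, "dog": {"NOUN"}, "barks": {"NOUN", "VERB"}}

def score(prev_tag, tag, word):
    """Toy local score; a real model would use learned transition/emission weights."""
    s = 0.5 if tag in TAG_DICT.get(word, set()) else 0.0
    if tag == "NOUN" and prev_tag == "DET":
        s += 1.0
    if tag == "VERB" and prev_tag == "NOUN":
        s += 1.0
    return s

def constrained_decode(words):
    """Exhaustive search over the tag sequences allowed by the constraints."""
    allowed = [sorted(TAG_DICT.get(w, set(TAGS))) for w in words]
    best_seq, best_score = None, -math.inf
    for seq in product(*allowed):
        total = sum(score(seq[i - 1] if i else "<s>", seq[i], words[i])
                    for i in range(len(words)))
        if total > best_score:
            best_seq, best_score = list(seq), total
    return best_seq

print(constrained_decode(["the", "dog", "barks"]))  # ['DET', 'NOUN', 'VERB']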

    A Study on the Construction and Extension of Korean Pre-trained Language Models: Focusing on Sentiment Analysis

    Doctoral dissertation, Seoul National University Graduate School, Department of Linguistics, College of Humanities, February 2021 (advisor: Hyopil Shin).

    Recently, as interest in the Bidirectional Encoder Representations from Transformers (BERT) model has increased, many studies based on it have been actively conducted in natural language processing. Such sentence-level contextualized embedding models are generally known to capture and model lexical, syntactic, and semantic information in sentences during training. Models of this kind, including ELMo, GPT, and BERT, therefore function as universal models that can perform a wide range of NLP tasks impressively. This study proposes a monolingual BERT model trained on Korean texts. The first released BERT model that could handle Korean was Google Research's multilingual BERT (M-BERT), trained with data and a vocabulary covering 104 languages, including Korean and English, so that a single model can handle text in any of those languages. However, despite the advantages of multilingualism, this model does not fully reflect each language's characteristics, so its text processing performance in each language is lower than that of a monolingual model. To mitigate these shortcomings, we built monolingual models using training data and a vocabulary organized to better capture the linguistic knowledge in Korean texts. A model named KR-BERT was built using training data composed of Korean Wikipedia text and news articles, and was released through GitHub so that it can be used for processing Korean texts. Additionally, we trained a KR-BERT-MEDIUM model on expanded data, adding comments and legal texts to the training data of KR-BERT. Each model uses as its vocabulary a list of tokens composed mainly of Hangul characters, organized with the WordPiece algorithm on the corresponding training data. These models achieved competitive performance on various Korean NLP tasks such as Named Entity Recognition, Question Answering, Semantic Textual Similarity, and Sentiment Analysis.

    In addition, we added sentiment features to the BERT model to specialize it for sentiment analysis. We constructed a sentiment-combined model whose features consist of polarity and intensity values assigned to each token in the training data, following the annotations of the Korean Sentiment Analysis Corpus (KOSAC). The sentiment features assigned to each token form polarity and intensity embeddings, which are added to the basic BERT input embeddings, and the sentiment-combined model is constructed by training the BERT model with these embeddings. We trained a model named KR-BERT-KOSAC that contains the sentiment features while maintaining the same training data, vocabulary, and model configuration as KR-BERT, and distributed it through GitHub. We then analyzed the effect of the sentiment features by comparing this model with KR-BERT on language modeling during training and on sentiment analysis tasks, and we determined how much each of the polarity and intensity features contributes to model performance by training separate models that use each feature on its own. Using both sentiment features yielded some improvement in language modeling and sentiment analysis performance compared to models with other feature compositions. Here, the sentiment analysis tasks included binary positivity classification of movie reviews and hate speech detection on offensive comments.

    On the other hand, pre-training such embedding models requires a great deal of training time and hardware resources. This study therefore proposes a simple model fusing method that requires relatively little time. We trained a smaller-scale sentiment-combined model, with fewer encoder layers, fewer attention heads, and smaller hidden sizes, for only a few steps, and combined it with an existing pre-trained BERT model. Since such pre-trained models are expected to function universally across various NLP problems thanks to good language modeling, this combination allows two models with different advantages to interact and attain better text processing capabilities. Experiments on sentiment analysis problems confirmed that combining the two models is efficient in training time and hardware usage, while producing more accurate predictions than single models that do not include sentiment features.
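    As a rough illustration of how such sentiment features can be infused into the input representation, the sketch below adds per-token polarity and intensity embedding lookups to the usual token, position, and segment embeddings. It is a minimal sketch of the idea only, not the released KR-BERT-KOSAC code; the hidden size, label inventories, and dropout value are assumptions.

# Minimal sketch (PyTorch): adding per-token polarity and intensity embeddings
# to standard BERT-style input embeddings. Illustrative only; not the released
# KR-BERT-KOSAC implementation. Sizes and label inventories are assumed.
import torch
import torch.nn as nn

class SentimentCombinedEmbeddings(nn.Module):
    def __init__(self, vocab_size, hidden_size=768, max_len=512,
                 num_polarity=5, num_intensity=5):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden_size)
        self.position = nn.Embedding(max_len, hidden_size)
        self.segment = nn.Embedding(2, hidden_size)
        # Extra lookup tables for prior sentiment values per token
        # (e.g. polarity/intensity labels derived from a corpus such as KOSAC).
        self.polarity = nn.Embedding(num_polarity, hidden_size)
        self.intensity = nn.Embedding(num_intensity, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)
        self.dropout = nn.Dropout(0.1)

    def forward(self, input_ids, segment_ids, polarity_ids, intensity_ids):
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        positions = positions.unsqueeze(0).expand_as(input_ids)
        # Sum the standard embeddings with the two sentiment embeddings,
        # then normalize, at the point where BERT normally sums its inputs.
        x = (self.token(input_ids) + self.position(positions)
             + self.segment(segment_ids)
             + self.polarity(polarity_ids) + self.intensity(intensity_ids))
        return self.dropout(self.norm(x))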

    Natural Language Processing: Emerging Neural Approaches and Applications

    This Special Issue highlights the most recent research carried out in the NLP field and discusses related open issues, with a particular focus both on emerging approaches for language learning, understanding, production, and grounding, acquired interactively or autonomously from data in cognitive and neural systems, and on their potential or real applications in different domains.